Most textbook treatments of Ethernet would concentrate on Thicknet coax, because that is the wiring arrangement used when Xerox invented the LAN. Today this is still used for medium-long distances where medium levels of reliability are needed. Fiber goes farther and has greater reliability, but a higher cost. To connect a number of workstations within the same room, a light duty coax cable called "Thinnet" is commonly used. These other media reflect an older view of workstation computers in a laboratory environment.
However, the PC and Macintosh have changed the geography of networking. Computers are now located on desktops, in dorm rooms, and at home. Telephone wire is the clear choice (where possible) for the last hop from basement to desktop.
This document is intended to explain the basic elements of the Ethernet to a PC user. It assumes that someone else will probably be purchasing the central equipment and installing the wire.
Obviously, no technology could become an international standard for all sorts of equipment if the rules were controlled by a single US corporation. The IEEE was assigned the task of developing formal international standards for all Local Area Network technology. It formed the "802" committee to look at Ethernet, Token Ring, Fiber Optic, and other LAN technology. The objective of the project was not just to standardize each LAN individually, but also to establish rules that would be global to all types of LANs so that data could easily move from Ethernet to Token Ring or Fiber Optics.
This larger view created conflicts with the existing practice under the old Xerox DIX system. The IEEE was careful to separate the new and old rules. It recognized that there would be a period when old DIX messages and new IEEE 802 messages would have to coexist on the same LAN. It published a set of standards, of which the most important here are 802.3 (the Ethernet hardware standard, which replaces the DIX type field with a length field) and 802.2 (a new header that follows the 802.3 header).
However, the 802.2 standard would require a change to the network architecture of all existing Ethernet users. Apple had to change its Ethertalk, and did so when converting from Phase 1 to Phase 2 Appletalk. DEC had to change its DECNET. Novell added 802 as an option to its IPX, but it supports both DIX and 802 message formats at the same time.
The TCP/IP protocol used by the Internet refused to change. Internet standards are managed by the IETF group, and they decided to stick with the old DIX message format indefinitely. This produced a deadlock between two standards organizations that has not been resolved.
IBM waited until the 802 committee released its standards, then rigorously implemented the 802 rules for everything except TCP/IP where the IETF rules take precedence. This means that NETBEUI (the format for NETBIOS on the LAN) and SNA obey the 802 conventions.
So "Ethernet" suffers from too many standards. The old DIX rules for message format persist for some uses (Internet, DECNET, some Novell). The new 802 rules apply to other traffic (SNA, NETBEUI). The most pressing problem is to make sure that Novell clients and servers are configured to use the same frame format.
An Ethernet station sends data at a rate of 10 megabits per second. That rate allows 100 nanoseconds per bit. Light and electricity travel about one foot in a nanosecond. Therefore, after the electric signal for the first bit has traveled about 100 feet down the wire, the station has begun to send the second bit. However, an Ethernet cable can run for hundreds of feet. If two stations are located, say, 250 feet apart on the same cable, and both begin transmitting at the same time, then they will be in the middle of the third bit before the signal from each reaches the other station.
This explains the need for the "Collision Detect" part. Two stations can begin to send data at the same time, and their signals will "collide" nanoseconds later. When such a collision occurs, the two stations stop transmitting, "back off", and try again later after a randomly chosen delay period.
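The timing argument above can be checked with a few lines of arithmetic, using the article's own round figures (100 nanoseconds per bit, roughly one foot of travel per nanosecond). The function name is illustrative:

```python
# Back-of-envelope collision arithmetic for 10 Mbps Ethernet.
# Round figures from the text: 100 ns per bit, and signal
# propagation of roughly 1 foot per nanosecond.

NS_PER_BIT = 100      # 10 megabits/second -> 100 ns per bit
FEET_PER_NS = 1.0     # approximate speed of the signal in the cable

def bits_sent_before_signal_arrives(distance_feet):
    """How many bits a station has already sent by the time its
    first bit reaches a station `distance_feet` away."""
    travel_ns = distance_feet / FEET_PER_NS
    return travel_ns / NS_PER_BIT

# Two stations 250 feet apart that start at the same moment are each
# 2.5 bits into their transmission when the other's signal arrives.
print(bits_sent_before_signal_arrives(250))   # -> 2.5
```

This is why a station cannot simply listen before transmitting and be safe: by the time its first bit reaches a distant station, both may already be several bits into a doomed transmission.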
While an Ethernet can be built using one common signal wire, such an arrangement is not flexible enough to wire most buildings. Unlike an ordinary telephone circuit, Ethernet wire cannot be just spliced together, connecting one copper wire to another. Ethernet requires a repeater. A repeater is a simple station that is connected to two wires. Any data that it receives on one wire it repeats bit-for-bit on the other wire. When collisions occur, it repeats the collision as well.
In common practice, repeaters are used to convert the Ethernet signal from one type of wire to another. In particular, when the connection to the desktop uses ordinary telephone wire, the hub back in the telephone closet contains a repeater for every phone circuit. Any data coming down any phone line is copied onto the main Ethernet coax cable, and any data from the main cable is duplicated and transmitted down every phone line. The repeaters in the hub electrically isolate each phone circuit, which is necessary if a 10 megabit signal is going to be carried 300 feet on ordinary wire.
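A repeater's behavior can be modeled as a toy sketch: anything heard on one port is copied bit-for-bit onto every other port. The class below is purely illustrative (Python lists stand in for wires); it is not real driver or firmware code:

```python
# A toy model of a multi-port repeater (hub): data arriving on one
# port is copied to every other port, never echoed back to the sender.

class Repeater:
    def __init__(self, num_ports):
        # each "port" is just a list standing in for a wire
        self.ports = [[] for _ in range(num_ports)]

    def receive(self, port, data):
        """Data heard on `port` is repeated onto all other ports."""
        for i, wire in enumerate(self.ports):
            if i != port:
                wire.append(data)

hub = Repeater(4)
hub.receive(0, "frame from desktop 0")
print(hub.ports[1])   # -> ['frame from desktop 0']
print(hub.ports[0])   # -> []
```

Note that a real repeater works at the level of individual bits and voltages, which is why it propagates collisions as faithfully as it propagates data.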
Every set of rules is best understood by characterizing its worst case. The worst case for Ethernet starts when a PC at the extreme end of one wire begins sending data. The electric signal passes down the wire through repeaters, and just before it gets to the last station at the other end of the LAN, that station (hearing nothing and thinking that the LAN is idle) begins to transmit its own data. A collision occurs. The second station recognizes this immediately, but the first station will not detect it until the collision signal retraces the first path all the way back through the LAN to its starting point.
Any system based on collision detect must control the time required for the worst round trip through the LAN. As the term "Ethernet" is commonly defined, this round trip is limited to 50 microseconds (millionths of a second). At a signaling speed of 10 million bits per second, this is enough time to transmit 500 bits. At 8 bits per byte, this is slightly less than 64 bytes.
To make sure that the collision is recognized, Ethernet requires that a station must continue transmitting until the 50 microsecond period has ended. If the station has less than 64 bytes of data to send, then it must pad the data by adding zeros at the end.
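The minimum-frame arithmetic and the padding rule can be sketched directly from the numbers above (50 microseconds, 10 megabits per second); the helper function is an illustration, not a real driver routine:

```python
# Minimum-frame arithmetic and zero-padding, using the article's
# round numbers: 50 microsecond round trip at 10 megabits per second.

ROUND_TRIP_US = 50
BITS_PER_US = 10            # 10 Mbps = 10 bits per microsecond
MIN_FRAME_BYTES = 64        # the practical minimum Ethernet enforces

bits_in_round_trip = ROUND_TRIP_US * BITS_PER_US   # 500 bits
print(bits_in_round_trip / 8)                      # -> 62.5 bytes

def pad_frame(data: bytes) -> bytes:
    """Pad short data with zeros so the transmission occupies the
    full collision window."""
    if len(data) < MIN_FRAME_BYTES:
        data = data + b"\x00" * (MIN_FRAME_BYTES - len(data))
    return data

print(len(pad_frame(b"hello")))   # -> 64
```

The padding is why higher level protocols need a length field of their own: the receiver cannot otherwise tell real data from the trailing zeros.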
In simpler days, when Ethernet was dominated by heavy duty coax cable, it was possible to translate the 50 microsecond limit and other electrical restrictions into rules about cable length, number of stations, and number of repeaters. With the addition of new media (such as fiber optic cable) and smarter electronics, it has become difficult to state physical distance limits with precision. However those limits work out, they are ultimately reflections of the constraint on the worst case round trip.
It would be possible to define some other Ethernet-like collision system with a 40 microsecond or 60 microsecond period. Changing the period, the speed, and the minimum message size simply requires a new standard and some alternate equipment. AT&T, for example, once promoted a system called "Starlan" that transmitted data at 1 megabit per second over older phone wire. Many such systems are possible, but the term "Ethernet" is generally reserved for a system that transmits 10 megabits per second with a round trip delay of 50 microseconds.
To extend the LAN farther than the 50 microsecond limit will permit, one needs a bridge or router. These terms are often confused: a bridge operates at the frame level, forwarding Ethernet frames between segments based on their station addresses, while a router operates on the addresses of a higher level protocol (IP, IPX, DECNET) and can connect networks of entirely different types. Either device terminates the collision domain, so the round trip limit applies to each segment separately rather than to the whole network.
Ethernets fail in three common ways. A nail can be driven into the cable, breaking the signal wire. A nail can be driven in, touching the signal wire and shorting it to the external grounded metal shield. Finally, a station on the LAN can break down and start to generate a continuous stream of junk, blocking everyone else from sending.
There is a specialized device, a Time Domain Reflectometer (TDR), that finds problems in an Ethernet LAN. It plugs into any attachment point in the cable and sends out its own voltage pulse. The effect is similar to a sonar "ping." If the cable is broken, then there is no proper terminating resistor. The pulse hits the loose end of the broken cable and bounces back. The test device senses the echo, computes how long the round trip took, and then reports how far away the break in the cable is.
If the Ethernet cable is shorted out, a simple meter would show a short between the signal and shield wires in place of the proper terminating resistance. Again, by sending out a pulse and timing the return, the test device can determine the distance to the problem.
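The distance computation the tester performs is simple: time the echo, then halve the round trip. The sketch below uses the article's rough figure of one foot per nanosecond; real testers apply a cable-specific velocity factor, and the function name is made up:

```python
# The cable tester's core computation: a pulse travels to the fault
# and back, so the fault is half the round-trip distance away.
# Assumes roughly 1 foot of travel per nanosecond, per the text.

FEET_PER_NS = 1.0

def distance_to_fault(echo_round_trip_ns):
    """Distance in feet to a break or short, given the measured
    round-trip time of the echo in nanoseconds."""
    return (echo_round_trip_ns * FEET_PER_NS) / 2

print(distance_to_fault(300))   # -> 150.0 (feet to the fault)
```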
Most of the thinking about Ethernet repair has been based on the original Thicknet media. However, a modern Ethernet installation may not use any of this old coax cable. The connection to the desktop may be based on telephone wire between the PC and a "hub" device. The hubs may stack up in a wiring closet and then be connected to other rooms using fiber optic cable.
Newer generations of "smart" hubs can perform part of the error detection and reporting function. For example, they can detect a problem in the connection to a particular desktop workstation and automatically isolate that unit from the rest of the network.
Ethernet presents a classic tradeoff. The simplest equipment has a very low cost, but requires some technical expertise to locate and repair errors. More sophisticated equipment may be able to do automatic error detection and recovery, but at a higher price.
The PC software (in PROTOCOL.INI or NET.CFG) can be configured to substitute a different address number. When this option is used, it is called a "locally administered address." If the use of this feature is properly controlled, the address can contain information about the building, department, room, machine, wiring circuit, or owner's telephone number. When accurate, such information can speed problem determination.
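An adapter can tell a locally administered address from a burned-in one by a flag bit: the second-lowest bit of the first address byte. The helper below is an illustrative sketch of that check, not part of any real driver; the sample addresses are made up:

```python
# The "locally administered" flag is the second-lowest bit (0x02)
# of the first byte of a 48-bit Ethernet address. Manufacturers'
# burned-in addresses leave it clear; a site override sets it.

def is_locally_administered(mac: bytes) -> bool:
    """True if the address was assigned locally (e.g. via
    PROTOCOL.INI or NET.CFG) rather than burned into the card."""
    return bool(mac[0] & 0x02)

print(is_locally_administered(bytes.fromhex("080020123456")))  # -> False
print(is_locally_administered(bytes.fromhex("020000000001")))  # -> True
```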
The source address field of each frame must contain the unique address (universal or local) assigned to the sending card. The destination field can contain a "multicast" address representing a group of workstations with some common characteristic. A Novell client may broadcast a request to identify all Netware servers on the LAN, while a Microsoft or IBM client machine broadcasts a query to all machines supporting NETBIOS to find a particular server or domain.
In normal operation, an Ethernet adapter will receive only frames with a destination address that matches its unique address, or destination addresses that represent a multicast message. However, most Ethernet adapters can be set into "promiscuous" mode where they receive all frames that appear on the LAN. If this poses a security problem, a new generation of smart hub devices can filter out all frames with private destination addresses belonging to another station.
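The adapter's normal filtering rule can be sketched in a few lines. A destination is multicast when the low-order bit of its first byte is set (the all-ones address is the broadcast case). The function names and sample addresses below are illustrative:

```python
# A sketch of the adapter's receive filter: accept frames addressed
# to this station, multicast/broadcast frames, or (in promiscuous
# mode) everything on the wire.

def is_multicast(dest: bytes) -> bool:
    """The low-order bit of the first byte marks a multicast
    (group) address; all ones is the broadcast address."""
    return bool(dest[0] & 0x01)

def should_accept(adapter_mac, dest, promiscuous=False):
    return promiscuous or dest == adapter_mac or is_multicast(dest)

me = bytes.fromhex("080020123456")
other = bytes.fromhex("080020654321")
broadcast = b"\xff" * 6
print(should_accept(me, broadcast))         # -> True
print(should_accept(me, other))             # -> False
print(should_accept(me, other, True))       # -> True (promiscuous)
```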
There are three common conventions for the format of the remainder of the frame:
Before the development of international standards, Xerox administered the Ethernet conventions. As each vendor developed a protocol, a two byte Type code was assigned by Xerox to identify it. Codes were given out to XNS (Xerox's own protocol), DECNET, IP, and Novell IPX. Since short Ethernet frames must be padded with zeros to a length of 64 bytes, each of these higher level protocols required either a larger minimum message size or an internal length field that can be used to distinguish data from padding.
Type field values of particular note include:
0x0600 XNS (Xerox)
0x0800 IP (the Internet protocol)
The IEEE 802 committee was charged to develop protocols that could operate the same way across all LAN media. To allow collision detect, the 10 megabit Ethernet requires a minimum packet size of 64 bytes. Any shorter message must be padded with zeros. The requirement to pad messages is unique to Ethernet and does not apply to any other LAN media. In order for Ethernet to be interchangeable with other types of LANs, it would have to provide a length field to distinguish significant data from padding.
The DIX standard did not need a length field because the vendor protocols that used it (XNS, DECNET, IPX, IP) all had their own length fields. However, the 802 committee needed a standard that did not depend on the good behavior of other programs. The 802.3 standard therefore replaced the two byte type field with a two byte length field.
Xerox had not assigned any important type codes with a value of 1500 or below (the smallest assigned type, 0x0600, is 1536 decimal). Since the maximum size of a packet on Ethernet is 1500 bytes, there was no conflict or overlap between the DIX and 802 standards. Any Ethernet packet in which the type/length field is 1500 or less is in 802.3 format (the field is a length), while any packet in which the value is greater than 1500 must be in DIX format (the field is a type).
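That dividing line can be expressed in a few lines of code. The frame layout assumed here is the standard one (two six-byte addresses, then the big-endian type/length field); the helper name is made up:

```python
# Classify a raw Ethernet frame by the two bytes that follow the
# destination and source addresses: a value of 1500 or less is an
# 802.3 length field, a larger value is a DIX type code.

def classify(frame: bytes) -> str:
    field = int.from_bytes(frame[12:14], "big")
    if field <= 1500:
        return "802.3 (length=%d)" % field
    return "DIX (type=0x%04X)" % field

header = bytes(12)                      # 12 dummy address bytes
print(classify(header + b"\x08\x00"))   # 0x0800 = 2048 -> DIX (IP)
print(classify(header + b"\x00\x40"))   # 64 -> 802.3 length field
```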
The 802 committee then created a new field to substitute for Type. The 802.2 header follows the 802.3 header (and also follows the comparable fields in a Token Ring, FDDI, or other types of LAN).
The 802.2 header is three bytes long for control packets or the kind of connectionless data sent by all the old DIX protocols. A four byte header is defined for connection oriented data, which refers primarily to SNA and NETBEUI. The first two bytes identify the SAP. Even with hindsight it is not clear exactly what the IEEE expected this field to be used for. In current use, the two SAP fields are set to 0x0404 for SNA and 0xF0F0 for NETBEUI.
Under SNAP, the 802.2 header appears to be a datagram message (control field 0x03) between SAP IDs of 0xAA. The first five bytes of what 802.2 considers data are actually a subheader ending in the two byte DIX type value. Any of the old DIX protocols can convert their existing logic to legal 802 SNAP by simply moving the DIX type field back eight bytes from its original location.
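The eight-byte shift follows directly from the field layout, and the inserted bytes can be built mechanically. The sketch below assumes the common convention of an all-zero OUI (meaning "the next two bytes are a DIX type code"); the function name is illustrative:

```python
# Offsets of the type field before and after SNAP encapsulation.
# DIX:       dest(6) src(6) type(2) data...
# 802.3+SNAP: dest(6) src(6) length(2) AA AA 03 OUI(3) type(2) data...

DIX_TYPE_OFFSET  = 6 + 6              # right after the two addresses
SNAP_TYPE_OFFSET = 6 + 6 + 2 + 3 + 3  # + length, 802.2 header, OUI
print(SNAP_TYPE_OFFSET - DIX_TYPE_OFFSET)   # -> 8

def snap_header(dix_type: int, oui: bytes = b"\x00\x00\x00") -> bytes:
    """The 8 bytes inserted ahead of the old type field: SAPs 0xAA,
    control 0x03 (datagram), then the SNAP subheader ending in the
    two byte DIX type value."""
    return b"\xAA\xAA\x03" + oui + dix_type.to_bytes(2, "big")

print(snap_header(0x0800).hex())   # -> 'aaaa030000000800' (IP)
```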
The connection between the hub in the wiring closet and the adapter card in the PC forms a single point-to-point Ethernet segment between two stations. The connection to the rest of the LAN involves active electronics in the hub. In current use, this is done with a repeater that copies every bit and propagates collisions.
A new generation of even smarter hubs provides a "bridge" connection between the main LAN and the phone wire. Only multicast messages and private messages specifically addressed to the PC are forwarded to the desktop. This has two advantages:
It provides greater security, because the desktop user cannot spy on traffic addressed to other nodes.
It provides each desktop user with an isolated, private 10 megabit data path free of collisions. The connection between hubs can then use a higher speed fiber optic protocol (such as "ATM") to deliver much greater performance than simple Ethernet. This hybrid (Ethernet to the desktop, something else between the hubs) represents a compromise of high performance and low cost.
However, bridging Ethernet to any other LAN protocol requires some attention to frame formats. Unfortunately, the "standards" are still a mess. DIX and 802 messages flow on the same LAN. Bridges must be aware of the protocol conventions and select the correct frame format when moving data onto or off of an Ethernet.