Monday, February 18, 2013

The Wi-Fi Break-Before-Make Handoff



Basic Wi-Fi handoffs are always either break-before-make or just-in-time. In other words, a wireless phone has no way to decide on a handoff and establish a relationship with a new access point without disconnecting from the previous one. The rules of 802.11 are rather simple here: no client is allowed to be associated with two access points at the same time, that is, to send an Association message to one while maintaining data connectivity to another. The reason for this is to remove any ambiguity as to which access point should forward wireline traffic destined to the client; otherwise, both access points would be expected to receive the client's traffic, which would not work in a switched wireline environment.
However, almost all of the important protocols for Wi-Fi happen only after a data connection has been established. This prevents clients from gaining much of a head start on establishing a connection when the old one is at risk.
Let's look at the Wi-Fi handoff protocol itself, step by step; a simplified client-side sketch follows the list.
  1. Once a client has decided to hand off, it need not break the connection to the original access point, but it must not use it any longer.
  2. The client has the option of sending a Disassociation message to the old access point, a good practice that lets the old access point free up network resources.
  3. At this point, if the new access point is on a different channel, the client will change the channel of its receiver.
  4. If the new channel is a DFS channel, the client is required to wait until it receives a beacon frame from the access point, unless it has recently heard one as a part of a passive scanning procedure.
  5. The client will send an Authentication message to the new access point, establishing the beginnings of a relationship with this new access point, but not yet enabling data services.
  6. The access point will respond with its own Authentication message, accepting the client. A rejection can occur if load balancing is enabled and the access point decides that it is oversubscribed, or if the access point's key state tables are full.
  7. The client will send a Reassociation Request message to the access point, requesting data services.
  8. The access point will send a Reassociation Response message to the client. If the message has a status code for success, the client is now associated with and connected to this access point, and only this access point. Controller-based wireless architectures will usually ensure this by immediately destroying any connection that may have been left over if step 2 was not performed. The access point may reject the association if it is oversubscribed, or if the additional services the client requests (mostly security or quality-of-service) in the Reassociation Request will not be supported.
    At this point, the client is associated and data services are available. Usually, the access point or controller behind it will send a broadcast frame, spoofed to appear as if it were sent by the client, to the connected Ethernet switch, informing it of the client's presence on that particular link and not on any one that may have been used previously.
    If no security is employed, skip ahead to the admission control mechanisms, towards the end of the list. If PSK security is employed, skip ahead to the four-way handshake. Otherwise, if 802.1X and RADIUS authentication is employed (WPA/WPA2 Enterprise), we'll continue immediately next.
  9. The access point and client can only exchange EAP messages at this point. The client may solicit the EAP exchange with an optional EAP Start message.
  10. The access point will request the client to log in with an EAP Request Identity message.
  11. Depending on the EAP method required by the RADIUS server on the network, the client and access point will continue to exchange a number of data frames, all EAPOL.
  12. The access point relays the RADIUS server's EAP Success or EAP Failure message. If this is a failure, the access point will also likely send a Deauthentication or Disassociation message to the client, to kick it off of the access point.
    At this point, the client and access point have agreed on the pairwise master key (PMK), based on the key material generated during the RADIUS exchange and sent to the access point when the authentication process concluded. But the access point and client still need to generate a per-connection, pairwise transient key (PTK), which will be used to do the actual encryption. Pre-shared key (PSK) networks skip the EAP exchanges listed above and use the PSK as the master key.
  13. The access point sends the first message in the RSN (802.11i) four-way handshake. This is an EAPOL-Key frame.
  14. The client sends the second message in the four-way handshake.
  15. The access point sends the third message in the four-way handshake.
  16. The client sends the fourth message in the four-way handshake.
    At this point, all data services are enabled, and the client and access point can exchange data frames. However, if a call is in progress, and WMM Admission Control is enabled, the client is required to request the voice resources before it can send or receive a single voice packet with priority. Until this point, both sides may either buffer the packets or send the voice packets as best-effort. 
  17. The client sends the access point an ADDTS Request Action frame, with a TSPEC that specifies the over-the-air resources that both the upstream and downstream part of the voice call will occupy.
  18. The access point weighs whether it has enough resources to accept or deny the request. It sends an ADDTS Response Action frame with the results.
  19. If the request was successful, the client and access point can begin sending voice traffic, and the call has successfully handed off. On the other hand, if the request fails, the client will disconnect from the access point with a Disassociation message, because, although it is allowed to remain on the access point, it can't send or receive any voice traffic.
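To tie the steps together, here is a minimal, client-side sketch of the sequence above. All classes, helper methods, and return values are hypothetical stand-ins for a real driver API, not any actual Wi-Fi stack; the point is the ordering of the messages and the places where the handoff can fail.

```python
# Hypothetical client-side handoff sequence; step numbers match the list above.

class HandoffFailed(Exception):
    pass

def run_eap_exchange(link, ap):        # steps 9-12 (802.1X/RADIUS), stubbed out
    pass

def run_four_way_handshake(link, ap):  # steps 13-16 (RSN EAPOL-Key), stubbed out
    pass

def handoff(link, old_ap, new_ap, security="enterprise", admission_control=False):
    link.send(old_ap, "Disassociation")             # step 2 (optional but polite)
    link.tune(new_ap.channel)                       # step 3
    if new_ap.is_dfs and not link.recently_heard_beacon(new_ap):
        link.wait_for_beacon(new_ap)                # step 4

    link.send(new_ap, "Authentication")             # step 5
    if link.recv(new_ap, "Authentication") != "success":
        raise HandoffFailed("authentication rejected")       # step 6

    link.send(new_ap, "Reassociation Request")      # step 7
    if link.recv(new_ap, "Reassociation Response") != "success":
        raise HandoffFailed("reassociation rejected")        # step 8

    if security == "enterprise":
        run_eap_exchange(link, new_ap)              # steps 9-12 yield the PMK
    if security != "open":                          # PSK networks use the PSK as PMK
        run_four_way_handshake(link, new_ap)        # steps 13-16 derive the PTK

    if admission_control:                           # WMM Admission Control
        link.send(new_ap, "ADDTS Request")          # step 17
        if link.recv(new_ap, "ADDTS Response") != "success":  # step 18
            link.send(new_ap, "Disassociation")     # step 19: no priority voice
            raise HandoffFailed("admission control denied")
```

Every exception raised in this sketch corresponds to the broken-connection case described next.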
Hopefully, everything went well and the handoff completed. On the other hand, if any of the processes failed, the connection is broken. The old connection was abandoned early on: in step 8 for sure, and step 2 for more charitable clients. In order to not drop the phone call, the phone will need to restart the process from the beginning with another access point, perhaps the original access point it just left, if no other is available.
You will notice that the client has a lot of work to do to make the handoff successful, and there are many places where the procedure can go wrong. Even if every request were to be accepted, the loss of any of these messages can cause long timeouts, often up to a second, as each side waits to make sure that no messages are passing each other by.
If nothing at all is done to optimize this transition, the handoff mechanics can take an additional second or two, on top of the second or so taken by the scanning process before the handoff decision was made. In the worst case, the 802.1X communication can take a number of seconds.
Part of the issue is that the mechanisms are nearly the same for a handoff as they are for when the client initially connects. This lack of memory in the network in basic Wi-Fi prevents any optimizations and requires a fresh start each time.

Wireless Local Area Networks


 Introduction


Wireless local area networks (WLANs) are like traditional LANs, except that they have a wireless interface. With the introduction of small portable devices such as PDAs (personal digital assistants), WLAN technology is becoming very popular. WLANs provide high-speed data communication in small areas such as a building or an office. They allow users to move around in a confined area while still connected to the network. Examples of wireless LANs available today are NCR's waveLAN and Motorola's ALTAIR. 
In this article, the transmission technology used in WLANs is considered. We will also discuss some of the technical standards for WLANs developed by the IEEE Project 802.11.

Figure 1: The Motorola Envoy (PDA) [2]


Transmission Technology

There are three main ways by which WLANs transmit information: microwave, spread spectrum and infrared.

Microwave Transmission

Motorola's WLAN product (ALTAIR) transmits data by using low-powered microwave radio signals. It operates in the 18-GHz frequency band.

Spread Spectrum Transmission

With this transmission technology, there are two methods used by wireless LAN products: frequency hopping and direct sequence modulation.
  • Frequency Hopping 
    The signal jumps from one frequency to another within a given frequency range. The transmitter device "listens" to a channel; if it detects idle time (i.e., no signal is being transmitted), it transmits the data using the full channel bandwidth. If the channel is full, it "hops" to another channel and repeats the process. The transmitter and the receiver "jump" in the same manner (a toy sketch of this shared hopping pattern follows this list).
  • Direct Sequence Modulation 
    This method uses a wide frequency band together with Code Division Multiple Access (CDMA). Signals from different units are transmitted in a given frequency range. The power levels of these signals are very low (just above background noise). A code is transmitted with each signal so that the receiver can identify the appropriate signal transmitted by the sender unit. 
    The frequency band in which such signals are transmitted is called the ISM (industrial, scientific and medical) band. This frequency band is reserved for ISM devices. The ISM band has three frequency ranges: 902-928, 2400-2483.5 and 5725-5850 MHz. An exception to this is Motorola's ALTAIR, which operates at 18 GHz.
    Spread spectrum transmission technology is used by many wireless LAN manufacturers, such as NCR for its waveLAN product and SpectraLink for its 2000 PCS.
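To illustrate the frequency hopping idea, here is a toy sketch: transmitter and receiver derive the same hop sequence from a shared seed, so they land on the same channels at the same hops. The channel numbering and the seed-sharing mechanism are simplifications assumed for illustration, not part of any product described above.

```python
# Toy frequency-hopping illustration: both sides share a seed, so they hop alike.
import random

def hop_sequence(seed, channels, n):
    rng = random.Random(seed)
    return [rng.choice(channels) for _ in range(n)]

channels = list(range(1, 80))   # e.g., 79 hop channels in the 2.4-GHz ISM band
tx_hops = hop_sequence(seed=42, channels=channels, n=5)
rx_hops = hop_sequence(seed=42, channels=channels, n=5)
assert tx_hops == rx_hops       # transmitter and receiver "jump" in the same manner
print(tx_hops)
```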

Infrared Transmission

This method uses infrared light to carry information. There are three types of infrared transmission: diffused, directed and directed point-to-point.
  • Diffused 
    The infrared light transmitted by the sender unit fills the area (e.g. office). Therefore the receiver unit located anywhere in that area can receive the signal.
  • Directed 
    The infrared light is focused before transmitting the signal. This method increases the transmission speed.
  • Directed point-to-point 
    Directed point-to-point infrared transmission provides the highest transmission speed. Here the receiver is aligned with the sender unit. The infrared light is then transmitted directly to the receiver.
The light source used in infrared transmission depends on the environment. Light-emitting diodes (LEDs) are used in indoor areas, while lasers are used in outdoor areas. 
Infrared radiation (IR) can have major biological effects, particularly on the eyes and skin. Microwave signals are also dangerous to health. But with proper design of systems, these effects are reduced considerably.


Technical Standards

Technical standards are one of the main concerns of users of wireless LAN products. Users would like to be able to buy wireless products from different manufacturers and be able to use them on one network. The IEEE Project 802.11 has set up universal standards for wireless LAN. In this section we will consider some of these standards.

Requirements

In March 1992 the IEEE Project 802.11 established a set of requirements for wireless LAN. The minimum bandwidth needed for operations such as file transfer and program loading is 1 Mbps. Operations that need real-time data transmission, such as digital voice and process control, need support from time-bounded services.

Types of Wireless LAN

The Project 802.11 committee distinguished between two types of wireless LAN: "ad-hoc" and "infrastructure" networks. 

Figure 2: (a) Infrastructure Wireless LAN; (b) Ad-hoc Wireless LAN. [3] 

Ad-hoc Networks

Figure 2b shows an ad-hoc network. This network can be set up by a number of mobile users meeting in a small room. It does not need any support from a wired/wireless backbone. There are two ways to implement this network.
  • Broadcasting/Flooding 
    Suppose that a mobile user A wants to send data to another user B in the same area. When the packets containing the data are ready, user A broadcasts the packets. On receiving the packets, the receiver checks the identification on the packet. If that receiver is not the correct destination, it rebroadcasts the packets. This process is repeated until user B gets the data (a minimal sketch of this flooding scheme follows the list).
  • Temporary Infrastructure 
    In this method, the mobile users set up a temporary infrastructure. But this method is complicated and it introduces overheads. It is useful only when there is a small number of mobile users.
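Here is a minimal sketch of the broadcasting/flooding scheme. The topology and the seen-set that prevents endless rebroadcast loops are assumptions added for the illustration; the article itself does not specify a loop-suppression mechanism.

```python
# Toy flooding: nodes rebroadcast packets not addressed to them until
# the destination is reached; "seen" suppresses rebroadcast loops.

def flood(links, src, dst):
    seen = set()
    queue = [src]
    while queue:
        node = queue.pop(0)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            return True                    # user B got the data
        queue.extend(links.get(node, []))  # rebroadcast to all neighbors
    return False

links = {"A": ["C"], "C": ["A", "B"], "B": ["C"]}
print(flood(links, "A", "B"))              # True
```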

Infrastructure Networks

Figure 2a shows an infrastructure-based network. This type of network allows users to move in a building while they are connected to computer resources. 
The IEEE Project 802.11 specified the components in a wireless LAN architecture. In an infrastructure network, a cell is also known as a Basic Service Area (BSA). It contains a number of wireless stations. The size of a BSA depends on the power of the transmitter and receiver units, as well as on the environment. A number of BSAs are connected to each other and to a distribution system by Access Points (APs). A group of stations belonging to an AP is called a Basic Service Set (BSS). Figure 3 shows the basic architecture for wireless LANs. 

Figure 3: Architecture for Wireless LANs [2] 

Conclusion

Wireless LANs provide high-speed data communication. The minimum data rate specified by the IEEE Project 802.11 is 1 Mbps. NCR's waveLAN operates at 2 Mbps, while Motorola's ALTAIR operates at 15 Mbps. 
Because of their limited mobility and short transmission range, wireless LANs can be used in confined areas such as a conference room. In the U.S., almost all WLAN products use spread spectrum transmission. Therefore, they transmit information in the ISM band. But in this frequency band, users can experience interference from other sources sharing the band. 

Intelligent Network (IN)



The first service introduced in the PSTN with the help of network databases in 1980 was calling card service; soon after that, a series of value-added services for businesses called inward wide area telecommunications service (INWATS) was introduced. When the U.S. Federal Communications Commission (FCC) approved a tariff for expanded 800 service in 1982, the Bell System was ready to support it with many new features due to the distributed nature of the implementation. For example, a customer dialing an 800 number of a corporation could be connected to a particular office depending on the time of day or day of the week. As the development of such features progressed, it became clear that in many cases it would be more efficient to decide how to route a customer’s call after prompting the customer with a message that provided several options, and instructions on how to select them by pushing dial buttons on the customer’s telephone. For the purpose of customer interaction, new devices that could maintain both the circuit connections to customers (in order to play announcements and collect digits) and connections to the SS No. 7 network (to receive instructions and report results to the databases) were invented and deployed. The network database ceased to be just a database—its role was not simply to return responses to the switch queries but also to instruct the switches and other devices as to how to proceed with the call. Computers previously employed only for storing the databases were programmed with the so-called service logic, which consisted of scripts describing the service. This was the historical point at which the service logic started to migrate from the switches.

After the 1984 court decree broke up the Bell System, the newly created Regional Bell Operating Companies (RBOCs) ordered their R&D arm, Bell Communications Research, to develop a general architecture and specific requirements for central, network-based support of services. An urgent need for such an architecture was dictated by the necessity of buying the equipment from multiple vendors. This development resulted in two business tasks that Bellcore was to tackle while developing the new architecture: (1) The result had to be equipment-independent and (2) as many service functional capabilities as possible were to move out of the switches (to make them cheaper). The tasks were to be accomplished by developing the requirements and getting the vendors to agree to them. As Bellcore researchers and engineers were developing the new architecture, they promoted it under the name of Intelligent Network. The main result of the Bellcore work was a set of specifications called Advanced Intelligent Network (AIN), which went through several releases.

AT&T, meanwhile, continued to develop its existing architecture, and its manufacturing arm, AT&T Network Systems, built products for the AT&T network and RBOCs. Only the latter market, however, required adherence to the AIN specifications. In the second half of the 1980s, similar developments took place around the world—in Europe, Japan, and Australia. In 1989, a standards project was initiated in ITU to develop recommendations for the interfaces and protocols in support of Intelligent Network (IN).

To conclude the historical review of IN, we give you some numbers: Today, in the United States, at least half of all interexchange carrier voice calls are IN supported. This generates on the order of $20 billion in revenue for IXCs. LECs use IN to implement local number portability (LNP), calling name and message delivery, flexible call waiting, 800 service carrier selection, and a variety of other services (Kozik et al., 1998). The IN technology also blends wireless networks and the PSTN, and it is being used strategically in the PSTN-Internet convergence.

We are ready now to formulate a general definition of IN: IN is an architectural concept for the real-time execution of network services and customer applications. The architecture is based on two main principles: network independence and service independence. Network independence means that the IN function is separated from the basic switching functions as well as the means of interconnection of the switches and other network components. Service independence means that the IN is to support a wide variety of services by using common building blocks.

The IN execution environment includes the switches, computers, and specialized devices, which, at the minimum, can communicate with the telephone user by playing announcements and recognizing dial tones. (More sophisticated versions of such devices can also convert text to voice and even vice versa, send and receive faxes, and bridge teleconferences). All these components are interconnected by means of a data communications network. The network can be as small as the local area network (LAN), in which case the computers and devices serve one switch (typically a PBX), or it can span most switches in an IXC or LEC. In the latter case, the data network is SS No. 7, and usually the term IN means this particular network-wide arrangement. [In the case of a single switch, the technology is called computer-telephony integration (CTI).]

The overall IN architecture also includes the so-called service creation and service management systems used to program the services and distribute these programs and other data necessary for their execution among the involved entities.

Figure 1 depicts the network-wide IN execution environment. We will need to introduce more jargon now. The service logic is executed by a service control point (SCP), which is queried—using the SS No. 7 transaction mechanism—by the switches. The switches issue such queries when their internal logic detects triggers (such as a telephone number that cannot be translated locally, a need to authorize a call, an event on the line—such as called party being busy, etc.). The SCP typically responds to the queries, but it can also start services (such as wake-up call) on its own by issuing an instruction to a switch to start a call.
Figure 1: The IN architecture.
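To make the trigger-and-query flow concrete, here is a schematic sketch. The trigger names, the call structure, and the service scripts are illustrative inventions; a real deployment exchanges INAP messages over SS No. 7 rather than Python function calls.

```python
# Hypothetical model of switch trigger -> SCP query -> routing instruction.
from datetime import datetime

def translate_800(number):
    # e.g., time-of-day routing for a corporation's 800 number
    return "office-east" if 9 <= datetime.now().hour < 17 else "office-west"

SERVICE_LOGIC = {
    # trigger -> service script (here, trivially, a routing decision)
    "untranslatable_number": lambda call: {"route_to": translate_800(call["dialed"])},
    "called_party_busy":     lambda call: {"route_to": "voicemail-service-node"},
}

def scp_query(trigger, call):
    # The switch detects a trigger and queries the SCP; the SCP runs its
    # service logic and instructs the switch how to proceed with the call.
    return SERVICE_LOGIC[trigger](call)

print(scp_query("untranslatable_number", {"dialed": "800-555-0100"}))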

As we noted before, to support certain service features (such as 800 number translation), the SCP may need to employ special devices (in order to play announcements and collect digits or establish a conference bridge). This job is performed by the intelligent peripheral (IP). The IP is connected to the telephone network via a line or trunk, which enables it to communicate with a human via a voice circuit. The IP may be also connected to the SS No. 7 network, which allows it to receive instructions from the SCP and respond to them. (Alternatively, the SCP instructions can be relayed to the IP through the switch to which it is connected.) As SCPs have become executors of services (rather than just the databases they used to be), the function of the databases has been moved to devices called service data points (SDPs).

Finally, there is another device, called a service node (SN), which is a hybrid of the IP, the SCP, and a rather small switch. Similar to the SCP, the SN is a general-purpose computer, but unlike the SCP it is equipped with exotic devices such as switching fabric and other things typically associated with an IP. The SN connects to the network via the ISDN access mechanism, and it runs its own service logic, which is typically engaged when a switch routes a call to it. An example of its typical use is in voice-mail service. When a switch detects that the called party is busy, it forwards the call to the SN, which plays the announcement, interacts with the caller, stores voice messages and reads them back, and so on. The protocols used for the switch-to-SCP, SCP-to-SDP, and SCP-to-IP communications are known as Intelligent Network Application Part (INAP), which is the umbrella name. INAP has evolved from the CCS switch-to-database transaction interactions; it is presently based on the Transaction Capabilities (TC) protocol of Signalling System No. 7.

Because the SCP and SN are general-purpose computers, they can be easily connected to the Internet and thus engage the Internet endpoints in the PSTN services. This observation was made as early as 1995, and it has already had far-reaching consequences, as will be seen in the material that follows.

The PSTN Access to IP Networks



Most of the technologies in the area of PSTN access to IP networks have been relatively well understood—that is, supported by the standards and widely implemented in products. For this reason, much material on this subject resides in the next two parts (which cover available standards and products, respectively). The technologies we describe here relate to physical access to the network. We have already described the ISDN; with the growing demand for Internet access, residential subscription to the ISDN has grown (although not necessarily for the purposes for which the ISDN was invented). Typically, users bundle the B and D channels to get one big data pipe, and use this pipe for Internet access. Other types of access technologies are described in the following section.
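As a back-of-the-envelope figure, assuming a basic rate interface in which both B channels and the D channel are bonded into the data pipe:

```latex
2B + D = 2 \times 64~\text{kbps} + 16~\text{kbps} = 144~\text{kbps}
```

(Many configurations bond only the two B channels, for 128 kbps.)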
An important problem facing the PSTN today is the data traffic that it carries to IP networks; the PSTN was not designed for data traffic and therefore needs to offload this traffic as soon as possible. We describe the problem and the way it is tackled by the industry in a separate section, which, to make the overall picture more complete, we tie in with the technique of tunneling as the paradigm for designing IP VPNs. Both technologies have been developed independently and for different purposes; both, however, work together to resolve the access issues.

Physical Access

We talk about approaches to integration of the Internet with telephony in which the action occurs at the network layer or higher—things like carrying voice over IP or using control signals originating within the Internet to cause connections to appear and disappear within the telephony network. However, integration at the lowest level—the physical level—is also of great practical importance, and nowhere more so than in the access portion of the network. Here, advances in digital signal processing techniques and in high-speed electronics have resulted in remarkable progress in just the last few years, allowing access media originally deployed more than a century ago for telephony to also support access to the Internet at previously unimagined speeds. In our brief survey of these new access technologies, we will first provide an overview of the access environment, and then go on to describe both the 56-kbps PCM modem and the xDSL class of high-speed digital lines.
The Access Environment
Today it is quite possible, and not at all uncommon, for business users to obtain direct high-speed optical fiber access to telephony and data networks, including the Internet. For smaller locations, such as individual homes and small business sites, despite experiments in the 1980s with fiber to the home and in the early 1990s with hybrid fiber coax, physical access choices mostly come down to twisted pair telephone line and cable TV coax. We will not cover business fiber access or the cable modem story here, on the grounds that the former is a relatively well understood if impressively capable technology and that the latter is somewhat outside the scope of our Internet/telephony focus. Instead, we will look at recent developments in greatly speeding up access over ordinary telephone lines.
The twisted pair telephone line was developed in the 1880s as an improvement over earlier single-wire and parallel-wire designs. The single-wire lines, which used earth return, were noisy and subject to the variable quality of grounding connections, while the parallel-wire lines were subject to cross talk from one line to another. The twists in a twisted pair set up a self-canceling effect that reduces electromagnetic radiation from the line and thus mitigates cross talk. This simple design creates a very effective transmission medium that has found many uses in data communication (think of 10BaseT LANs and their even higher-speed successors) as well as in telephony. Two-wire telephone access lines are also called loops, as the metallic forward and return paths are viewed as constituting a loop for the current that passes through the telephone set.
In modern telephone networks, homes that are close enough to the central office are directly connected to it by an individual twisted pair (which may be spliced and cross-connected a number of times along the way). The twisted pair from a home farther away is connected instead to the remote terminal of a digital loop carrier (DLC) system. The DLC system then multiplexes together the signals from many telephone lines and sends them over a fiber-optic line (or perhaps over a copper line using an older digital technology like T1) to the central office. In the United States, close enough for a direct twisted pair line generally means less than 18,000 feet (18 kft). For a variety of reasons (including installation prior to the invention of DLCs), there are a fair number of twisted pair lines more than 18 kft in length. These use heavy-gauge wire, loading coils, or even amplifiers to achieve the necessary range. The statistics of loop length and the incidence of DLC use vary greatly among countries depending on demographic factors. In densely populated countries, loops tend to be short and DLCs may be rare. Another loop design practice that varies from country to country is the use of bridged taps. These unterminated twisted pair stubs are often found in the United States, but rarely in Europe and elsewhere.

From the point of view of data communication, the intriguing thing about this access environment is that in general it is less band-limited than an end-to-end telephone network connection, which of course is classically limited to a 4-kHz bandwidth. While there is indeed a steady falloff in the ease with which signals may be transmitted as their frequency increases, on most metallic loops (the exceptions are loops with loading coils and, more rarely, loops with active elements such as amplifiers) there is no sharp bandwidth cutoff. Thus, the bandwidth of a twisted pair loop is somewhat undefined and subject to being extended by ingenious signal processing techniques.

For decades, the standard way of pumping data signals over the telephone network was to use voiceband modems. Depending on their vintage, readers may remember when the data rate achievable by such devices was limited to 2400, 4800, or 9600 bps. This technology finally reached its limit a few years ago at around 33.6 kbps. By exploiting the extra bandwidth available in the loop plant, xDSL systems are able to reach much higher access speeds. We will describe these systems shortly, but first will take a small detour to talk about another intriguing recent advance in access that exploits a somewhat more subtle reservoir of extra bandwidth in the telephone network: the 56-kbps PCM modem.
The PCM Modem
Conventional voiceband modems are designed under the assumption that the end-to-end switched or private line connection through the telephone network is an analog connection with a bandwidth of just under 4 kHz, subject to the distortion of additive white Gaussian noise (AWGN). When the first practical voiceband modems were designed about 40 years ago, this was literally true. The path seen by a signal traveling from one telephone line to another over a long-distance switched network connection might be something like this: First over an analog twisted-pair loop to an electromechanical step-by-step switch, then over a metallic baseband or wireline analog carrier system to an electromechanical crossbar toll switch, then over a long-haul analog carrier system physically implemented as multiple microwave shots from hill to hill across a thousand miles, to another electromechanical crossbar toll switch, and back down through another analog carrier system to a local crossbar switch to the terminating analog loop. Private line connections were the same, except that permanently soldered jumper wires on cross-connect fields substituted for the electromechanical switches. Noise, of course, was added at every analog amplifier along the way for both the switched and private line cases.

A remarkable fact is that although when modeled as a black box the modern telephone network at the turn of the twenty-first century looks exactly the same as it did 40 years ago (a band-limited analog channel with some noise added to it), the interior of the network has been completely transformed to a concatenation of digital systems—mostly fiber-optic transmission systems and digital switches. Voice is carried through this network interior as sequences of 8-bit binary numbers produced by pulse-code modulation (PCM) encoders. Only the analog loops on both ends remain as a physical legacy of the old network.

By the way, what is it that makes these loops analog? After all, they are only long thin strands of copper metal—the most passive sort of electrical system imaginable. How does the loop know whether a signal impressed upon it is analog or digital? The answer is that it doesn’t know! In fact, in addition to the smoothly alternating electrical currents of analog voice, loops can carry all sorts of digital signals produced by modems and by all the varieties of digital subscriber line (DSL) systems. Ironically, the analog quality of the loop really derives from the properties of the analog telephone at the premises end of the loop and of the PCM encoder/decoder at the central office end—or, more precisely, from the assumption that the job of the PCM encoder is to sample a general band-limited analog waveform and produce a digital approximation of it, distorted by quantization noise—inevitable because the finite-length 8-bit word can only encode the signal level with finite precision.

It is this quantization noise, which averages about 33 to 39 dB, in combination with the bandwidth limitation of approximately 3 to 3.5 kHz, that limits conventionally designed modems to just over 33 kbps as calculated using the standard Shannon channel capacity formula (Ayanoglu et al., 1998).
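To see where that limit comes from, plug the conservative end of those ranges into Shannon's formula, taking bandwidth B ≈ 3 kHz and a 33-dB signal-to-noise ratio (S/N ≈ 2000):

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right) \approx 3000 \times \log_2(2001) \approx 3000 \times 11.0 \approx 33~\text{kbps}
```

The exact figure moves with the assumed bandwidth and noise, which is why practical voiceband modems topped out at 33.6 kbps.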

Enter the PCM modem. Quoting Ayanoglu et al., who developed this technology at Bell Labs in the early 1990s: “The central idea behind the PCM modem is to avoid the effects of quantization distortion by utilizing the PCM quantization levels themselves as the channel symbol alphabet.” In other words, rather than designing the modem output signals without reference to the operation of the PCM encoder and then letting them fall subject to the distortion of randomly introduced quantization noise, the idea is to design the modem output so that “the analog voltage sampled by the codec passes through the desired quantization levels precisely at its 8-kHz sampling instants.” In theory, then, a pair of PCM modems attached to the two analog loops in an end-to-end telephone connection could commandeer the quantization levels of the PCM codecs at the central office ends of the loops and use them to signal across the network at something approaching the 64-kbps output rate of the voice coders. Actually, filters in the central office equipment limit the loop bandwidth to 3.5 kHz, and this in turn means that no more than 56 kbps can be achieved. Also, it turns out that there are serious engineering difficulties with attempting to manipulate the output of the codecs by impressing voltage levels on the analog side.

Fortunately, there is an easier case that is also of great practical importance to the business of access to data networks—including the Internet. Most ISPs and corporate remote access networks employ a system of strategically deployed points of presence at which dial-up modem calls from subscribers to their services are concentrated. At these points, the calls are typically delivered from the telephone company over a multiplexed digital transmission system, such as a T1 line. The ISP or corporate network can then be provided with a special form of PCM modem at the POP site that writes or reads 8-bit binary numbers directly to or from the T1 line (or other digital line), thus permitting the modem on the network side to directly drive the output of the codec on the analog line side as well as to directly observe the PCM samples it produces in the other direction. The result is that, in the direction from the network toward the consumer (the direction in which heavy downloads of things like Web pages occur), a rate approaching 56 kbps can be achieved. The upstream signal, originating in an analog domain where direct access to the PCM words is not possible, remains limited to somewhat lower speeds.

So hungry are residential and business users for bandwidth that 56-kbps modems became almost universally available on new PCs and laptops shortly after the technology was reduced to silicon—and even before the last wrinkles of standards compatibility were ironed out. The standards issues have since been worked through by ITU-T study group (SG) 16, and the 56-kbps modem is now the benchmark for dial-up access over the telephony network to the Internet.
Digital Subscriber Lines
Digital subscriber line (DSL) is the name given to a broad family of technologies that use clever signal design and signal processing to exploit the extra bandwidth of the loop plant and deliver speeds well in excess of those achievable by conventional voiceband modems. The term is often given as xDSL, where x stands for any of many adjectives used to describe different types of DSL. In fact, so many variations of DSL have been proposed and/or hyped, with so many corresponding values of x, that it can be downright confusing—too bad, really, since DSL technology has so much to offer. We will attempt to limit the confusion by describing the types of DSL that appear to be of most practical importance in the near term, with a few words about promising new developments.

The term DSL first appeared in the context of ISDN—which struggled with low acceptance rates and slow deployment until it enjoyed a mini-Renaissance in the mid-1990s, buoyed by the unrelenting demand for higher-speed access to the Internet. The ISDN DSL sends 160 kbps in both directions at once over a single twisted pair. The total bit rate accommodates two 64-kbps B channels, one 16-kbps D channel, and 16 kbps for framing and line control. Bidirectional transmission is achieved using an echo-canceled hybrid technology in most of the world. In Japan, bidirectionality is achieved using Ping Pong, called time compression multiplexing by the more serious-minded, in which transmission is performed at twice the nominal rate in one direction for a while, and then, after a guard time, the line is turned around and the other direction gets to transmit. ISDN DSLs can extend up to 18 kft, so they can serve most loops that go directly to the central office or to a DLC remote terminal. Special techniques may be used to extend the range in some cases, at a cost in equipment and special engineering. ISDN DSL was a marvel of its day, but is relatively primitive in comparison to more recently developed varieties of DSL.
HDSL
The next major type of DSL to be developed was the high-bit-rate digital subscriber line (HDSL). The need for HDSL arose when demand accelerated for direct T1 line interconnection to business customer locations providing for 1.544-Mbps access. T1 was a technology for digital transmission over twisted pairs that was originally developed quite a long time ago (the early 1960s, in fact) with application to metropolitan area telephone trunking in mind. With its 1.544-Mbps rate, a T1 line could carry twenty-four 64-kbps digital voice signals over two twisted pairs (one for each transmission direction). In this application, T1 was wildly successful, and by the late 1970s it had largely displaced baseband metallic lines and older analog carrier systems for carrying trunks between telephone central offices within metropolitan regions—distances up to 50 miles or so. However, applying T1 transmission technology directly to twisted pairs going to customer premises presented several difficulties. A basic one was that T1 required a repeater every 3000 to 5000 feet. This represented a major departure from practice in the loop plant, which was engineered around the assumption that each subscriber line was connected to the central office by a simple wire pair with no electronics along the way—or at least for up to 18 kft or so when a DLC system might be encountered. Also, T1 systems employ high signal levels that present problems of cross talk and difficulties for loop plant technicians not used to dealing with signals more powerful than those produced by human speech impinging on carbon microphones.

A major requirement for the HDSL system was therefore to provide for direct access to customer sites over the loop plant without the use of repeaters. The version of HDSL standardized by the ITU-T as G.991.1 in 1998 achieves repeaterless transmission over loops up to 12 kft long at both the North American T1 rate of 1.544 Mbps and the E1 rate of 2.048 Mbps used in Europe and some other places. Repeaters can be used to serve longer loops if necessary. When employed, they can be spaced at intervals of 12 kft or so, rather than the 3 to 5 kft required in T1. The repeaterless (or few-repeater) feature greatly reduces line conditioning expenses for deployment in the loop plant compared to traditional T1. In addition, HDSL can tolerate (within limits) the presence of bridged taps, avoiding the expense of sending out technicians to remove these taps.
HDSL systems typically use two twisted pairs, just as does T1. However, rather than simply using one pair for transmitting from east to west and the other for west to east, HDSL reduces signal power at high frequencies by sending in both directions at once on each pair, but at only half the total information rate. The two transmission directions are separated electronically by using echo-canceled hybrids, just as in ISDN DSL.
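The rate arithmetic of that split is straightforward: each pair carries half of the T1 payload in each direction,

```latex
\frac{1.544~\text{Mbps}}{2} = 772~\text{kbps per pair}
```

and deployed systems run each pair slightly faster (784 kbps is a commonly cited figure) to cover framing overhead; treat that last number as a customary value rather than something derived here.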

Overall, HDSL provides a much more satisfactory solution for T1/E1 rate customer access than the traditional T1-type transmission system. Work is currently under way in the standards bodies on a second-generation system, called SDSL (for “symmetric” or “single-pair” DSL) or sometimes HDSL2, which will achieve the same bit rates over a single wire pair. To do this without recreating the cross talk problems inherent in T1 requires much more sophisticated signal designs borrowed from the most advanced modem technology, which in turn requires much more powerful processors at each end of the loop for implementation. By now the pattern should be familiar—to mine the extra bandwidth hidden in the humble loop plant, we apply high-speed computation capabilities that were quite undreamed of when Alexander Graham Bell began twisting pairs of insulated wire together and observing what a nice clean medium they produced for the transmission of telephone speech!
ADSL
The second major type of DSL of current practical significance is asymmetric digital subscriber line (ADSL). Compared to HDSL, ADSL achieves much higher transmission speeds (up to 10 Mbps) in the downstream direction (from the central office toward the customer) and does this over a single wire pair. The major trade-off is that speeds in the upstream direction (from the customer toward the central office) are reduced, being limited to 1 Mbps at most. ADSL is also capable of simultaneously supporting analog voice transmission.
Considering these basic characteristics, it is clear that ADSL is particularly suited to residential service in that it can support:

  • High-speed downloading in applications like Web surfing

  • Rather lower speeds from the consumer toward the ISP

  • Ordinary voice service on the same line
On the other hand, these characteristics also meet the needs of certain small business (or remote business site) applications. The basic business proposition of ADSL is that these asymmetric characteristics, which are the key to achieving the high downstream rate, represent a significant market segment. Time will tell how ADSL fares against other access options such as cable modems and fixed wireless technologies, but the proposition seems to be a plausible one.
The way ADSL exploits asymmetry to achieve higher transmission rates has to do with the nature of cross talk and with the frequency-dependent transmission characteristics of telephone lines. Earlier we noted that there is not a sharp frequency cutoff on unloaded loops, but there is a steady decline in received signal power with increasing frequency. If a powerful high-frequency (high-bit-rate) transmitter is located near a receiver trying to pick up a weak incoming high-frequency signal, the receiver will be overwhelmed by near-end cross talk. The solution is to transmit the high-frequency (high-bit-rate) signal in only one direction. A basic ADSL system is thus an application of classic frequency division multiplexing, in which a wide, high-frequency band is used for the high-bit-rate downstream channel, a narrower and lower-frequency channel is used for the moderate-bit-rate upstream transmission, and the baseband region is left clear for ordinary analog voice (see Figure 1).
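To make that frequency plan concrete, here is an approximate sketch of the band allocations for full-rate ADSL over analog voice. The band edges are rounded values in the style of the common G.992.1 arrangement and vary by implementation; treat them as illustrative rather than normative.

```python
# Approximate FDM band plan for full-rate ADSL over POTS (illustrative only).
BAND_PLAN_KHZ = {
    "analog voice (POTS)": (0, 4),
    "guard band":          (4, 25),
    "upstream":            (25, 138),
    "downstream":          (138, 1104),
}

for band, (lo, hi) in BAND_PLAN_KHZ.items():
    print(f"{band:22s} {lo:5.0f}-{hi:5.0f} kHz")
```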
The basic concept of ADSL is thus rather simple. However, implementations utilize some very advanced coding, signal processing, and error control techniques in order to achieve the desired performance. Also, a wide variety of systems using differing techniques have been produced by various manufacturers, making standardization something of a challenge. Key ITU-T standards are G.992.1 and G.992.2. The latter provides for splitterless ADSL, which deserves some additional description.
ADSL Lite
In the original ADSL concept, a low-pass filter is installed at the customer end of the line to separate the baseband analog voice signal from the high-speed data signals (see Figure 2). In most cases, this filter requires the trouble and expense of an inside wiring job at the customer premises. To avoid this expense, splitterless ADSL, also known more memorably as ADSL Lite, eliminates the filter at the customer end. This lack of a filter can create some problems, such as error bursts in the data transmission when the phone rings or is taken off hook, or hissing sounds in some telephone receivers. However, the greatly simplified installation was viewed as well worth the possible small impairments by most telephone companies, and they pushed hard for the adoption of splitterless ADSL in standards.
Factors Affecting Achieved Bit Rate
Like ISDN DSL and HDSL, a basic objective of ADSL is to operate over a large fraction of the loops that are up to 18 kft long. However, the actual bit rate delivered to the customer may vary depending on the total loss and noise characteristics of the loop. The ANSI standard for ADSL (T1.413) provides for rate-adaptive operation much like that employed by high-speed modems. The downstream rate can be as high as 10 Mbps on shorter, less noisy loops, but may go down to 512 kbps on very long or noisy loops. Upstream rates may be as high as 900 kbps or as low as 128 kbps.
Future DSL Developments
We have already mentioned that work is under way on an improved version of HDSL, called HDSL2. Another name for this sometimes seen in the literature is symmetric DSL or single-pair DSL (SDSL).
Another new system, called very-high-rate DSL (VDSL), is under discussion in standards bodies. It will provide for very high downstream rates of up to 52 Mbps. VDSL would work in combination with optical transmission into the neighborhood of the customer. High-speed transmission over the copper loop would only be used for the last kilometer or so.
Applicability
We’ve described a number of advanced access technologies that can support remarkably high-data-rate access to data networks (including the Internet) over the existing telephone plant. How do you decide which ones, if any, to use?
In the case of the 56-kbps PCM modem, the decision will likely be made for you by the manufacturer of your PC or laptop. It’s simply the latest in modems and is often supplied as a standard feature.
For xDSL, the situation is a bit more complex. In most cases, you obtain a service from a telephone company or other network provider that uses HDSL or ADSL as an underlying transmission technology. The technology may or may not be highlighted in the service provider’s description of the offering. Essentially, the decision comes down to weighing the price of the service against how well it satisfies the needs of the application, including speed but also such factors as guarantees of reliability, speed of installation, whether an analog voice channel is included or needed, and so on. If you are more adventurous, you may try obtaining raw copper pairs from a service provider and applying your own xDSL boxes. If you contemplate going this route, you really need to learn a lot more about the transmission characteristics of these systems than we’ve covered here, and you should perhaps start by consulting some of the references listed in our bibliography.

Internet Offload and Tunneling

Internet traffic has challenged the foundation of the PSTN—the way it has been engineered. Contrary to the widespread view (based on the perceived high quality that users of telephony have enjoyed for many years), which holds that the telephone networks can take any calls of any duration, the PSTN has actually been rather tightly engineered to use its resources so as to adapt to the patterns of voice calls. Typical Internet access calls last 20 minutes, while typical voice calls last between 3 and 5 minutes (Atai and Gordon, 1997). The probability of the duration of a voice call exceeding one hour is 1 percent, versus 10 percent for Internet access calls. As a result, the access calls tie up the resources of local switches and interoffice trunks, which in turn increases the number of uncompleted calls on the PSTN. (As we mentioned in the section on network traffic management, the PSTN can block calls to a switch with a high number of busy trunks or lines. The caller typically receives a fast busy signal in this case.) In today’s PSTN, the call blocking rate is the principal indicator of the quality of service. The actual bandwidth of voice circuits is grossly wasted—Internet users consume only about 20 percent of the circuit bandwidth. The situation is only further complicated by flat-rate pricing of online services—believed to encourage Internet callers to stay on line twice as long as they would with a metered-rate plan.
The three problem areas identified in Atai and Gordon (1997) are (1) the local (ingress) switch from which the call has originated; (2) the tandem switch and interoffice trunks; and (3) the local (egress) switch that terminates calls at the ISP modem pool (Atai and Gordon, 1997). (The cited document does not take into account the IXC issues, but it is easy to see they are very similar to the second problem area.) The third problem area is the most serious because it can cause focused overload. Presently, such egress switches make up roughly a third of all local switches. The acuteness of the problem has been forcing the carriers to segregate the integrated traffic and offload it to a packet network as soon as possible.
The two options for carrying out the offloading are (1) to allow the Internet traffic to pass through the ingress switch, where it would be identified, and (2) to intercept the Internet traffic on the line side of the ingress switch. In all cases, however, the Internet traffic must first be identified. Identifying Internet traffic is best done by IN means. One way (which is unlikely to be implemented) is to collect all the ISP and enterprise modem pool access numbers and trigger on them—not a small feat, even if a feasible one. This triggering would slow down all local switches to a great extent. The other solution is to use local number portability queries; to implement the solution, all modem pool numbers would have to be configured as ported numbers. The third, and much better, way to carry out the offloading is for ISPs and enterprise modem pools to use a single-number service (an example is an 800 number in the United States) and let the IN route the call. The external service logic would inform the switch about the nature of the call (this information would naturally be stored). Many large enterprises already use 800 numbers for their modem pools. The fourth solution is to assign a special prefix to the modem pool number; then the switch would know right away, even before all the digits had been dialed, that it was dealing with an Internet dial-up. (Presently, however, switches often identify an Internet call by detecting the modem signals on the line.)
Two post-switch offloading solutions are gaining momentum. The first is terminating all calls in a special multiservice module—effectively a part of the local switch—in the PSTN. The multiservice module would then send the data traffic (over an ATM, frame relay, or IP network) to the ISP or enterprise access server (which would no longer need to be involved with the modems). The other solution is to terminate all calls at network access servers that would act as switches in that they would establish a trunk with the ingress switch. The access servers would then communicate with the ISP or enterprise over the Internet. One problem with this solution is that access servers would have to be connected to the SS No. 7 network, which is expensive and, so far, hardly justified. To correct this situation, a new SS No. 7 network element, the SS7 gateway, acts as a proxy on behalf of several access servers (thus significantly cutting the cost). The access servers communicate with the SS7 gateway via an enhanced (that is, modified) ISDN access protocol, as depicted in Figure 3.
At this point you may ask: How are the network access servers connected to the rest of the ISP or enterprise network? Until relatively recently, this was done by means of leased telephone lines (permanent circuits) or private lines, both of which were (and still are) quite expensive. Another way to connect the islands of a network is by using tunneling, that is, sending the packets whose addresses are significant only to a particular network. These packets are encapsulated (as a payload) into the packets whose addresses are significant to the whole of the Internet, and they travel between the two border points of the network through what is metaphorically called a tunnel. Again, the packets themselves are not looked at by the intermediate nodes, because to those nodes the packets are nothing but payload encapsulated in outer packets. Only the endpoints of a tunnel are aware of the payload, which is extracted by and acted on by the destination endpoint. Tunnels are essential for an application called the virtual private network (VPN). With tunneling, for example, the two nodes of a private network that have no direct link between them may use the Internet or another IP network as a link. We will address tunneling systematically as far as security and the use of the existing protocols is concerned. Another essential aspect of tunneling is quality of service (QoS), so we address that issue again when reviewing the multiprotocol label switching (MPLS) technology. As you have probably noticed, we have already ventured into a purely IP area. This is one example where it is virtually impossible to describe a PSTN solution without invoking its IP counterpart.
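As a bare-bones illustration of the encapsulation idea (the dictionaries stand in for real IP headers, and all addresses are made up):

```python
# Toy tunneling: an inner packet with private addresses is wrapped in an
# outer packet with Internet-routable addresses. Intermediate nodes see
# only the outer header; only the tunnel endpoints touch the payload.

def encapsulate(inner_packet, tunnel_src, tunnel_dst):
    return {"src": tunnel_src, "dst": tunnel_dst, "payload": inner_packet}

def decapsulate(outer_packet):
    return outer_packet["payload"]

inner = {"src": "10.0.1.5", "dst": "10.0.2.9", "data": "private traffic"}
outer = encapsulate(inner, "203.0.113.1", "198.51.100.7")  # public endpoints
assert decapsulate(outer) == inner
```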
Going back to the employment of the SS7 gateway, we should note one important technological development: With the SS7 gateway, an ISP can be connected to a LEC as a CLEC.

The Difference between Network Assistance and Network Control


 If you have read the sections on cellular handoff, you'll know that there are broadly two different methods for phone handoffs to occur. The first method, network control, is where the network determines when the phone is to hand off and to which base station the phone is to connect. In this method, the mobile phone may participate by assisting in the handoff process, usually by providing information about the radio environment. The second method, network assistance, is where the network has the ability to provide that assistance, but the mobile phone is fundamentally the device that decides.
For transitions across basic service sets (BSSs) in Wi-Fi, the client is in control, and the network can only assist. Why is this? Wi-Fi's designers made an early decision to break away from the comparatively long history of cellular networking. In the early days of Wi-Fi, each cell was unmanaged. An access point, compared to a client, was thought of as the dumber of the two devices. Although the access point was charged with operating the power saving features (because it is always plugged in), the client was charged with making sure the connection to the network stayed up. If anything goes wrong and a connection drops, the client is responsible for searching out any of the networks it might be configured to connect to, and the network needs to learn about the client only at that point. It makes a fair amount of sense. Cellular networks are managed by service providers, and the force of law prevents people from introducing phones or other devices that are not sanctioned and already known about by the service provider. Therefore, a cell phone could be the slave in the master/slave relationship. On the other hand, because Wi-Fi puts the power of the connection directly into the hands of the client, the network never needs the client to be provisioned beforehand, and any device can connect. In many ways, this fact alone is why Wi-Fi holds its appeal as a networking technology: just connect and go, for guest, employee, or owner.
This initial appeal, and the tremendous simplicity that comes with it, has its downsides, and it is quickly meeting its limitations. Cellular phones, being managed entities, never require the user to understand the nature of the network. There are no SSIDs, no passphrases to enter. The phone knows what it is doing, because it was built and provisioned by the service provider to do only that. It simply connects, and when it doesn't, the screen shows it and users know to drive around until they find more bars. But in Wi-Fi, as long as the handset owns the process of connecting, these other complexities will always exist.
Now, you might have noticed that SSIDs and passwords have to do only with selecting the "service provider" for Wi-Fi, and once the user has that down (which is hopefully only once, so long as the user is not moving into hotspots or other networks), the real problem is with the BSSID, or the actual, distinct identities of each cell. That way of thinking has a lot to it, but misses one point. The Wi-Fi client has no way of knowing that two access points, even with the same SSID, belong to the same "network." In the original Wi-Fi, there is not even a concept of a "network," as the term is never used. Access points exist, and each one is absolutely independent. No two need to know about each other. As long as some Ethernet bridge or switch sits behind a group of them, clients can simply pass from one to the other, with no network coordination. This is what I mean, then, by client control. In this view of the world, there really is no such thing as a handoff. Instead, there is just a disconnection. Perhaps, maybe, the client will decide to reconnect with some access point after it disconnects from the first. Perhaps this connection will even be quick. Or perhaps it will require the user to do something to the phone first. The original standards remain silent, as the phones would have, had the process not been improved a bit.
Network assistance can be added into this wild-west mixture, however. This slight shift in paradigm by the creators of the Wi-Fi and IEEE standards gives the client more information, providing it with ways of knowing that two access points might belong to the same network, share the same backend resources, and even be able to perform some optimizations to reduce the connection overhead. This shift doesn't fundamentally change the nature of the client owning the connection, however. Instead, the client is empowered with increasingly detailed information. Each client, then, is still left to itself to determine what to do and when to do it. It is an article of faith, if you will, that how the client determines what to do is "beyond the scope of the standard," a phrase in the art meaning that client vendors want to do things their own way. The network is just a vessel, a pipe for packets.
You'll find, as you explore voice mobility deployments with Wi-Fi as a leg, that this way of thinking is as much the problem as it is a way to make things simple. Allowing the client to make the choice is putting the steering wheel of the network—or at least, a large portion of the driving task—in the hands of hundreds of different devices, each made by its own manufacturer in its own year, with its own software, and its own applications. The complexity can become overwhelming, and the more successful voice mobility networks find the right combinations of technologies to make that complexity manageable, or perhaps to make it go away entirely.