Monday, May 28, 2007

Trunking and VLAN Identification

Setting up VLANs on a single switch is relatively simple. First you define the different VLANs, and then make ports members of those VLANs. However, when you interconnect or link switches across a network (referred to as trunking), you’ll need a way for switches to know to which VLAN a frame belongs. There are two main types of links between switches, as described below.

Access Link. When a link connects a single VLAN between switches, and no traffic for other VLANs is passed over that link, it is considered an access link. The only traffic that moves across an access link is traffic belonging to the VLAN defined for the ports that are connected.

Trunk Link. If a link connects two switches, and the switches have 2 or more VLANs defined, it wouldn’t make much sense to set up a separate access link for each VLAN. Instead, it would be great if we could have traffic from multiple VLANs move across a single link. If a VLAN identification (frame tagging) technique is used, this is possible. The link is then known as a trunk link.

Consider the figure below, which outlines both access and trunk links.

Figure: Access and Trunk Links.


Remember that switches are always connected together using a crossover cable.

If you remember back to Chapter 2, none of the Ethernet frames we looked at had any field used to identify the VLAN membership of a frame. In order for VLANs to work properly between switches, we’ll need some way to let switches know what VLAN a frame is meant for.

Enter frame tagging. Frame tagging is a technique where additional VLAN identification information is added to a frame. Two main protocols exist for the purpose of Ethernet frame tagging – InterSwitch Link (ISL) and IEEE 802.1Q. Both modify a frame in different ways to add VLAN identifiers. Once implemented, VLAN tagging allows ports on the same VLAN (but on different switches) to communicate as though they were part of a single physical switch.

Adding more information to a frame creates a slight dilemma. Remember that an Ethernet frame has a maximum size of 1518 bytes. How can we add information to a large frame without making it appear oversized and thus invalid to network devices? Well, we need to configure the ports that link switches to use a VLAN identification protocol. When configured with VLAN tagging, a switch port will tag a frame with VLAN information when sending it out a trunk port. This tagging will be stripped away by the switch at the receiving end of the link. In this way, end devices need not be aware that any special framing or tagging took place. It also helps avoid end systems seeing these frames as being invalid. An 802.1Q-tagged frame has a maximum size of 1522 bytes. The figure below illustrates the process by which a frame is tagged to include VLAN identification information. Note that the special tagging is added before the frame leaves the Switch 1 trunk port, and is removed once it enters the trunk port on Switch 2.

Figure: Frame tagging over a trunk link.
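To make the tagging concrete, here is a minimal Python sketch of 802.1Q-style tagging, which is what produces the 1522-byte maximum: a 4-byte tag (16-bit TPID of 0x8100 followed by a 16-bit TCI carrying priority, drop eligibility, and the 12-bit VLAN ID) is inserted into the frame. The specific values used below are illustrative only.

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def build_tag(pcp: int, dei: int, vid: int) -> bytes:
    """Build the 4-byte 802.1Q tag: 16-bit TPID + 16-bit TCI."""
    assert 0 <= vid <= 4094, "valid VLAN IDs are 0-4094"
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", TPID, tci)

def parse_vid(tag: bytes) -> int:
    """Extract the 12-bit VLAN ID from a 4-byte tag."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID, "not an 802.1Q tag"
    return tci & 0x0FFF

tag = build_tag(pcp=0, dei=0, vid=100)
print(parse_vid(tag))    # 100
print(1518 + len(tag))   # 1522: maximum tagged frame size
```

The receiving trunk port would strip these four bytes before forwarding, which is why end stations never see an oversized frame.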

A number of different protocols exist for the purpose of adding VLAN identification to frames. These include:

InterSwitch Link (ISL). ISL is a Cisco proprietary VLAN identification protocol that can be used only on Fast Ethernet and Gigabit Ethernet trunk ports. Because the protocol is proprietary, it can only be used to trunk between Cisco devices. ISL actually re-encapsulates the entire original frame with a new header and a new CRC value.

IEEE 802.1Q. IEEE 802.1Q is the industry-standard method of VLAN identification. This protocol doesn’t entirely re-encapsulate a frame, but instead inserts VLAN identification information into the Ethernet frame. This in turn can make Ethernet frames as large as 1522 bytes. When you want to use VLAN identification on a network that includes equipment from different vendors, 802.1Q should be used.

Dynamic Trunking Protocol (DTP). An enhancement of Cisco’s Dynamic ISL (DISL) protocol, DTP dynamically negotiates both ends of a trunk link to use a common VLAN identification protocol, such as ISL or 802.1q.

FDDI 802.10. While trunking protocols such as ISL are meant to create a trunk link between only two switches, 802.10 encapsulation allows VLAN tagging to be used on a shared FDDI backbone. It does this by adding a 4-byte Security Association Identifier (SAID) field to the FDDI frame header.

ATM LANE. When Ethernet or Token Ring networks connect over ATM, LAN Emulation (LANE) must be used to emulate their native environments (since ATM doesn’t support broadcasts, for example). In cases where VLANs are required over ATM connections, Emulated LANs (ELANs) need to be defined. Each ATM ELAN maps to a single VLAN.

Tip: VLAN tagging methods like ISL allow VLAN membership information to be transported with a frame across trunk links.

Saturday, May 26, 2007

Virtual Router Redundancy Protocol

Virtual Router Redundancy Protocol (VRRP) is a non-proprietary redundancy protocol described in RFC 3768 designed to increase the availability of the default gateway servicing hosts on the same subnet. This increased reliability is achieved by advertising a "virtual router" (an abstract representation of master and backup routers acting as a group) as a default gateway to the host(s) instead of one physical router. Two or more physical routers are then configured to stand for the virtual router, with only one doing the actual routing at any given time. If the current physical router that is routing the data on behalf of the virtual router fails, an arrangement is made for another physical router to automatically replace it. The physical router that is currently forwarding data on behalf of the virtual router is called the master router. Physical routers standing by to take over from the master router in case something goes wrong are called backup routers.

VRRP can be used over Ethernet, MPLS and Token Ring networks. Implementations for IPv6 are in development, but not yet available. The VRRP protocol is more widely implemented than its competitors. Vendors like Extreme Networks, Dell, Nokia, Nortel Networks, Cisco Systems, Allied Telesis, Juniper Networks, Huawei, Foundry Networks, Radware and 3Com Corporation all offer routers and Layer 3 switches that can use the VRRP protocol. VRRP implementations for Linux and BSD are also available.

VRRP is not a routing protocol as it does not advertise IP routes or affect the routing table in any way.

Implementation

A virtual router must use 00-00-5E-00-01-XX as its Media Access Control (MAC) address. The last byte of the address (XX) is the Virtual Router IDentifier (VRID), which is different for each virtual router in the network. This address is used by only one physical router at a time, and is the only way that other physical routers can identify the master router within a virtual router. Physical routers acting as a virtual router must communicate among themselves using packets sent to multicast IP address 224.0.0.18 with IP protocol number 112.
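Because the MAC prefix is fixed, the virtual MAC address is trivially derivable from the VRID. A quick Python sketch:

```python
def vrrp_virtual_mac(vrid: int) -> str:
    """IPv4 VRRP virtual MAC: 00-00-5E-00-01-XX, where XX is the VRID."""
    assert 0 < vrid <= 255, "VRID must fit in one byte"
    return "00-00-5E-00-01-%02X" % vrid

# Addressing constants from the paragraph above.
VRRP_MULTICAST_ADDR = "224.0.0.18"
VRRP_IP_PROTO = 112

print(vrrp_virtual_mac(7))   # 00-00-5E-00-01-07
```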

Master routers have a priority of 255, and backup routers can have a priority between 1 and 254. When a planned withdrawal of a master router is to take place, it changes its priority to zero, which forces a backup router to take over the master role more quickly. This reduces the black hole period.

Elections of master routers

A failure to receive a multicast packet from the master router for a period longer than three times the advertisement timer causes the backup routers to assume that the master router is dead. The virtual router then transitions into an unsteady state and an election process is initiated to select the next master router from the backup routers. This is fulfilled through the use of multicast packets.

Backup routers are only supposed to send multicast packets during an election process. One exception to this rule is when a physical router is configured to always overthrow the current master once it is introduced into the virtual router. This allows a system administrator to force a physical router to the master state immediately after booting, for example when that particular router is more powerful than others within the virtual router or when that particular router uses the least expensive bandwidth. The backup router with the highest priority becomes the master router by raising its priority to 255 and sending Address Resolution Protocol packets with the virtual MAC address and its physical IP address. This redirects the hosts' packets from the failed master router to the current master router. In cases where backup routers all have the same priority, the backup router with the highest IP address becomes the master router.
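The election rule (highest priority wins, ties broken by highest IP address) can be sketched as a simple comparison; the router names and addresses below are made up for illustration:

```python
def elect_master(backups):
    """Pick the next master: highest priority wins,
    ties broken by the highest IP address."""
    return max(backups, key=lambda r: (r["priority"], r["ip"]))

routers = [
    {"name": "B1", "priority": 100, "ip": (192, 168, 1, 2)},
    {"name": "B2", "priority": 100, "ip": (192, 168, 1, 3)},
    {"name": "B3", "priority": 90,  "ip": (192, 168, 1, 9)},
]
print(elect_master(routers)["name"])  # B2: equal priority, higher IP than B1
```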

All physical routers acting as a virtual router must be within one hop of each other. Communication within the virtual router takes place periodically. This period can be adjusted by changing advertisement interval timers. The shorter the advertisement interval, the shorter the black hole period, though at the expense of more traffic in the network. Security is achieved by responding only to first hop packets, though other mechanisms are provided to reinforce this, particularly against local attacks. Some details have been omitted to improve readability. Notable among these is the use of skew time, derived from a router's priority and used to reduce the chance of the thundering herd problem occurring during election.
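The skew time mentioned above is a simple function of priority: RFC 3768 defines Skew_Time as (256 − Priority) / 256 seconds, and the master-down interval as 3 × Advertisement_Interval + Skew_Time. Higher-priority backups thus time out slightly sooner and tend to win the election. A quick sketch:

```python
def skew_time(priority: int) -> float:
    """RFC 3768: Skew_Time = (256 - Priority) / 256 seconds."""
    return (256 - priority) / 256.0

def master_down_interval(adv_interval: float, priority: int) -> float:
    """RFC 3768: 3 * Advertisement_Interval + Skew_Time."""
    return 3 * adv_interval + skew_time(priority)

# Default advertisement interval is 1 second.
print(master_down_interval(1.0, priority=254))  # 3.0078125
print(master_down_interval(1.0, priority=100))  # 3.609375
```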

Backup router utilization can be improved by load sharing. For more on this, see RFC 3768.

History

VRRP is based on the concepts of Cisco's proprietary Hot Standby Router Protocol (HSRP) and is effectively a standardized version of it. The two protocols, while similar in concept, are not compatible. On newer installations, VRRP is therefore recommended, because it is the open standard.

Friday, May 25, 2007

VRRP using VPN 3000 Series Concentrator

Introduction

The Virtual Router Redundancy Protocol (VRRP) eliminates the single point of failure inherent in the static default routed environment. VRRP specifies an election protocol that dynamically assigns responsibility for a virtual router (a VPN 3000 Series Concentrator cluster) to one of the VPN Concentrators on a LAN. The VRRP VPN Concentrator that controls the IP address(es) associated with a virtual router is called the Master, and forwards packets sent to those IP addresses. When the Master becomes unavailable, a backup VPN Concentrator takes the place of the Master.

Note: Refer to "Configuration | System | IP Routing | Redundancy" in the VPN 3000 Concentrator Series User Guide or the online Help for that section of the VPN 3000 Concentrator Manager for complete information on VRRP and how to configure it.

How Does the VPN 3000 Concentrator Implement VRRP?

  1. Redundant VPN Concentrators are identified by group.

  2. A single Master is chosen for the group.

  3. One or more VPN Concentrators can be Backups of the group's Master.

  4. The Master communicates its state to the Backup devices.

  5. If the Master fails to communicate its status, VRRP tries each Backup in order of precedence. The responding Backup assumes the role of Master.

    Note: VRRP enables redundancy for tunnel connections only. Therefore, if a VRRP failover occurs, the backup only listens to tunnel protocols and traffic. Pinging the VPN Concentrator does not work. Participating VPN Concentrators must have identical configurations. The virtual addresses configured for VRRP must match those configured on the interface addresses of the Master.

Configure VRRP

VRRP is configured on the public and private interfaces in this configuration. VRRP applies only to configurations where two or more VPN Concentrators operate in parallel. All participating VPN Concentrators have identical user, group, and LAN-to-LAN settings. If the Master fails, the Backup begins to service traffic formerly handled by the Master. This switchover occurs in 3 to 10 seconds. While IPsec and Point-to-Point Tunnel Protocol (PPTP) client connections are disconnected during this transition, users need only to reconnect without changing the destination address of their connection profile. In a LAN-to-LAN connection, switchover is seamless.

vrrp.gif

This procedure shows how to implement this sample configuration.

On the Master and Backup systems:

  1. Select Configuration > System > IP Routing > Redundancy. Change only these parameters. Leave all other parameters in their default state:

    1. Enter a password (maximum of 8 characters) in the Group Password field.

    2. Enter the IP addresses in the Group Shared Addresses (1 Private) of Master and all Backup systems. For this example, the address is 10.10.10.1.

    3. Enter the IP addresses in the Group Shared Addresses (2 Public) of Master and all Backup systems. For this example, the address is 63.67.72.155.

  2. Go back to the Configuration > System > IP Routing > Redundancy windows on all units and check Enable VRRP.

    Note: If you configured Load Balancing between the two VPN Concentrators before and you are configuring VRRP on them, make sure you take care of the IP address pool configuration. If you use the same IP pool as before, you need to change them. This is necessary because the traffic from one IP pool in a Load Balancing scenario is directed to only one of the VPN Concentrators.

Synchronize the Configurations

This procedure shows how to synchronize the configuration from the Master to the Backup (or, in a load-balancing setup, from the primary to the secondary).

  1. On Master or Primary select Administration > File Management and from the CONFIG row click View.

    vrrp-1.gif

  2. When the web browser opens with the configuration, highlight and copy the configuration (Ctrl+A, Ctrl+C).

  3. Paste the configuration in WordPad.

  4. Select Edit > Replace and enter the public interface IP address of Master or Primary in the Find What field. In the Replace With field, enter the IP address that you plan to assign on the Slave or Backup.

    Do the same for the private IP and the external interface if you have it configured.

  5. Save the file and give it a name that you choose. However, ensure you save it as a "text document" (for example, synconfig.txt).

    Do not save it as .doc (the default) and change the extension later: the file would retain the word-processor formatting, and the VPN Concentrator accepts only plain text.

  6. Go to the Slave or Secondary and select Administration > File Management > File Upload.

    vrrp-2.gif

  7. Enter config.bak in the File on the VPN 3000 Concentrator field and browse to the saved file on your PC (synconfig.txt). Then click Upload.

    The VPN Concentrator uploads it and automatically changes the synconfig.txt to config.bak.

  8. Select Administration > File Management > Swap Configuration Files and click OK to make the VPN Concentrator boot up with the uploaded configuration file.

    vrrp-3.gif

  9. After you are redirected to the System Reboot window, leave the default settings and click Apply.

    vrrp-4.gif

    After it comes up, it has the same configuration as the Master or Primary with the exception of the addresses that you previously changed.

    Note: Do not forget to change the parameters in the Load Balancing or Redundancy (VRRP) window. Select Configuration > System > IP Routing > Redundancy.

    vrrp-6.gif

    Note: Alternatively, select Configuration > System > Load Balancing.

    vrrp-5.gif
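The address substitution in steps 2 through 5 above (copy the Master's config, swap in the Backup's interface addresses, save as plain text) can also be scripted. A minimal Python sketch; the addresses below are hypothetical examples, not values from this configuration:

```python
# Hypothetical Master -> Backup address substitutions (examples only).
replacements = {
    "63.67.72.150": "63.67.72.151",  # public interface
    "10.10.10.2": "10.10.10.3",      # private interface
}

def rewrite_config(text: str) -> str:
    """Apply each interface-address substitution to the copied config text."""
    for old, new in replacements.items():
        text = text.replace(old, new)
    return text

master_cfg = "ipaddress=63.67.72.150\nipaddress=10.10.10.2\n"
backup_cfg = rewrite_config(master_cfg)
# Save backup_cfg as plain text (e.g. synconfig.txt) before uploading.
```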

Source : cisco.com

Thursday, May 24, 2007

Virtual Private LAN Service (VPLS)

Virtual private LAN service (VPLS) is a way to provide Ethernet-based multipoint-to-multipoint communication over IP/MPLS networks. It allows geographically dispersed sites to share an Ethernet broadcast domain by connecting the sites through pseudo-wires. Technologies that can provide the pseudo-wire include Ethernet over MPLS, L2TPv3, or even GRE. There are two IETF standards describing VPLS establishment, currently in Internet Draft status, but expected to be published as RFCs soon.

VPLS is a Virtual Private Network (VPN) technology. In contrast to layer 2 MPLS VPNs or L2TPv3, which allow only point-to-point layer 2 tunnels, VPLS allows any-to-any (multipoint) connectivity.

In a VPLS, the Local Area Network (LAN) at each site is extended to the edge of the provider network. The provider network then emulates a switch or bridge to connect all of the customer LANs to create a single bridged LAN.

Mesh establishment

Since VPLS emulates a LAN, full mesh connectivity is required. There are two methods for full mesh establishment for VPLS: using BGP and using Label Distribution Protocol (LDP). The "control plane" is the means by which Provider Edge (PE) routers communicate for auto-discovery and signaling. Auto-discovery refers to the process of finding other PE routers participating in the same VPN or VPLS. Signaling is the process of establishing pseudo-wires (PW). The PWs constitute the "data plane", whereby PEs send customer VPN/VPLS traffic to other PEs.

With BGP, one gets auto-discovery as well as signaling. The mechanisms used are very similar to those used in establishing Layer-3 MPLS VPNs. Each PE is configured to participate in a given VPLS. The PE, through the use of BGP, simultaneously discovers all other PEs in the same VPLS and establishes a full mesh of pseudo-wires to those PEs.

With LDP, each PE router must be configured to participate in a given VPLS, and, in addition, be given the addresses of other PEs participating in the same VPLS. A full mesh of LDP sessions is then established between these PEs. LDP is then used to create an equivalent mesh of PWs between those PEs.

An advantage of using PWs as the underlying technology for the data plane is that in case of failure, traffic will automatically be routed along available backup paths in the service provider's network. Failover will be much faster than could be achieved with e.g. Spanning Tree Protocol (STP). VPLS is thus a more reliable solution for linking together Ethernet networks in different locations than simply connecting a WAN link to Ethernet switches in both locations.

Label stack

VPLS MPLS packets have a two-label stack. The outer label is used for normal MPLS routing in the service provider's network. If BGP is used to establish the VPLS, the inner label is allocated by a PE as part of a label block. If LDP is used, the inner label is a Virtual Circuit ID (VCID), assigned by LDP when it first establishes a mesh between the participating PEs. Every PE keeps track of its assigned inner labels and associates them with the VPLS instance.
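The life of the two-label stack can be sketched abstractly: the ingress PE pushes both labels, transit routers rewrite only the outer label, and the egress PE pops both, using the inner label to select the VPLS instance. The label values below are arbitrary illustrations:

```python
# Model the label stack as a list; the top of the stack is the end of the list.
def ingress_push(frame: bytes, outer: int, inner: int) -> dict:
    """Ingress PE: push inner (VPLS) label, then outer (transport) label."""
    return {"labels": [inner, outer], "frame": frame}

def transit_swap(pkt: dict, new_outer: int) -> dict:
    """Transit LSR: only the outer label is examined and rewritten."""
    pkt["labels"][-1] = new_outer
    return pkt

def egress_pop(pkt: dict):
    """Egress PE: pop both labels; the inner one selects the VPLS instance."""
    pkt["labels"].pop()              # outer label: already did its routing job
    inner = pkt["labels"].pop()      # inner label: identifies the VPLS
    return inner, pkt["frame"]

pkt = ingress_push(b"ethernet-frame", outer=300, inner=42)
pkt = transit_swap(pkt, new_outer=305)
inner, frame = egress_pop(pkt)
print(inner)   # 42
```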

Ethernet emulation

PEs participating in a VPLS-based VPN must appear as an ethernet bridge to connected CEs. Received ethernet frames must be treated in such a way as to ensure CEs can be simple ethernet devices.

When a PE receives a frame from a CE, it inspects the frame and learns the CE's MAC address, storing it locally along with LSP routing information. It then checks the frame's destination MAC address. If it is a broadcast frame, or the MAC address is not known to the PE, it floods the frame to all PEs in the mesh.

Ethernet does not have a time to live (TTL) field in its frame header, so loop avoidance must be arranged by other means. In regular Ethernet deployments, Spanning Tree Protocol is used for this. In VPLS, loop avoidance is arranged by the following split horizon rule: a PE never forwards a frame received from another PE on to a third PE. The use of a full mesh combined with split horizon forwarding guarantees a loop-free broadcast domain.
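The MAC learning and split horizon rules described above can be combined into a toy forwarding function; the port and device names are made up for illustration:

```python
class VplsPE:
    """Toy PE: learns source MACs and floods unknowns with split horizon."""

    def __init__(self, local_ces, remote_pes):
        self.local_ces = set(local_ces)
        self.remote_pes = set(remote_pes)
        self.mac_table = {}  # MAC -> port (a CE or PE name)

    def forward(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port        # learn the source MAC
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}     # known unicast
        # Unknown or broadcast destination: flood, but never PE-to-PE.
        out = set(self.local_ces)
        if in_port in self.local_ces:
            out |= self.remote_pes               # only CE-originated frames reach PEs
        out.discard(in_port)                     # never send back out the ingress port
        return out

pe = VplsPE(local_ces={"CE1"}, remote_pes={"PE2", "PE3"})
print(sorted(pe.forward("aa:aa", "ff:ff", "CE1")))  # ['PE2', 'PE3']
print(sorted(pe.forward("bb:bb", "cc:cc", "PE2")))  # ['CE1']  (split horizon: not PE3)
```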

Scalability

VPLS is typically used to link a large number of sites together. Scalability is therefore an important issue that needs addressing.

Hierarchical VPLS

VPLS requires a full mesh in both the control and data planes; this can be difficult to scale. For BGP, the control plane scaling issue has long been addressed through the use of route reflectors (RRs). RRs are extensively used in the context of Internet routing, as well as for several types of VPNs. To scale the data plane for multicast and broadcast traffic, there is work in progress to use point-to-multipoint LSPs as the underlying transport.

For LDP, a method of subdividing a VPLS VPN into two or three tiered hierarchical networks was developed. Called Hierarchical VPLS (HVPLS), it introduces a new type of MPLS device: the Multi-Tenant Unit (MTU) switch. This switch aggregates multiple customers into a single PE, which in turn needs only one control and data plane connection into the mesh. This can significantly reduce the number of LDP sessions and LSPs, and thus unburden the core network, by concentrating customers in edge devices.

MAC addresses

Since VPLS links multiple ethernet broadcast domains together, it effectively creates a much larger broadcast domain. Since every PE must keep track of all MAC addresses and associated LSP routing information, this can potentially result in a large amount of memory being needed in every PE in the mesh.

To counter this problem, sites may use a router as the CE device. This hides all MAC addresses on that site behind the CE's MAC address.

PE devices may also be equipped with Content-addressable memory (CAM), similar to high-end ethernet switches.

PE auto-discovery

In a VPLS-based VPN with a large number of sites, manually configuring every participating PE does not scale well. If a new PE is taken into service, every existing PE needs to have its configuration adjusted to establish an LDP session with the new PE. Standardization work is in progress to enable auto-discovery of participating PEs. Two implementations are being worked on:

BGP

The BGP method of PE auto-discovery is based on that used by Layer-3 MPLS VPNs to distribute VPN routes among PEs participating in a VPN. The BGP4 Multi-Protocol (BGP-MP) extensions are used to distribute VPN IDs and VPN-specific reachability information. Since iBGP requires either a full mesh of BGP sessions or the use of a route reflector, enabling the VPN ID in a participating PE's existing BGP configuration provides it with a list of all PEs in that VPN. Note that this method is for auto-discovery alone; LDP is still used for signaling. The method of establishing VPLS with BGP described above accomplishes both auto-discovery and signaling.

RADIUS

This method requires ALL PEs to be configured with one or more RADIUS servers to use. When the first CE router in a particular VPLS VPN connects to the PE, it uses the CE's identification to request authentication from the RADIUS server. This identification may be provided by the CE, or may be configured into the PE for that particular CE. In addition to a username and password, the identification string also contains a VPN name, and an optional provider name.

The RADIUS server keeps track of all PEs that requested authentication for a particular VPN, and returns a list of them to the PE requesting authentication. The PE then establishes LDP sessions to every PE in the list.

Source : wikipedia.org

Wednesday, May 23, 2007

WiMAX Technology

WiMAX is a standards-based technology enabling the delivery of last mile wireless broadband access as an alternative to wired broadband like cable and DSL. WiMAX provides fixed, nomadic, portable and, soon, mobile wireless broadband connectivity without the need for direct line-of-sight with a base station. In a typical cell radius deployment of three to ten kilometers, WiMAX Forum Certified™ systems can be expected to deliver capacity of up to 40 Mbps per channel, for fixed and portable access applications.

This is enough bandwidth to simultaneously support hundreds of businesses with T-1 speed connectivity and thousands of residences with DSL speed connectivity. Mobile network deployments are expected to provide up to 15 Mbps of capacity within a typical cell radius deployment of up to three kilometers. It is expected that WiMAX technology will be incorporated in notebook computers and PDAs by 2007, allowing for urban areas and cities to become “metro zones” for portable outdoor broadband wireless access.

Definitions

The terms "fixed WiMAX", "mobile WiMAX", "802.16d" and "802.16e" are frequently used incorrectly. Correct definitions are:

802.16d

Strictly speaking, 802.16d has never existed as a standard. The standard is correctly called 802.16-2004. However, since this standard is frequently called 802.16d, that usage also takes place in this article to assist readability.

802.16e

Just as 802.16d has never existed, a standard called 802.16e hasn't either. It's an amendment to 802.16-2004, so is not a standard in its own right. It's properly referred to as 802.16e-2005.

Fixed WiMAX

This is a phrase frequently used to refer to systems built using 802.16-2004 as the air interface technology.

Mobile WiMAX

A phrase frequently used to refer to systems built using 802.16e-2005 as the air interface technology. Note that "Mobile WiMAX" implementations are also frequently used to deliver purely fixed services.

Comparison with Wi-Fi

Because WiMAX and Wi-Fi begin with the same two letters, are both based on IEEE 802 standards, and both relate to wireless connectivity and the Internet, the two are frequently compared and confused. Despite this, the two standards are aimed at different applications.

  • WiMAX is a long range (many kilometers) system that uses licensed or unlicensed spectrum to deliver a point-to-point connection to the Internet from an ISP to an end user. Different 802.16 standards provide different types of access, from mobile (analogous to access via a cellphone) to fixed (an alternative to wired access, where the end user's wireless termination point is fixed in location.)
  • Wi-Fi is a shorter range (range is typically measured in hundreds of m) system that uses unlicensed spectrum to provide access to a network, typically covering only the network operator's own property. Typically Wi-Fi is used by an end user to access their own network, which may or may not be connected to the Internet. If WiMAX provides services analogous to a cellphone, Wi-Fi is more analogous to a cordless phone.
  • WiMAX is highly scalable, from what are called 'femto' scale remote stations to multi-sector 'maxi' scale base stations that handle complex tasks of management and mobile handoff functions and include MIMO-AAS smart antenna subsystems.

Due to the ease and low cost with which Wi-Fi can be deployed, it is sometimes used to provide Internet access to third parties within a single room or building available to the provider, sometimes informally, and sometimes as part of a business relationship. For example, many coffee shops, hotels, and transportation hubs contain Wi-Fi access points providing access to the Internet for patrons.

Spectrum Allocations issues

The 802.16 specification applies across a wide swath of the RF spectrum. However, specification is not the same as permission to use. There is no uniform global licensed spectrum for WiMAX. In the US, the biggest segment available is around 2.5 GHz, and is already assigned, primarily to Sprint Nextel and Clearwire. Elsewhere in the world, the most likely bands used will be around 3.5 GHz, 2.3/2.5 GHz, or 5 GHz, with 2.3/2.5 GHz probably being most important in Asia. Some countries in Asia like India, Vietnam and Indonesia will use 3.3 GHz.

There is some prospect in the United States that some of a 700 MHz band might be made available for WiMAX use, but it is currently assigned to analog TV and awaits the complete rollout of digital TV before it can become available, likely by 2009. In any case, there will be other uses suggested for that spectrum when it actually becomes open. The FCC auction for this spectrum is scheduled for the end of 2007.

It seems likely that there will be several variants of 802.16, depending on local regulatory conditions and thus on which spectrum is used, even if everything but the underlying radio frequencies is the same. WiMAX equipment will not, therefore, be as portable as it might have been - perhaps even less so than Wi-Fi, whose assigned channels in unlicensed spectrum vary little from jurisdiction to jurisdiction. Manufacturers are compelled to provide multi-spectrum devices that can be used across different regions and regulatory requirements. However, this is no different from current mobile phones with dual-band, tri-band and even quad-band capabilities, and equipment vendors have already announced the development of multiband subscriber units. WiSOA is an organization that promotes roaming among service providers.

WiMAX profiles define channel size, TDD/FDD duplexing, and other attributes necessary for interoperable products. The current fixed profiles include both TDD and FDD variants; at this point, all of the mobile profiles are TDD only. The fixed profiles use channel sizes of 3.5 MHz, 5 MHz, 7 MHz and 10 MHz; the mobile profiles use 5 MHz and 10 MHz. One significant advantage of WiMAX is spectral efficiency: for example, 802.16-2004 (fixed) has a spectral efficiency of 3.7 bits/Hz, compared to similar technologies such as Wi-Fi, which often achieve less than 1 bit/Hz.
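The spectral-efficiency figure translates directly into per-channel capacity: channel width in MHz multiplied by bits/Hz gives Mbit/s, which is how a 10 MHz channel at 3.7 bits/Hz approaches the "up to 40 Mbps per channel" figure quoted earlier. A one-line sketch:

```python
def channel_capacity_mbps(channel_mhz: float, bits_per_hz: float) -> float:
    """Capacity = channel width (MHz) x spectral efficiency (bits/Hz)."""
    return channel_mhz * bits_per_hz  # MHz * bits/Hz = Mbit/s

print(channel_capacity_mbps(10, 3.7))   # 37.0
print(channel_capacity_mbps(3.5, 3.7))  # a narrower fixed-profile channel
```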

Limitations

A commonly held misconception is that WiMAX will deliver 70 Mbit/s over 30 miles (48 kilometers). Each of these is achievable individually, given ideal circumstances, but they are not simultaneously true. In practice this means that in line-of-sight environments you could deliver symmetrical speeds of 10 Mbit/s at 10 km, but in urban environments it is more likely that 30% of installations will be non-line-of-sight, and those users may only receive 10 Mbit/s over 2 km. WiMAX has some similarities to DSL in this respect: one can have either high bandwidth or long reach, but not both simultaneously.

The other feature to consider with WiMAX is that available bandwidth is shared between users in a given radio sector, so if there are many active users in a single sector, each will get reduced bandwidth. However, unlike SDSL, where contention is very noticeable at a 5:1 ratio (if you are sharing your connection with a large media firm, for example), WiMAX does not have this problem: typically each cell has a full 100 Mbit/s backhaul, so there is no contention there. In practice, many users will have a range of 2-, 4-, 6-, 8- or 10-Mbit/s services, and the bandwidth can be shared. If the network becomes busy, the business model is more like GSM or UMTS than DSL: it is easy to predict capacity requirements as you add customers, and additional radio cards can be added on the same sector to increase capacity.

Associations

WiMAX Forum


The WiMAX Forum is the organization dedicated to certifying the interoperability of WiMAX products. Those that pass conformance and interoperability testing achieve the "WiMAX Forum Certified" designation and can display this mark on their products and marketing materials. Some vendors claim that their equipment is "WiMAX-ready", "WiMAX-compliant", or "pre-WiMAX", if they are not officially WiMAX Forum Certified.

WiMAX Spectrum Owners Alliance - WiSOA


WiSOA is the first global organization composed exclusively of owners of WiMAX spectrum with plans to deploy WiMAX technology in those bands. WiSOA is focused on the regulation, commercialisation, and deployment of WiMAX spectrum in the 2.3–2.5 GHz and the 3.4–3.5 GHz ranges. WiSOA is dedicated to educating and informing its members, industry representatives and government regulators of the importance of WiMAX spectrum, its use, and the potential for WiMAX to revolutionise broadband.


Tuesday, May 22, 2007

Simple Network Management Protocol (SNMP)

The Simple Network Management Protocol (SNMP) forms part of the Internet protocol suite as defined by the Internet Engineering Task Force (IETF). SNMP is used by network management systems to monitor network-attached devices for conditions that warrant administrative attention. It consists of a set of standards for network management, including an Application Layer protocol, a database schema, and a set of data objects.

SNMP exposes management data in the form of variables on the managed systems, which describe the system configuration. These variables can then be queried (and sometimes set) by managing applications.

Overview and Basic Concepts

In typical SNMP usage, there are generally a number of systems to be managed, and one or more systems managing them. A software component called an agent (see below) runs on each managed system and reports information via SNMP to the managing systems.

Essentially, SNMP agents expose management data on the managed systems as variables (such as "free memory", "system name", "number of running processes", "default route"). The managing system can retrieve the information through the GET, GETNEXT and GETBULK protocol operations, or the agent will send data without being asked using the TRAP or INFORM protocol operations. Management systems can also send configuration updates or controlling requests through the SET protocol operation to actively manage a system. Configuration and control operations are used only when changes are needed to the network infrastructure, while the monitoring operations are frequently performed on a regular basis.

The variables accessible via SNMP are organized in hierarchies. These hierarchies, and other metadata, are described by Management Information Bases (MIBs).

Management Information Bases (MIBs)

The SNMP's extensible design is achieved with management information bases (MIBs), which specify the management data of a device subsystem, using a hierarchical namespace containing object identifiers, implemented via ASN.1. The MIB hierarchy can be depicted as a tree with a nameless root, the levels of which are assigned by different organizations. The top-level MIB object IDs belong to different standards organizations, while lower-level object IDs are allocated by associated organizations. This model permits management across all layers of the OSI reference model, extending into applications such as databases, email, and the Java EE reference model, as MIBs can be defined for all such area-specific information and operations.

A MIB is a collection of information that is organized hierarchically. MIBs are accessed using a network-management protocol such as SNMP. They comprise managed objects and are identified by object identifiers.

A managed object (sometimes called a MIB object, an object, or a MIB) is one of any number of specific characteristics of a managed device. Managed objects comprise one or more object instances, which are essentially variables.

Two types of managed objects exist:

  1. Scalar objects define a single object instance.
  2. Tabular objects define multiple related object instances that are grouped in MIB tables.

An example of a managed object is atInput, which is a scalar object that contains a single object instance, the integer value that indicates the total number of input AppleTalk packets on a router interface.

An object identifier (or object ID or OID) uniquely identifies a managed object in the MIB hierarchy.
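OIDs are written in dotted notation (e.g. 1.3.6.1.2.1.1.5.0 for sysName.0), but SNMP transmits them in ASN.1 BER form: the first two arcs are folded into a single octet as 40*X + Y, and each remaining arc is written base-128 with the high bit set on all but its last octet. A minimal sketch (content octets only, tag and length omitted for brevity):

```python
def encode_oid(arcs):
    """BER-encode an OID's content octets.

    The first two arcs are packed into one octet as 40*X + Y; every later
    arc is written base-128, high bit set on all but the final octet.
    """
    out = [40 * arcs[0] + arcs[1]]
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]
        arc >>= 7
        while arc:
            chunk.append(0x80 | (arc & 0x7F))
            arc >>= 7
        out.extend(reversed(chunk))
    return bytes(out)

# sysName.0 from the standard MIB-II subtree:
print(encode_oid((1, 3, 6, 1, 2, 1, 1, 5, 0)).hex())  # prints '2b06010201010500'
# An arc over 127 needs multiple octets (311 here is just an example value):
print(encode_oid((1, 3, 6, 1, 4, 1, 311)).hex())      # 311 encodes as 0x82 0x37
```

Note the leading 0x2b octet: 40*1 + 3 = 43, which is why every ISO/identified-organization OID starts with `2b` on the wire.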


Abstract Syntax Notation One (ASN.1)

In telecommunications and computer networking, Abstract Syntax Notation One (ASN.1) is a standard and flexible notation that describes data structures for representing, encoding, transmitting, and decoding data. It provides a set of formal rules for describing the structure of objects that are independent of machine-specific encoding techniques and is a precise, formal notation that removes ambiguities.

ASN.1 is a joint ISO and ITU-T standard, originally defined in 1984 as part of CCITT X.409:1984. ASN.1 moved to its own standard, X.208, in 1988 due to wide applicability. The substantially revised 1995 version is covered by the X.680 series.

An adapted subset of ASN.1, Structure of Management Information (SMI), is specified in SNMP to define sets of related MIB objects; these sets are termed MIB modules.


SNMP Basic Components

An SNMP-managed network consists of three key components:

  1. Managed devices
  2. Agents
  3. Network-management systems (NMSs)

A managed device is a network node that contains an SNMP agent and that resides on a managed network. Managed devices collect and store management information and make this information available to NMSs using SNMP. Managed devices, sometimes called network elements, can be routers and access servers, switches and bridges, hubs, computer hosts, or printers.

An agent is a network-management software module that resides in a managed device. An agent has local knowledge of management information and translates that information into a form compatible with SNMP.

An NMS executes applications that monitor and control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs must exist on any managed network.

Architecture

The SNMP framework consists of master agents, subagents and management stations.

Master Agent

A master agent is a piece of software running on an SNMP-capable network component (for example a router) that responds to SNMP requests from the management station. Thus it acts as a server in client-server architecture terminology, or as a daemon in operating system terminology. A master agent relies on subagents to provide information about the management of specific functionality.

Master agents can also be referred to as managed objects.

Subagent

A subagent is a piece of software running on an SNMP-capable network component that implements the information and management functionality defined by a specific MIB of a specific subsystem, for example the Ethernet link layer. Some capabilities of the subagent are:

  • Gathering information from managed objects
  • Configuring parameters of the managed objects
  • Responding to managers' requests
  • Generating alarms or traps

Management Station

The manager or management station is the final component in the SNMP architecture. It functions as the equivalent of a client in the client-server architecture. It issues requests for management operations on behalf of an administrator or application and receives traps from agents as well.

The SNMP protocol

SNMPv1 and ASN.1 Data Types

The SNMPv1 SMI specifies that all managed objects have a certain subset of Abstract Syntax Notation One (ASN.1) data types associated with them. Three ASN.1 data types are required:

  1. The name serves as the object identifier (object ID).
  2. The syntax defines the data type of the object (for example, integer or string). The SMI uses a subset of the ASN.1 syntax definitions.
  3. The encoding data describes how information associated with a managed object is formatted as a series of data items for transmission over the network.

SNMPv1 and SMI-Specific Data Types

The SNMPv1 SMI specifies the use of a number of SMI-specific data types, which are divided into two categories:

  1. Simple data types
  2. Application-wide data types.

Three simple data types are defined in the SNMPv1 SMI, all of which are unique values:

  1. The integer data type is a signed integer in the range of -2,147,483,648 to 2,147,483,647.
  2. Octet strings are ordered sequences of 0 to 65,535 octets.
  3. Object IDs come from the set of all object identifiers allocated according to the rules specified in ASN.1.

Seven application-wide data types exist in the SNMPv1 SMI: network addresses, counters, gauges, time ticks, opaques, integers, and unsigned integers.

  1. Network addresses represent an address from a particular protocol family. SNMPv1 supports only 32-bit IP addresses.
  2. Counters are non-negative integers that increase until they reach a maximum value and then return to zero. In SNMPv1, a 32-bit counter size is specified.
  3. Gauges are non-negative integers that can increase or decrease but that latch at a maximum value: if the underlying value exceeds the gauge's maximum, the gauge reports the maximum until the value falls back below it (as clarified in RFC 2578).
  4. A time tick represents a hundredth of a second since some event.
  5. An opaque represents an arbitrary encoding that is used to pass arbitrary information strings that do not conform to the strict data typing used by the SMI.
  6. An integer represents signed integer-valued information. This data type redefines the integer data type, which has arbitrary precision in ASN.1 but bounded precision in the SMI.
  7. An unsigned integer represents unsigned integer-valued information and is useful when values are always non-negative. This data type redefines the integer data type, which has arbitrary precision in ASN.1 but bounded precision in the SMI.

SNMPv1 MIB Tables

The SNMPv1 SMI defines highly structured tables that are used to group the instances of a tabular object (that is, an object that contains multiple variables). Tables are composed of zero or more rows, which are indexed in a way that allows SNMP to retrieve or alter an entire row with a single Get, GetNext, or Set command.
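Row retrieval via GetNext works because instance OIDs are totally ordered: the agent returns the first OID lexicographically greater than the one requested, so repeating GetNext walks a table column row by row. A toy, in-memory sketch (the interface names are made up for the example; a real agent serves this from its MIB instrumentation):

```python
# Managed object instances keyed by OID tuple. Python tuples compare
# lexicographically, which matches SNMP's OID ordering.
MIB = {
    (1, 3, 6, 1, 2, 1, 1, 5, 0): "router1",        # sysName.0
    (1, 3, 6, 1, 2, 1, 2, 2, 1, 2, 1): "eth0",     # ifDescr.1 (hypothetical)
    (1, 3, 6, 1, 2, 1, 2, 2, 1, 2, 2): "eth1",     # ifDescr.2 (hypothetical)
}

def get_next(oid):
    """Return (oid, value) of the next instance, or None at end of MIB."""
    for candidate in sorted(MIB):
        if candidate > oid:
            return candidate, MIB[candidate]
    return None

# Walking from the ifDescr column OID (no instance suffix) visits each row:
oid = (1, 3, 6, 1, 2, 1, 2, 2, 1, 2)
while (nxt := get_next(oid)) is not None:
    oid, value = nxt
    print(oid, value)
```

The walk stops naturally when `get_next` runs past the last instance, which is exactly how a manager detects the end of a table.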

SNMPv2 and Structure of Management Information

The SNMPv2 SMI is described in RFC 2578. It makes certain additions and enhancements to the SNMPv1 SMI-specific data types, including bit strings, network addresses, and counters. Bit strings are defined only in SNMPv2 and comprise zero or more named bits that specify a value. Network addresses represent an address from a particular protocol family; SNMPv1 supports only 32-bit IP addresses, but SNMPv2 can support other types of addresses as well. Counters are non-negative integers that increase until they reach a maximum value and then return to zero. SNMPv1 specifies only a 32-bit counter size, while SNMPv2 defines both 32-bit and 64-bit counters.
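Because counters wrap to zero at their maximum, a manager computing a rate from two polls must take the difference modulo the counter width. A small sketch, assuming at most one wrap between polls (the usual assumption when the poll interval is short enough):

```python
def counter_delta(old, new, bits=32):
    """Difference between two SNMP counter samples, allowing one wrap.

    Counters increase monotonically and roll over to zero after
    2**bits - 1, so a "smaller" new sample implies a single wrap.
    Modular subtraction handles both cases uniformly.
    """
    modulus = 1 << bits
    return (new - old) % modulus

print(counter_delta(1000, 1500))             # 500, no wrap
print(counter_delta(2**32 - 10, 5))          # 15, counter wrapped once
print(counter_delta(2**64 - 1, 0, bits=64))  # 1, 64-bit counter
```

This is one practical reason SNMPv2's 64-bit counters matter: a 32-bit octet counter on a gigabit link can wrap in under a minute, forcing very frequent polling to keep the one-wrap assumption valid.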

The SNMP protocol operates at the application layer (layer 7) of the OSI model. It specifies (in version 1) five core protocol data units (PDUs):

  1. GET REQUEST - used to retrieve a piece of management information.
  2. GETNEXT REQUEST - used iteratively to retrieve sequences of management information.
  3. GET RESPONSE - used by the agent to respond with data to get and set requests from the manager.
  4. SET REQUEST - used to initialise and make a change to a value of the network element.
  5. TRAP - used to report an alert or other asynchronous event about a managed subsystem. In SNMPv1, asynchronous event reports are called traps while they are called notifications in later versions of SNMP. In SMIv1 MIB modules, traps are defined using the TRAP-TYPE macro; in SMIv2 MIB modules, traps are defined using the NOTIFICATION-TYPE macro.

Other PDUs were added in later versions, including:

  1. GETBULK REQUEST - a faster iterator used to retrieve sequences of management information.
  2. INFORM - an acknowledged trap.

Typically, SNMP uses UDP ports 161 for the agent and 162 for the manager. The Manager may send Requests from any available ports (source port) to port 161 in the agent (destination port). The agent response will be given back to the source port. The Manager will receive traps on port 162. The agent may generate traps from any available port.

Many implementations change these defaults, however, so these port assignments are not guaranteed.
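The PDU structure and BER framing described above can be made concrete with a minimal SNMPv1 GET REQUEST encoder. This is an illustrative sketch in pure Python; the helper names are our own, and a real deployment would use a library such as Net-SNMP or pysnmp rather than hand-rolled BER. Nothing is sent on the network here.

```python
def tlv(tag, payload):
    """Wrap payload in a BER tag-length-value triple (short/long form length)."""
    n = len(payload)
    if n < 0x80:
        length = bytes([n])
    else:
        lb = n.to_bytes((n.bit_length() + 7) // 8, "big")
        length = bytes([0x80 | len(lb)]) + lb
    return bytes([tag]) + length + payload

def ber_int(v):
    """Non-negative INTEGER, padded so the sign bit stays clear."""
    return tlv(0x02, v.to_bytes(max(1, (v.bit_length() + 8) // 8), "big"))

def ber_oid(arcs):
    """OBJECT IDENTIFIER: first two arcs folded, rest in base-128."""
    body = [40 * arcs[0] + arcs[1]]
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]
        arc >>= 7
        while arc:
            chunk.append(0x80 | (arc & 0x7F))
            arc >>= 7
        body.extend(reversed(chunk))
    return tlv(0x06, bytes(body))

def get_request(community, oid, request_id=1):
    """SNMPv1 GetRequest: version 0, community string, one varbind."""
    varbind = tlv(0x30, ber_oid(oid) + tlv(0x05, b""))  # OID + NULL value
    pdu = tlv(0xA0, ber_int(request_id) + ber_int(0) + ber_int(0)
              + tlv(0x30, varbind))  # request-id, error-status, error-index
    return tlv(0x30, ber_int(0) + tlv(0x04, community.encode()) + pdu)

msg = get_request("public", (1, 3, 6, 1, 2, 1, 1, 5, 0))  # sysName.0
print(len(msg), msg.hex())
# This 40-byte datagram is what a manager would send to UDP port 161.
```

The agent's GET RESPONSE comes back with the same structure, PDU tag 0xA2 instead of 0xA0 and the NULL value replaced by the requested data.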

SNMPv2 SMI Information Modules

The SNMPv2 SMI also specifies information modules, which specify a group of related definitions. Three types of SMI information modules exist: MIB modules, compliance statements, and capability statements.

  • MIB modules contain definitions of interrelated managed objects.
  • Compliance statements provide a systematic way to describe a group of managed objects that must be implemented for conformance to a standard.
  • Capability statements are used to indicate the precise level of support that an agent claims with respect to a MIB group. An NMS can adjust its behavior toward agents according to the capabilities statements associated with each agent.

SNMPv3

Simple Network Management Protocol version 3 is defined by RFC 3411 through RFC 3418 (also known as 'STD0062'). SNMPv3 primarily added security and remote configuration enhancements to SNMP, and is the current standard version of SNMP as of 2004; the IETF considers earlier versions "Obsolete" or "Historical". In December 1997 the "Simple Times" newsletter published several articles written by the SNMPv3 RFC editors explaining some of the ideas behind the version 3 specifications.

SNMPv3 provides important security features:

  • Message integrity to ensure that a packet has not been tampered with in transit.
  • Authentication to verify that the message is from a valid source.
  • Encryption of packets to prevent snooping by an unauthorized source.

Monday, May 21, 2007

Hot Standby Router Protocol (HSRP)

The Hot Standby Router Protocol, HSRP, provides a mechanism which is designed to support non-disruptive failover of IP traffic in certain circumstances. In particular, the protocol protects against the failure of the first hop router when the source host cannot learn the IP address of the first hop router dynamically. The protocol is designed for use over multi-access, multicast or broadcast capable LANs (e.g., Ethernet). HSRP is not intended as a replacement for existing dynamic router discovery mechanisms and those protocols should be used instead whenever possible. A large class of legacy host implementations that do not support dynamic discovery are capable of configuring a default router. HSRP provides failover services to those hosts.

Using HSRP, a set of routers work in concert to present the illusion of a single virtual router to the hosts on the LAN. This set is known as an HSRP group or a standby group. A single router elected from the group is responsible for forwarding the packets that hosts send to the virtual router. This router is known as the active router. Another router is elected as the standby router. In the event that the active router fails, the standby assumes the packet forwarding duties of the active router. Although an arbitrary number of routers may run HSRP, only the active router forwards the packets sent to the virtual router.

To minimize network traffic, only the active and the standby routers send periodic HSRP messages once the protocol has completed the election process. If the active router fails, the standby router takes over as the active router. If the standby router fails or becomes the active router, another router is elected as the standby router.

On a particular LAN, multiple hot standby groups may coexist and overlap. Each standby group emulates a single virtual router. For each standby group, a single well-known MAC address is allocated to the group, as well as an IP address. The IP address SHOULD belong to the primary subnet in use on the LAN, but MUST differ from the addresses allocated as interface addresses on all routers and hosts on the LAN, including virtual IP addresses assigned to other HSRP groups.

If multiple groups are used on a single LAN, load splitting can be achieved by distributing hosts among different standby groups.

MAC header | IP header | UDP header | HSRP packet

HSRP packet:

 0              8              16             24            31
+--------------+--------------+--------------+--------------+
|   Version    |    Opcode    |    State     |  Hellotime   |
+--------------+--------------+--------------+--------------+
|   Holdtime   |   Priority   |    Group     |   Reserved   |
+--------------+--------------+--------------+--------------+
|              Authentication Data (8 bytes)                |
+-----------------------------------------------------------+
|              Virtual IP Address (4 bytes)                 |
+-----------------------------------------------------------+

Version. 8 bits.
HSRP version number.

Opcode. 8 bits.

Opcode  Description
0 Hello. The router is running and is capable of becoming the active or standby router.
1 Coup. The router wishes to become the active router.
2 Resign. The router no longer wishes to be the active router.

State. 8 bits.
This field describes the current state of the router sending the message.

State  Description
0 Initial. This is the starting state and indicates that HSRP is not running. This state is entered via a configuration change or when an interface first comes up.
1 Learn. The router has not determined the virtual IP address and has not yet seen an authenticated Hello message from the active router. In this state the router is still waiting to hear from the active router.
2 Listen. The router knows the virtual IP address, but is neither the active router nor the standby router. It listens for Hello messages from those routers.
4 Speak. The router sends periodic Hello messages and is actively participating in the election of the active and/or standby router. A router cannot enter Speak state unless it has the virtual IP address.
8 Standby. The router is a candidate to become the next active router and sends periodic Hello messages. Excluding transient conditions, there MUST be at most one router in the group in Standby state.
16 Active. The router is currently forwarding packets that are sent to the group's virtual MAC address. The router sends periodic Hello messages. Excluding transient conditions, there MUST be at most one router in Active state in the group.

Hellotime. 8 bits. Default = 3 seconds.
This field is only meaningful in Hello messages. It contains the approximate period between the Hello messages that the router sends. The time is given in seconds. If the Hellotime is not configured on a router, then it MAY be learned from the Hello message from the active router. The Hellotime SHOULD only be learned if no Hellotime is configured and the Hello message is authenticated. A router that sends a Hello message MUST insert the Hellotime that it is using in the Hellotime field in the Hello message.

Holdtime. 8 bits. Default = 10 seconds.
This field is only meaningful in Hello messages. It contains the amount of time that the current Hello message should be considered valid. The time is given in seconds. If a router sends a Hello message, then receivers should consider that Hello message to be valid for one Holdtime. The Holdtime SHOULD be at least three times the value of the Hellotime and MUST be greater than the Hellotime. If the Holdtime is not configured on a router, then it MAY be learned from the Hello message from the active router. The Holdtime SHOULD only be learned if the Hello message is authenticated. A router that sends a Hello message MUST insert the Holdtime that it is using in the Holdtime field in the Hello message. A router which is in active state MUST NOT learn new values for the Hellotime and the Holdtime from other routers, although it may continue to use values which it learned from the previous active router. It MAY also use the Hellotime and Holdtime values learned through manual configuration. The active router MUST NOT use one configured time and one learned time.

Priority. 8 bits.
This field is used to elect the active and standby routers. When comparing priorities of two different routers, the router with the numerically higher priority wins. In the case of routers with equal priority the router with the higher IP address wins.
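The election rule above (higher priority wins, higher IP address breaks ties) reduces to comparing (priority, IP) pairs. A small illustrative sketch; the router names and addresses are invented for the example:

```python
import ipaddress

def elect_active(routers):
    """routers: list of (name, priority, ip_string) tuples.

    Returns the name of the router that wins the HSRP election:
    numerically higher priority first, higher IP address as tiebreaker.
    """
    return max(
        routers,
        key=lambda r: (r[1], int(ipaddress.IPv4Address(r[2]))),
    )[0]

group = [
    ("R1", 100, "10.0.0.1"),
    ("R2", 110, "10.0.0.2"),   # highest priority -> becomes active
    ("R3", 100, "10.0.0.3"),
]
print(elect_active(group))                               # R2
# With R2 removed, R1 and R3 tie on priority; the higher IP wins:
print(elect_active([r for r in group if r[0] != "R2"]))  # R3
```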

Group. 8 bits.
This field identifies the standby group. For Token Ring, values between 0 and 2 inclusive are valid. For other media values between 0 and 255 inclusive are valid.

Reserved. 8 bits.

Authentication Data. 8 bytes.
This field contains a clear text 8 character reused password. If no authentication data is configured, the RECOMMENDED default value is 0x63 0x69 0x73 0x63 0x6F 0x00 0x00 0x00.

Virtual IP Address. 32 bits.
The virtual IP address used by this group. If the virtual IP address is not configured on a router, then it MAY be learned from the Hello message from the active router. An address SHOULD only be learned if no address was configured and the Hello message is authenticated.
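The fixed 20-byte layout above maps directly onto a struct pack. A sketch of building a Hello packet, using the defaults from the field descriptions (3-second Hellotime, 10-second Holdtime, the recommended "cisco" authentication string padded with NULs); the group, priority and virtual IP values are illustrative:

```python
import socket
import struct

def hsrp_hello(group, priority, virtual_ip,
               hellotime=3, holdtime=10, state=16,
               auth=b"cisco\x00\x00\x00"):
    """Pack the 20-byte HSRP packet: eight 1-byte fields, then the
    8-byte clear-text Authentication Data and 4-byte Virtual IP Address."""
    return struct.pack(
        "!8B8s4s",
        0,            # Version
        0,            # Opcode: 0 = Hello
        state,        # State: 16 = Active
        hellotime,    # seconds between Hellos
        holdtime,     # seconds a Hello stays valid
        priority,
        group,
        0,            # Reserved
        auth,                          # default password 63 69 73 63 6F 00 00 00
        socket.inet_aton(virtual_ip),  # Virtual IP Address
    )

pkt = hsrp_hello(group=1, priority=110, virtual_ip="10.0.0.254")
print(len(pkt), pkt.hex())  # 20-byte HSRP payload, carried over UDP
```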

Source : http://www.networksorcery.com

Sunday, May 20, 2007

GEPON For Today: Key Drivers for Deploying Ethernet at Gigabit Speeds

by Bill Huang, CTO and Senior VP of Engineering

Whatever happened to Token Ring? Archived e-mail list threads from the late '80s, when telcos adopted LAN technologies en masse, proclaim, "Token Ring is a clear technology winner....Token Ring will trickle down to eventually supplant Ethernet..." Many bet in favor of ARCnet, AppleTalk, FDDI, and other technologies against Ethernet also-and lost time and time again. It makes sense then that Ethernet should be the basis for the next wave of network technology-fiber optic access deployments that combine the benefits of Passive Optical Network (PON) architectures with the staying power of Ethernet. And Gigabit Ethernet PON (GEPON) is emerging as the technology of choice for carriers looking to deploy high-speed, high-density fiber networks today.

Asian Adoption

Ethernet's suitability in the access network, specifically via GEPON, is evidenced by its rapid and accelerating uptake in Asia. Asian carriers are skipping, or have already skipped, the intermediate-and expensive-step of deploying large-scale ATM-based networks. According to research firm Dell'Oro Group, worldwide Ethernet-based PON sales fell just short of $200 million, or roughly 90 percent of total PON sales, in the second quarter of 2005. Japan currently has the largest PON subscriber base in the world. Carrier after carrier in Asia is moving directly to Ethernet, IP, and softswitch technologies to support Internet access, Voice Over IP (VOIP), and video applications. U.S.-based carriers can learn from their Asian counterparts' successes with GEPON, which is readily available and provides cost-effective bandwidth for fiber-to-the-home, fiber-to-the-node, or fiber-to-the-business applications.

It's Not a Horse Race

Everyone appreciates a good horse race-and that tends to affect much of today's analysis of PON technologies. However, those looking to directly compare GEPON to the nascent Gigabit PON (GPON) standard on a feature-by-feature basis are missing the point. GEPON was never intended to provide native TDM transport, native ATM transport, or overlay wavelengths for video. Rooted in the international standard 802.3ah for Ethernet in the last mile written by the IEEE, GEPON provides the simplicity of Ethernet at Gigabit speeds. The question of deploying GEPON is not one of "this versus that" but rather a question of which technology can most cost effectively support the demands of the applications that carriers need to deploy today and moving forward. Increasingly, carriers are finding that GEPON is that technology.

North American carriers determined years ago that PON architectures make sense. PONs combine the high-bandwidth capacity of fiber with the scalability of point-to-multipoint network topologies. Point-to-point models, like metro Ethernet, have proven to be too costly to scale due to the number of transceivers required in the network and the resulting complexity of management. In addition, PONs save service providers money on the metro/core side of the network. High densities of point-to-multipoint connections on the line side enable more consolidated aggregation on the trunk side by reducing the number of transceivers.

In addition, Ethernet-based PONs provide compound savings. The global adoption of Ethernet for the desktop, for the LAN, and increasingly for the metro area continues to drive Ethernet component costs down. Whereas early ATM-based PON deployments factored out several thousand dollars per subscriber, carriers can now deploy GEPON for well under $500 per subscriber. Lower costs, in turn, reduce the overall capital expenditure a carrier incurs when transitioning to a high-bandwidth PON architecture. Lower costs also accelerate return on investment.

VOIP and IPTV Are Key GEPON Drivers

GEPON technology also represents a forward-looking investment for today's carrier. Several factors are driving voice traffic away from traditional TDM networks and toward VoIP networks. Cable TV giant Comcast announced plans to introduce VoIP service to all of its customers by the end of 2005, prompting competitive responses from the other leading telephone carriers. The regulatory environment in the U.S. continues to be favorable for VoIP, and the traditional technical challenges of reliability and quality of service for VoIP have long been resolved.

Enterprise VoIP deployments are accelerating; Dell'Oro Group forecasts IP PBX shipments at 28 million lines in 2006, exceeding the number of TDM lines shipped. VoIP runs over IP and Ethernet in the metro and core-carriers have already made substantial investments in IP/Ethernet technology to support Internet services. Plus, VoIP originates in native IP and Ethernet at the subscriber terminal, so the need for carriers to deploy PON systems that support lower-layer technologies beyond Ethernet is diminishing. The cable operators' use of VoIP to enter the voice market is prompting traditional telephone operators to use IPTV to enter the video market. As the name implies, IPTV can share a common core, metro, and access network with VoIP. GEPON is thus a smart investment carriers need to make today in anticipation of the continued growth and acceptance of VoIP and IPTV services.

Fiber to the Home

Video, in the context of IPTV, is the premier application for Fiber to the Home (FTTH) and is well served by GEPON technology. Fiber optic local loops built around a PON architecture are currently the best access technology for providing homes with competitive video offerings, high-speed Internet access, and VoIP. This architecture also provides a path to higher per-home bit rates in the future.

A GEPON system with 32 splits can provide 30 Mbps of symmetric bandwidth to each subscriber-more than enough to support bandwidth-hungry video applications as well as voice and data. Even with three high-definition video streams per household, each 6-7 Mbps (or 18-21 Mbps for all three), 30 Mbps leaves plenty of headroom for VoIP and Internet access. The bandwidth budget for VoIP is typically 64K per stream, while high-speed Internet access is tiered at 128 Kbps, 384 Kbps, 512 Kbps, and 1 Mbps downstream (Internet video is migrating from the PC to the television, and thus is already accounted for in the video budget). The total bandwidth needed for all three services comes to less than 25 Mbps. GEPON then is the ideal access technology to support triple-play services today while allowing room for future growth.
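The budget arithmetic above can be tallied explicitly. All per-service figures come from the text; the assumption of two simultaneous voice streams per household is ours, for illustration:

```python
# Triple-play bandwidth budget, in Mbit/s, per the article's figures.
HD_STREAM = 7.0          # upper end of the 6-7 Mbit/s HD range
VOIP_STREAM = 0.064      # 64 kbit/s per voice stream
INTERNET_TIER = 1.0      # top consumer tier cited (1 Mbit/s downstream)
SUBSCRIBER_SHARE = 30.0  # symmetric share on a 32-split GEPON

# Three HD streams, two voice lines (our assumption), one Internet tier:
budget = 3 * HD_STREAM + 2 * VOIP_STREAM + INTERNET_TIER
print(round(budget, 3))                 # 22.128 Mbit/s
print(budget < 25 < SUBSCRIBER_SHARE)   # True: under the article's 25 Mbit/s total
```

Even at the top of every range, the total stays comfortably below the 30 Mbit/s share, which is the headroom argument the article is making.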

Fiber to the Node

In some networks, rapid triple-play turn-up will demand that the operator make use of existing copper, rather than waiting for fiber to be trenched. GEPON has a role to play here as well. A new generation of DSLAM-an Outside Plant (OSP) DSLAM-is already being deployed by many carriers to shorten the length of the copper loop and thereby achieve higher bandwidth from in-situ copper. These OSP DSLAMs will take advantage of the cost savings of PON by implementing a GEPON Optical Network Unit (ONU) function on the DSLAM for the uplink.

This hybrid PON-and-copper Fiber to the Node (FTTN) architecture takes advantage of the cost benefits of PON and the rapid deployment of proven DSLAM technology using in-ground copper plant. Plus, the architecture can support all the same applications-IPTV, VoIP, and Internet access-as FTTH. In addition, FTTN makes an excellent transitional path; if at a later date bandwidth requirements exceed copper access capacity, the cost of further expanding fiber reach all the way to the subscriber's home is greatly reduced.

Fiber to the Business

With IP PBX line shipments expected to soon outstrip legacy TDM PBX shipments, enterprise VoIP represents the next frontier for GEPON-Fiber to the Business (FTTB). Corporate CIOs are discovering the cost savings of VOIP, which comes with little or no sacrifice in voice quality or reliability. Videoconferencing too has a natural synergy with GEPON, taking advantage of the 1 Gbps symmetric line rate. With GEPON, CIOs can finally collapse their individual T1s for PBX, corporate WAN, and Internet access, as well as their legacy ISDN BRIs for corporate videoconferencing, onto an optical Ethernet link provided by a cutting-edge service provider. Furthermore, with Ethernet-based access technologies, CIOs may finally get what they have longed for-scalable bandwidth in the 1.544 Mbps to 45 Mbps range at an affordable price, which is not available on TDM networks.

Ethernet Speed

In 1988, technology pundits were clamoring for 50 Mbps LAN connections to each workstation. Today, 100 Mbps to the desktop is the norm. A few years ago, the 10 Gbps fiber-only Ethernet standard was ratified; now chipmakers are busy at work on 40 Gbps Ethernet. Some would describe these advances as traveling at "Internet speed" but it would be more accurate to describe it as "Ethernet speed." The elegance and extensibility of Ethernet made these high bit rates, and the applications that ride on top, possible. PON access systems provide the ideal path for U.S. carriers to increase bandwidth and services to customers, and when they are combined with Ethernet, as in GEPON, they offer truly scalable, cost-effective service platforms with staying power. U.S. operators that deploy GEPON today will be well prepared for the future.