Faraday shield at a power plant in Heimbach, Germany
Faraday bags are a type of Faraday cage made of flexible metallic fabric. They are typically used to block remote wiping or alteration of wireless devices recovered in criminal investigations, but may also be used by the general public to protect against data theft or to enhance digital privacy.
A Faraday cage or Faraday shield is an enclosure used to block electromagnetic fields. A Faraday shield may be formed by a continuous covering of conductive material, or in the case of a Faraday cage, by a mesh of such materials. Faraday cages are named after the English scientist Michael Faraday, who invented them in 1836.
Video of a Faraday cage shielding a man from electricity
A Faraday cage operates because an external electrical field causes the electric charges within the cage's conducting material to be distributed so that they cancel the field's effect in the cage's interior. This phenomenon is used to protect sensitive electronic equipment from external radio frequency interference (RFI). Faraday cages are also used to enclose devices that produce RFI, such as radio transmitters, to prevent their radio waves from interfering with other nearby equipment. They are also used to protect people and equipment against actual electric currents such as lightning strikes and electrostatic discharges, since the enclosing cage conducts current around the outside of the enclosed space and none passes through the interior.
Faraday cages cannot block static or slowly varying magnetic fields, such as the Earth's magnetic field (a compass will still work inside). To a large degree, though, they shield the interior from external electromagnetic radiation if the conductor is thick enough and any holes are significantly smaller than the wavelength of the radiation. For example, certain computer forensic test procedures of electronic systems that require an environment free of electromagnetic interference can be carried out within a screened room. These rooms are spaces completely enclosed by one or more layers of fine metal mesh or perforated sheet metal. The metal layers are grounded to dissipate any electric currents generated by external or internal electromagnetic fields, and thus they block a large amount of the electromagnetic interference. See also electromagnetic shielding. Such enclosures provide less attenuation of outgoing transmissions than incoming ones: they can block EMP waves from natural phenomena very effectively, but a tracking device, especially at higher frequencies, may still be able to transmit from within the cage (e.g., cell phones operate at various radio frequencies, so while one cell phone may not work, another one may).
A common misconception is that a Faraday cage provides full blockage or attenuation; this is not true. The reception or transmission of radio waves, a form of electromagnetic radiation, to or from an antenna within a Faraday cage is heavily attenuated or blocked by the cage; however, the attenuation varies with waveform, frequency, distance from the receiver/transmitter, and receiver/transmitter power. Near-field, high-powered transmissions such as HF RFID are more likely to penetrate. Solid cages generally attenuate fields over a broader range of frequencies than mesh cages.
In 1836, Michael Faraday observed that the excess charge on a charged conductor resided only on its exterior and had no influence on anything enclosed within it. To demonstrate this fact, he built a room coated with metal foil and allowed high-voltage discharges from an electrostatic generator to strike the outside of the room. He used an electroscope to show that there was no electric charge present on the inside of the room's walls.
Although this cage effect has been attributed to Michael Faraday's famous ice pail experiments performed in 1843, it was Benjamin Franklin in 1755 who observed the effect by lowering an uncharged cork ball suspended on a silk thread through an opening in an electrically charged metal can. In his words, "the cork was not attracted to the inside of the can as it would have been to the outside, and though it touched the bottom, yet when drawn out it was not found to be electrified (charged) by that touch, as it would have been by touching the outside. The fact is singular." Franklin had discovered the behavior of what we now refer to as a Faraday cage or shield (based on Faraday's later experiments which duplicated Franklin's cork and can).
Additionally, the Abbé Nollet published an early account of an effect attributable to the cage effect in his Leçons de physique expérimentale.
Animation showing how a Faraday cage (box) works. When an external electric field (arrows) is applied, the electrons (little balls) in the metal move to the left side of the cage, giving it a negative charge, while the remaining unbalanced charge of the nuclei gives the right side a positive charge. These induced charges create an opposing electric field that cancels the external electric field throughout the box.
A continuous Faraday shield is a hollow conductor. Externally or internally applied electromagnetic fields produce forces on the charge carriers (usually electrons) within the conductor; the charges are redistributed accordingly by electrostatic induction. The redistributed charges greatly reduce the voltage within the surface, to an extent depending on the capacitance; however, full cancellation does not occur.
If a charge is placed inside an ungrounded Faraday cage, the internal face of the cage becomes charged (in the same manner described for an external charge) to prevent the existence of a field inside the body of the cage. However, this charging of the inner face redistributes the charges in the body of the cage, charging the outer face with a charge equal in sign and magnitude to the one placed inside the cage. Since the internal charge and the inner face cancel each other out, the distribution of charge on the outer face is not affected by the position of the internal charge inside the cage. So for all intents and purposes, the cage generates the same DC electric field that it would generate if it were simply affected by the charge placed inside. The same is not true for electromagnetic waves.
If the cage is grounded, the excess charges will be neutralized, as the ground connection creates an equipotential between the outside of the cage and the environment, so there is no voltage between them and therefore no field. The inner face and the inner charge remain the same, so the field is kept inside.
Skin depth vs. frequency for some materials at room temperature; the red vertical line denotes a frequency of 50 Hz.
The effectiveness of shielding against a static electric field is largely independent of the geometry of the conductive material; however, static magnetic fields can penetrate the shield completely.
In the case of varying electromagnetic fields, the faster the variations (i.e., the higher the frequency), the better the material resists magnetic field penetration. In this case the shielding also depends on the electrical conductivity and the magnetic properties of the conductive materials used in the cage, as well as their thickness.
A good idea of the effectiveness of a Faraday shield can be obtained from considerations of skin depth: the current flows mostly near the surface and decays exponentially with depth through the material. Because a Faraday shield has finite thickness, this determines how well the shield works; a thicker shield attenuates electromagnetic fields better, down to a lower frequency.
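For a rough sense of scale, the skin depth of a good conductor can be estimated from the standard formula δ = √(ρ / (π f μ)). The sketch below uses typical handbook values for copper (these figures are illustrative and not taken from this article):

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth(resistivity, frequency, mu_r=1.0):
    """Skin depth delta = sqrt(rho / (pi * f * mu)) for a good conductor."""
    return math.sqrt(resistivity / (math.pi * frequency * MU0 * mu_r))

# Copper: resistivity ~1.68e-8 ohm-m, relative permeability ~1
print(skin_depth(1.68e-8, 50))     # ~9.2 mm at 50 Hz mains frequency
print(skin_depth(1.68e-8, 2.4e9))  # ~1.3 micrometres at 2.4 GHz Wi-Fi
```

This illustrates why thin foil shields radio frequencies well but a much thicker wall is needed against power-line-frequency fields.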
Faraday cages are Faraday shields which have holes in them and are therefore more complex to analyze. Whereas continuous shields essentially attenuate all wavelengths shorter than the skin depth, the holes in a cage may permit shorter wavelengths to pass through or set up "evanescent fields" (oscillating fields that do not propagate as EM waves) just beneath the surface. The shorter the wavelength, the better it passes through a mesh of given size. Thus to work well at short wavelengths (i.e., high frequencies), the holes in the cage must be smaller than the wavelength of the incident wave. Faraday cages may therefore be thought of as high pass filters.
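To size the holes in a mesh, it helps to know the wavelength of the radiation to be blocked, λ = c / f. The example frequencies below are illustrative:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_m(freq_hz):
    """Free-space wavelength of an electromagnetic wave."""
    return C / freq_hz

# Holes must be significantly smaller than the shortest wavelength to block it.
for name, f in [("FM radio ~100 MHz", 100e6),
                ("microwave oven 2.45 GHz", 2.45e9),
                ("Wi-Fi 5.8 GHz", 5.8e9)]:
    print(f"{name}: wavelength {wavelength_m(f) * 100:.1f} cm")
```

A microwave oven's 2.45 GHz radiation has a wavelength of about 12 cm, which is why the few-millimetre perforations in its door mesh contain it effectively.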
Faraday cages are routinely used in analytical chemistry to reduce noise while making sensitive measurements.
Faraday cages, more specifically dual paired seam Faraday bags, are often used in digital forensics to prevent remote wiping and alteration of criminal digital evidence.
The U.S. and NATO Tempest standards, and similar standards in other countries, include Faraday cages as part of a broader effort to provide emission security for computers.
Automobile and airplane passenger compartments are essentially Faraday cages, protecting passengers from external electric charges such as lightning.
Elevators and other rooms with metallic conducting frames and walls simulate a Faraday cage effect, leading to a loss of signal and "dead zones" for users of cellular phones, radios, and other electronic devices that require external electromagnetic signals. During training, firefighters and other first responders are cautioned that their two-way radios will probably not work inside elevators and to make allowances for that. Small, physical Faraday cages are used by electronics engineers during equipment testing to simulate such an environment and to make sure that a device gracefully handles these conditions.
Properly designed conductive clothing can also form a protective Faraday cage. Some electrical linemen wear Faraday suits, which allow them to work on live, high-voltage power lines without risk of electrocution. The suit prevents electric current from flowing through the body and has no theoretical voltage limit. Linemen have worked safely on even the highest-voltage lines, such as Kazakhstan's 1,150 kV Ekibastuz–Kokshetau line.
Austin Richards, a physicist in California, created a metal Faraday suit in 1997 that protects him from tesla coil discharges. In 1998, he named the character in the suit Doctor MegaVolt and has performed all over the world and at Burning Man nine different years.
The scan room of a magnetic resonance imaging (MRI) machine is designed as a Faraday cage. This prevents external RF (radio frequency) signals from being added to data collected from the patient, which would affect the resulting image. Radiographers are trained to identify the characteristic artifacts created on images should the Faraday cage be damaged during a thunderstorm.
A microwave oven utilizes a Faraday cage, which can be partly seen covering the transparent window, to contain the electromagnetic energy within the oven and to shield the exterior from radiation.
Plastic bags that are impregnated with metal are used to enclose electronic toll collection devices whenever tolls should not be charged to those devices, such as during transit or when the user is paying cash.
The shield of a screened cable, such as USB cables or the coaxial cable used for cable television, protects the internal conductors from external electrical noise and prevents the RF signals from leaking out.
Illustration of a partially connected mesh network. A fully connected mesh network is where each node is connected to every other node in the network.
A mesh network (or simply meshnet) is a local network topology in which the infrastructure nodes (i.e. bridges, switches and other infrastructure devices) connect directly, dynamically and non-hierarchically to as many other nodes as possible and cooperate with one another to efficiently route data from/to clients. This lack of dependency on any one node allows every node to participate in the relay of information. Mesh networks dynamically self-organize and self-configure, which can reduce installation overhead. The ability to self-configure enables dynamic distribution of workloads, particularly in the event that a few nodes fail. This in turn contributes to fault tolerance and reduced maintenance costs.
Mesh topology may be contrasted with conventional star/tree local network topologies in which the bridges/switches are directly linked to only a small subset of other bridges/switches, and the links between these infrastructure neighbours are hierarchical. While star-and-tree topologies are very well established, highly standardized and vendor-neutral, vendors of mesh network devices have not yet all agreed on common standards, and interoperability between devices from different vendors is not yet assured.
Mesh networks can relay messages using either a flooding technique or a routing technique. With routing, the message is propagated along a path by hopping from node to node until it reaches its destination. To ensure that all its paths are available, the network must allow for continuous connections and must reconfigure itself around broken paths, using self-healing algorithms such as Shortest Path Bridging. Self-healing allows a routing-based network to operate when a node breaks down or when a connection becomes unreliable. As a result, the network is typically quite reliable, as there is often more than one path between a source and a destination in the network. Although mostly used in wireless situations, this concept can also apply to wired networks and to software interaction.
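As an illustrative sketch of routing-based self-healing (the node names and topology here are invented for the example, and a simple breadth-first search stands in for a real protocol such as Shortest Path Bridging), a mesh can re-route around a failed node like this:

```python
from collections import deque

def shortest_path(graph, src, dst, failed=frozenset()):
    """BFS shortest hop-count path, skipping failed nodes (self-healing)."""
    queue, prev = deque([src]), {src: None}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:       # walk predecessors back to src
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in prev and nbr not in failed:
                prev[nbr] = node
                queue.append(nbr)
    return None                            # destination unreachable

# Partially connected mesh: A-B, A-C, B-D, C-D, D-E
mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
        "D": ["B", "C", "E"], "E": ["D"]}
print(shortest_path(mesh, "A", "E"))                # e.g. ['A', 'B', 'D', 'E']
print(shortest_path(mesh, "A", "E", failed={"B"}))  # re-routes via C
```

Because there is more than one path between A and E, the failure of node B does not partition the network; the route simply shifts to the surviving path.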
A mesh network whose nodes are all connected to each other is a fully connected network. Fully connected wired networks have the advantages of security and reliability: problems in a cable affect only the two nodes attached to it. However, in such networks, the number of cables, and therefore the cost, goes up rapidly as the number of nodes increases.
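The cable count grows quadratically: a full mesh of n nodes needs n(n−1)/2 links, since each of the n nodes connects to the n−1 others and each cable is shared by two nodes. A quick check:

```python
def full_mesh_links(n):
    """Cables needed to connect every node directly to every other node."""
    return n * (n - 1) // 2

for n in (5, 10, 50):
    print(n, "nodes ->", full_mesh_links(n), "links")
# 5 nodes need 10 links; 10 need 45; 50 already need 1225.
```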
Wireless mesh radio networks were originally developed for military applications, such that every node could dynamically serve as a router for every other node. In that way, even in the event of a failure of some nodes, the remaining nodes could continue to communicate with each other, and, if necessary, to serve as uplinks for the other nodes.
Early wireless mesh network nodes had a single half-duplex radio that, at any one instant, could either transmit or receive, but not both at the same time. This was accompanied by the development of shared mesh networks. This was subsequently superseded by more complex radio hardware that could receive packets from an upstream node and transmit packets to a downstream node simultaneously (on a different frequency or a different CDMA channel). This allowed the development of switched mesh networks. As the size, cost, and power requirements of radios declined further, nodes could be cost-effectively equipped with multiple radios. This in turn permitted each radio to handle a different function, for instance one radio for client access, and another for backhaul services.
Work in this field has been aided by the use of game theory methods to analyze strategies for the allocation of resources and routing of packets.
Packet radio networks, or ALOHA networks, were first used in Hawaii to connect the islands. Given the bulky radios and low data rates, the network proved less useful than envisioned.
In 1998-1999, a field implementation of a campus-wide wireless network using 802.11 WaveLAN 2.4 GHz wireless interfaces on several laptops was successfully completed, demonstrating real applications, mobility, and data transmission.
Mesh networks were useful for the military market because of the radio capability, and because not all military missions involve frequently moving nodes. The Pentagon launched the DoD JTRS program in 1997 with the ambition of using software to control radio functions such as frequency, bandwidth, modulation and security, which had previously been baked into the hardware. This approach would allow the DoD to build a family of radios with a common software core, capable of handling functions that were previously split among separate hardware-based radios: VHF voice radios for infantry units; UHF voice radios for air-to-air and ground-to-air communications; long-range HF radios for ships and ground troops; and a wideband radio capable of transmitting data at megabit speeds across a battlefield. However, the JTRS program was shut down in 2012 by the US Army because the radios built by Boeing had a 75% failure rate.
Google Home, Google Wi-Fi, and Google OnHub all support Wi-Fi mesh networking.
In rural Catalonia, Guifi.net was developed in 2004 as a response to the lack of broadband Internet, where commercial Internet providers offered either no connection or a very poor one. Nowadays, with more than 30,000 nodes, it is only partway toward being a fully connected network, but, following a peer-to-peer agreement, it has remained an open, free and neutral network with extensive redundancy.
In 2004, TRW Inc. engineers from Carson, California, successfully tested a multi-node mesh wireless network using 802.11a/b/g radios on several high speed laptops running Linux, with new features such as route precedence and preemption capability, adding different priorities to traffic service class during packet scheduling and routing, and quality of service. Their work concluded that data rate can be greatly enhanced using MIMO technology at the radio front end to provide multiple spatial paths.
ZigBee digital radios are incorporated into some consumer appliances, including battery-powered appliances. ZigBee radios spontaneously organize a mesh network, using specific routing algorithms; transmission and reception are synchronized. This means the radios can be off much of the time, and thus conserve power. ZigBee is intended for low-power, low-bandwidth applications.
Thread is a consumer wireless networking protocol built on open standards and IPv6/6LoWPAN protocols. Thread's features include a secure and reliable mesh network with no single point of failure, simple connectivity and low power. Thread networks are easy to set up and secure to use with banking-class encryption to close security holes that exist in other wireless protocols. In 2014 Google Inc's Nest Labs announced a working group with the companies Samsung, ARM Holdings, Freescale, Silicon Labs, Big Ass Fans and the lock company Yale to promote Thread.
In early 2007, the US-based firm Meraki launched a mini wireless mesh router. The 802.11 radio within the Meraki Mini was optimized for long-distance communication, providing coverage over 250 metres. In contrast to multi-radio long-range mesh networks with tree-based topologies and their advantages in O(n) routing, the Meraki had only one radio, which it used for both client access and backhaul traffic.
The Naval Postgraduate School, Monterey CA, demonstrated such wireless mesh networks for border security. In a pilot system, aerial cameras kept aloft by balloons relayed real time high resolution video to ground personnel via a mesh network.
SPAWAR, a division of the US Navy, is prototyping and testing a scalable, secure Disruption Tolerant Mesh Network to protect strategic military assets, both stationary and mobile. Machine control applications running on the mesh nodes "take over" when Internet connectivity is lost. Use cases include Internet of Things applications, e.g. smart drone swarms.
An MIT Media Lab project has developed the XO-1 laptop or "OLPC" (One Laptop per Child) which is intended for disadvantaged schools in developing nations and uses mesh networking (based on the IEEE 802.11s standard) to create a robust and inexpensive infrastructure. The instantaneous connections made by the laptops are claimed by the project to reduce the need for an external infrastructure such as the Internet to reach all areas, because a connected node could share the connection with nodes nearby. A similar concept has also been implemented by Greenpacket with its application called SONbuddy.
In Cambridge, UK, on 3 June 2006, mesh networking was used at the “Strawberry Fair” to run mobile live television, radio and Internet services to an estimated 80,000 people.
Broadband-Hamnet, a mesh networking project used in amateur radio, is "a high-speed, self-discovering, self-configuring, fault-tolerant, wireless computer network" with very low power consumption and a focus on emergency communication.
FabFi is an open-source, city-scale, wireless mesh networking system originally developed in 2009 in Jalalabad, Afghanistan to provide high-speed Internet to parts of the city and designed for high performance across multiple hops. It is an inexpensive framework for sharing wireless Internet from a central provider across a town or city. A second larger implementation followed a year later near Nairobi, Kenya with a freemium pay model to support network growth. Both projects were undertaken by the Fablab users of the respective cities.
SMesh is an 802.11 multi-hop wireless mesh network developed by the Distributed System and Networks Lab at Johns Hopkins University. A fast handoff scheme allows mobile clients to roam in the network without interruption in connectivity, a feature suitable for real-time applications, such as VoIP.
Many mesh networks operate across multiple radio bands. For example, Firetide and Wave Relay mesh networks have the option to communicate node to node on 5.2 GHz or 5.8 GHz, but communicate node to client on 2.4 GHz (802.11). This is accomplished using software-defined radio (SDR).
The SolarMESH project examined the potential of powering 802.11-based mesh networks using solar power and rechargeable batteries. Legacy 802.11 access points were found to be inadequate due to the requirement that they be continuously powered. The IEEE 802.11s standardization efforts are considering power save options, but solar-powered applications might involve single radio nodes where relay-link power saving will be inapplicable.
The WING project (sponsored by the Italian Ministry of University and Research and led by CREATE-NET and Technion) developed a set of novel algorithms and protocols for enabling wireless mesh networks as the standard access architecture for next generation Internet. Particular focus has been given to interference and traffic aware channel assignment, multi-radio/multi-interface support, and opportunistic scheduling and traffic aggregation in highly volatile environments.
WiBACK Wireless Backhaul Technology has been developed by the Fraunhofer Institute for Open Communication Systems (FOKUS) in Berlin. Powered by solar cells and designed to support all existing wireless technologies, networks are due to be rolled out to several countries in sub-Saharan Africa in summer 2012.
Recent standards for wired communications have also incorporated concepts from mesh networking. An example is ITU-T G.hn, a standard that specifies a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines, phone lines and coaxial cables). In noisy environments such as power lines (where signals can be heavily attenuated and corrupted by noise), it is common that mutual visibility between devices in a network is not complete. In those situations, one of the nodes has to act as a relay and forward messages between those nodes that cannot communicate directly, effectively creating a "relaying" network. In G.hn, relaying is performed at the Data Link Layer.
Multiprotocol Label Switching (MPLS) is a routing technique in telecommunications networks that directs data from one node to the next based on short path labels rather than long network addresses, thus avoiding complex lookups in a routing table and speeding traffic flows. The labels identify virtual links (paths) between distant nodes rather than endpoints. MPLS can encapsulate packets of various network protocols, hence the "multiprotocol" reference in its name. MPLS supports a range of access technologies, including T1/E1, ATM, Frame Relay, and DSL.
MPLS is scalable and protocol-independent. In an MPLS network, data packets are assigned labels. Packet-forwarding decisions are made solely on the contents of this label, without the need to examine the packet itself. This allows one to create end-to-end circuits across any type of transport medium, using any protocol. The primary benefit is to eliminate dependence on a particular OSI model data link layer (layer 2) technology, such as Asynchronous Transfer Mode (ATM), Frame Relay, Synchronous Optical Networking (SONET) or Ethernet, and eliminate the need for multiple layer-2 networks to satisfy different types of traffic. Multiprotocol Label Switching belongs to the family of packet-switched networks.
MPLS operates at a layer that is generally considered to lie between traditional definitions of OSI Layer 2 (data link layer) and Layer 3 (network layer), and thus is often referred to as a layer 2.5 protocol. It was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames.
A number of different technologies were previously deployed with essentially identical goals, such as Frame Relay and ATM. Frame Relay and ATM use "labels" to move frames or cells throughout a network. The header of the Frame Relay frame and the ATM cell refers to the virtual circuit that the frame or cell resides on. The similarity between Frame Relay, ATM, and MPLS is that at each hop throughout the network, the "label" value in the header is changed. This is different from the forwarding of IP packets. MPLS technologies have evolved with the strengths and weaknesses of ATM in mind. Many network engineers agree that ATM should be replaced with a protocol that requires less overhead while providing connection-oriented services for variable-length frames. MPLS is currently replacing some of these technologies in the marketplace, and it is quite possible that MPLS will completely replace them in the future.
In particular, MPLS dispenses with the cell-switching and signaling-protocol baggage of ATM. MPLS recognizes that small ATM cells are not needed in the core of modern networks, since modern optical networks are so fast (as of 2015, at 100 Gbit/s and beyond) that even full-length 1500 byte packets do not incur significant real-time queuing delays (the need to reduce such delays — e.g., to support voice traffic — was the motivation for the cell nature of ATM).
In 1996 a group from Ipsilon Networks proposed a "flow management protocol".
Their "IP Switching" technology, which was defined only to work over ATM, did not achieve market dominance. Cisco Systems introduced a related proposal, not restricted to ATM transmission, called "Tag Switching" (with its Tag Distribution Protocol TDP). It was a Cisco proprietary proposal, and was renamed "Label Switching". It was handed over to the Internet Engineering Task Force (IETF) for open standardization. The IETF work involved proposals from other vendors, and development of a consensus protocol that combined features from several vendors' work.
One original motivation was to allow the creation of simple high-speed switches since for a significant length of time it was impossible to forward IP packets entirely in hardware. However, advances in VLSI have made such devices possible. Therefore, the advantages of MPLS primarily revolve around the ability to support multiple service models and perform traffic management. MPLS also offers a robust recovery framework that goes beyond the simple protection rings of synchronous optical networking (SONET/SDH).
MPLS works by prefixing packets with an MPLS header, containing one or more labels. This is called a label stack.
Each entry in the label stack contains four fields: a 20-bit label value, a 3-bit traffic class field (formerly EXP, used for QoS priority), a 1-bit bottom-of-stack flag (set on the last entry in the stack), and an 8-bit time-to-live (TTL) field.
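As a sketch of this 32-bit layout (per RFC 3032), a label stack entry can be packed and unpacked with simple bit operations; this is illustrative Python, not a production parser:

```python
def pack_lse(label, tc, s, ttl):
    """Pack a 32-bit MPLS label stack entry: 20-bit label, 3-bit TC,
    1-bit bottom-of-stack flag, 8-bit TTL."""
    return (label << 12) | (tc << 9) | (s << 8) | ttl

def unpack_lse(word):
    """Split a 32-bit label stack entry back into its four fields."""
    return {"label": word >> 12, "tc": (word >> 9) & 0x7,
            "s": (word >> 8) & 0x1, "ttl": word & 0xFF}

entry = pack_lse(label=1000, tc=0, s=1, ttl=64)
print(hex(entry))        # 0x3e8140
print(unpack_lse(entry)) # round-trips to the original fields
```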
These MPLS-labeled packets are switched after a label lookup/switch instead of a lookup into the IP table. As mentioned above, when MPLS was conceived, label lookup and label switching were faster than a routing table or RIB (Routing Information Base) lookup because they could take place directly within the switched fabric and avoid having to use the OS.
The presence of such a label, however, has to be indicated to the router/switch. In the case of Ethernet frames this is done through the use of EtherType values 0x8847 and 0x8848, for unicast and multicast connections respectively.
Label switch router
An MPLS router that performs routing based only on the label is called a label switch router (LSR) or transit router. This is a type of router located in the middle of an MPLS network. It is responsible for switching the labels used to route packets.
When an LSR receives a packet, it uses the label included in the packet header as an index to determine the next hop on the label-switched path (LSP) and a corresponding label for the packet from a lookup table. The old label is then removed from the header and replaced with the new label before the packet is routed forward.
Label edge router
A label edge router (LER, also known as edge LSR) is a router that operates at the edge of an MPLS network and acts as the entry and exit points for the network. LERs push an MPLS label onto an incoming packet and pop it off an outgoing packet. Alternatively, under penultimate hop popping this function may instead be performed by the LSR directly connected to the LER.
When forwarding an IP datagram into the MPLS domain, a LER uses routing information to determine the appropriate label to be affixed, labels the packet accordingly, and then forwards the labeled packet into the MPLS domain. Likewise, upon receiving a labeled packet which is destined to exit the MPLS domain, the LER strips off the label and forwards the resulting IP packet using normal IP forwarding rules.
In the specific context of an MPLS-based virtual private network (VPN), LERs that function as ingress and/or egress routers to the VPN are often called PE (Provider Edge) routers. Devices that function only as transit routers are similarly called P (Provider) routers. The job of a P router is significantly easier than that of a PE router, so they can be less complex and may be more dependable because of this.
Label Distribution Protocol
Labels are distributed between LERs and LSRs using the Label Distribution Protocol (LDP). LSRs in an MPLS network regularly exchange label and reachability information with each other using standardized procedures, in order to build a complete picture of the network that they can then use to forward packets.
Label-switched paths (LSPs) are established by the network operator for a variety of purposes, such as to create network-based IP virtual private networks or to route traffic along specified paths through the network. In many respects, LSPs are not different from permanent virtual circuits (PVCs) in ATM or Frame Relay networks, except that they are not dependent on a particular layer-2 technology.
When an unlabeled packet enters the ingress router and needs to be passed on to an MPLS tunnel, the router first determines the forwarding equivalence class (FEC) for the packet and then inserts one or more labels in the packet's newly created MPLS header. The packet is then passed on to the next-hop router for this tunnel.
When a labeled packet is received by an MPLS router, the topmost label is examined. Based on the contents of the label a swap, push (impose) or pop (dispose) operation is performed on the packet's label stack. Routers can have prebuilt lookup tables that tell them which kind of operation to do based on the topmost label of the incoming packet so they can process the packet very quickly.
In a swap operation the label is swapped with a new label, and the packet is forwarded along the path associated with the new label.
In a push operation a new label is pushed on top of the existing label, effectively "encapsulating" the packet in another layer of MPLS. This allows hierarchical routing of MPLS packets. Notably, this is used by MPLS VPNs.
In a pop operation the label is removed from the packet, which may reveal an inner label below. This process is called "decapsulation". If the popped label was the last on the label stack, the packet "leaves" the MPLS tunnel. This can be done by the egress router, but see Penultimate Hop Popping (PHP) below.
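The three label-stack operations above can be sketched in a few lines of Python. This is a minimal illustration of the semantics only; the packet model (a list of labels plus a payload) and the label values are invented for the example, not taken from any real router implementation.

```python
# Minimal sketch of MPLS label-stack operations (swap, push, pop),
# modeling a label stack as a Python list with the topmost label first.
# All names and label values here are illustrative.

def swap(stack, new_label):
    """Replace the topmost label; the stack depth is unchanged."""
    return [new_label] + stack[1:]

def push(stack, new_label):
    """Add a label on top, encapsulating the packet in another MPLS layer."""
    return [new_label] + stack

def pop(stack):
    """Remove the topmost label, possibly revealing an inner label below."""
    return stack[1:]

# A packet entering a VPN tunnel: inner VPN label 100, outer transport label 200.
stack = push(push([], 100), 200)   # [200, 100]
stack = swap(stack, 201)           # transit router swaps the outer label: [201, 100]
stack = pop(stack)                 # outer label popped: [100]
stack = pop(stack)                 # last label popped: [] -> packet leaves the tunnel
```

Note how transit routers only ever touch the topmost entry; the inner VPN label rides along unchanged until the egress.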
During these operations, the contents of the packet below the MPLS Label stack are not examined. Indeed, transit routers typically need only to examine the topmost label on the stack. The forwarding of the packet is done based on the contents of the labels, which allows "protocol-independent packet forwarding" that does not need to look at a protocol-dependent routing table and avoids the expensive IP longest prefix match at each hop.
At the egress router, when the last label has been popped, only the payload remains. This can be an IP packet or any of a number of other kinds of payload packet. The egress router must, therefore, have routing information for the packet's payload since it must forward it without the help of label lookup tables. An MPLS transit router has no such requirement.
Usually (by default, with only one label in the stack, according to the MPLS specification), the last label is popped off at the penultimate hop (the hop before the egress router). This is called penultimate hop popping (PHP). PHP is useful in cases where the egress router terminates many MPLS tunnels and would otherwise spend an inordinate amount of CPU time popping labels; transit routers connected directly to the egress router effectively offload it by popping the last label themselves. In the label distribution protocols, this PHP label pop action is advertised as label value 3, "implicit-null" (which is never actually found in a label, since it means that the label is to be popped).
This optimisation is no longer as useful as it once was (much like the initial rationale for MPLS of simplifying router operations). Several MPLS services, including end-to-end QoS management, require a label to be kept even between the penultimate and the last MPLS router, with label disposition always done on the last MPLS router: this is called ultimate hop popping (UHP). Some specific label values have been reserved for this use:
0: "explicit-null" for IPv4
2: "explicit-null" for IPv6
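A small sketch can make the PHP/UHP distinction concrete. It assumes the egress router has advertised either the reserved label 3 ("implicit-null") or an explicit-null label for a route; the function names and the sample label 40 are invented for illustration.

```python
# Sketch of how a penultimate router treats advertised reserved labels.
# Label 3 ("implicit-null") never appears on the wire: it tells the upstream
# router to pop instead of swapping (PHP). Labels 0 and 2 ("explicit-null")
# are actually written into the packet, so the egress still receives a label
# to inspect (UHP). The sample stack and label 40 are made up.

IMPLICIT_NULL = 3
EXPLICIT_NULL_V4 = 0
EXPLICIT_NULL_V6 = 2

def penultimate_forward(stack, advertised_out_label):
    """Return the label stack as sent onward to the egress hop."""
    if advertised_out_label == IMPLICIT_NULL:
        return stack[1:]                       # PHP: pop, send inner/unlabeled
    return [advertised_out_label] + stack[1:]  # normal swap, incl. explicit-null

print(penultimate_forward([40], IMPLICIT_NULL))     # PHP: []
print(penultimate_forward([40], EXPLICIT_NULL_V4))  # UHP: [0]
```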
A label-switched path (LSP) is a path through an MPLS network, set up by the NMS or by a signaling protocol such as LDP, RSVP-TE, BGP (or the now deprecated CR-LDP). The path is set up based on criteria in the FEC.
The path begins at a label edge router (LER), which makes a decision on which label to prefix to a packet, based on the appropriate FEC. It then forwards the packet along to the next router in the path, which swaps the packet's outer label for another label, and forwards it to the next router. The last router in the path removes the label from the packet and forwards the packet based on the header of its next layer, for example IPv4. Because the forwarding of packets through an LSP is opaque to higher network layers, an LSP is also sometimes referred to as an MPLS tunnel.
The router which first prefixes the MPLS header to a packet is called an ingress router. The last router in an LSP, which pops the label from the packet, is called an egress router. Routers in between, which need only swap labels, are called transit routers or label switch routers (LSRs).
Note that LSPs are unidirectional; they enable a packet to be label switched through the MPLS network from one endpoint to another. Since bidirectional communication is typically desired, the aforementioned dynamic signaling protocols can set up an LSP in the other direction to compensate for this.
When protection is considered, LSPs can be categorized as primary (working), secondary (backup) and tertiary (LSP of last resort). As described above, LSPs are normally P2P (point-to-point). A newer concept of LSPs, known as P2MP (point-to-multipoint), was introduced later. P2MP LSPs are mainly used for multicasting purposes.
An MPLS header does not identify the type of data carried inside the MPLS path. If one wants to carry two different types of traffic between the same two routers, with different treatment by the core routers for each type, one has to establish a separate MPLS path for each type of traffic.
Multicast was, for the most part, an afterthought in MPLS design. It was introduced by point-to-multipoint RSVP-TE, driven by service provider requirements to transport broadband video over MPLS. Since the inception of RFC 4875 there has been a tremendous surge in interest and deployment of MPLS multicast, which has led to several new developments both in the IETF and in shipping products.
The IETF also introduced the hub-and-spoke multipoint LSP (HSMP LSP), which is mainly used for multicast, time synchronization, and other purposes.
Relationship to Internet Protocol
MPLS works in conjunction with the Internet Protocol (IP) and its routing protocols, such as the Interior Gateway Protocol (IGP). MPLS LSPs provide dynamic, transparent virtual networks with support for traffic engineering, the ability to transport layer-3 (IP) VPNs with overlapping address spaces, and support for layer-2 pseudowires using Pseudowire Emulation Edge-to-Edge (PWE3) that are capable of transporting a variety of transport payloads (IPv4, IPv6, ATM, Frame Relay, etc.). MPLS-capable devices are referred to as LSRs. The paths an LSR knows can be defined using explicit hop-by-hop configuration, or be dynamically routed by the constrained shortest path first (CSPF) algorithm, or be configured as a loose route that avoids a particular IP address or that is partly explicit and partly dynamic.
In a pure IP network, the shortest path to a destination is chosen even when the path becomes congested. Meanwhile, in an IP network with MPLS Traffic Engineering CSPF routing, constraints such as the RSVP bandwidth of the traversed links can also be considered, such that the shortest path with available bandwidth will be chosen. MPLS Traffic Engineering relies upon the use of TE extensions to Open Shortest Path First (OSPF) or Intermediate System To Intermediate System (IS-IS) and RSVP. In addition to the constraint of RSVP bandwidth, users can also define their own constraints by specifying link attributes and special requirements for tunnels to route (or not to route) over links with certain attributes.
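The core idea of CSPF described above can be sketched as "prune, then shortest path": remove links that fail the constraint (here, insufficient unreserved bandwidth), then run an ordinary shortest-path computation on what remains. The topology, costs, and bandwidth figures below are invented for illustration.

```python
# Hedged sketch of the CSPF idea: Dijkstra over a graph where links with
# insufficient available bandwidth are skipped. Not a real implementation of
# any router's CSPF; the graph format is made up for this example.
import heapq

def cspf(graph, src, dst, need_bw):
    """graph: {node: [(neighbor, cost, avail_bw), ...]} -> (total_cost, path) or None."""
    dist = {src: 0}
    heap = [(0, src, [src])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        for nbr, c, bw in graph.get(node, []):
            if bw < need_bw:                    # constraint: prune congested links
                continue
            if cost + c < dist.get(nbr, float("inf")):
                dist[nbr] = cost + c
                heapq.heappush(heap, (cost + c, nbr, path + [nbr]))
    return None                                 # no path satisfies the constraint

# The shortest path A-B-D has only 10 Mb/s free, so a tunnel needing
# 50 Mb/s is routed over the longer A-C-D path instead.
g = {"A": [("B", 1, 10), ("C", 2, 100)],
     "B": [("D", 1, 10)],
     "C": [("D", 2, 100)]}
print(cspf(g, "A", "D", 50))  # (4, ['A', 'C', 'D'])
```

With a smaller bandwidth demand, the same function falls back to the plain shortest path, matching the pure-IP behavior described above.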
For end-users, the use of MPLS is not directly visible, but it can be inferred when doing a traceroute: only nodes that do full IP routing are shown as hops in the path, not the MPLS nodes used in between. When a packet hops between two very distant nodes and hardly any other hop is seen in that provider's network (or AS), it is very likely that the network uses MPLS.
In the event of a network element failure when recovery mechanisms are employed at the IP layer, restoration may take several seconds which may be unacceptable for real-time applications such as VoIP. In contrast, MPLS local protection meets the requirements of real-time applications with recovery times comparable to those of shortest path bridging networks or SONET rings of less than 50 ms.
MPLS can make use of existing ATM network or Frame Relay infrastructure, as its labeled flows can be mapped to ATM or Frame Relay virtual-circuit identifiers, and vice versa.
Frame Relay aimed to make more efficient use of existing physical resources, which allow for the underprovisioning of data services by telecommunications companies (telcos) to their customers, as clients were unlikely to be utilizing a data service 100 percent of the time. In more recent years, Frame Relay has acquired a bad reputation in some markets because of excessive bandwidth overbooking by these telcos.
Telcos often sell Frame Relay to businesses looking for a cheaper alternative to dedicated lines; its use in different geographic areas depended greatly on governmental and telecommunication companies' policies.
Many customers are likely to migrate from Frame Relay to MPLS over IP or Ethernet, which in many cases will reduce costs and improve the manageability and performance of their wide area networks.
ATM (Asynchronous transfer mode)
While the underlying protocols and technologies are different, both MPLS and ATM provide a connection-oriented service for transporting data across computer networks. In both technologies, connections are signaled between endpoints, the connection state is maintained at each node in the path, and encapsulation techniques are used to carry data across the connection. Excluding differences in the signaling protocols (RSVP/LDP for MPLS and PNNI, the Private Network-to-Network Interface, for ATM), there still remain significant differences in the behavior of the technologies.
The most significant difference is in the transport and encapsulation methods. MPLS is able to work with variable length packets while ATM transports fixed-length (53 bytes) cells. Packets must be segmented, transported and re-assembled over an ATM network using an adaptation layer, which adds significant complexity and overhead to the data stream. MPLS, on the other hand, simply adds a label to the head of each packet and transmits it on the network.
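The overhead difference can be quantified with a back-of-the-envelope calculation. The sketch below assumes a 1500-byte IP packet, AAL5 encapsulation for ATM (an 8-byte trailer, padding to a multiple of 48 bytes, and a 5-byte header per 53-byte cell) and a single 4-byte MPLS label; it ignores link-layer framing on the MPLS side.

```python
# Rough comparison of ATM (AAL5) vs. MPLS transport overhead for one packet.
# The AAL5 figures (8-byte trailer, 48-byte cell payload, 53-byte cell) are
# standard; everything else is a simplification for illustration.
import math

def atm_bytes_on_wire(payload):
    cells = math.ceil((payload + 8) / 48)  # AAL5 trailer, then pad to full cells
    return cells * 53                      # 5-byte header per 48-byte cell payload

def mpls_bytes_on_wire(payload, labels=1):
    return payload + 4 * labels            # 4 bytes per label stack entry

p = 1500
print(atm_bytes_on_wire(p))   # 1696 bytes -> roughly 13% overhead
print(mpls_bytes_on_wire(p))  # 1504 bytes -> roughly 0.3% overhead
```

The "cell tax" grows further for payloads that just overflow a cell boundary, which is part of why segmentation and reassembly add complexity as well as overhead.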
Differences exist, as well, in the nature of the connections. An MPLS connection (LSP) is unidirectional, allowing data to flow in only one direction between two endpoints. Establishing two-way communications between endpoints requires a pair of LSPs to be established. Because two LSPs are required for connectivity, data flowing in the forward direction may use a different path from data flowing in the reverse direction. ATM point-to-point connections (virtual circuits), on the other hand, are bidirectional, allowing data to flow in both directions over the same path (both SVC and PVC ATM connections are bidirectional).
Both ATM and MPLS support tunneling of connections inside connections. MPLS uses label stacking to accomplish this while ATM uses virtual paths. MPLS can stack multiple labels to form tunnels within tunnels. The ATM virtual path indicator (VPI) and virtual circuit indicator (VCI) are both carried together in the cell header, limiting ATM to a single level of tunneling.
The biggest advantage that MPLS has over ATM is that it was designed from the start to be complementary to IP. Modern routers are able to support both MPLS and IP natively across a common interface allowing network operators great flexibility in network design and operation. ATM's incompatibilities with IP require complex adaptation, making it comparatively less suitable for today's predominantly IP networks.
MPLS is currently (as of March 2012) in use in IP-only networks and is standardized by the IETF in RFC 3031. Deployments range from networks connecting as few as two facilities to very large installations.
In practice, MPLS is mainly used to forward IP protocol data units (PDUs) and Virtual Private LAN Service (VPLS) Ethernet traffic. Major applications of MPLS are telecommunications traffic engineering, and MPLS VPN.
MPLS can exist in both an IPv4 and an IPv6 environment, using appropriate routing protocols. The major goal of MPLS development was the increase of routing speed. This goal is no longer relevant because of the usage of newer switching methods, such as ASIC, TCAM and CAM-based switching. Now, therefore, the main application of MPLS is to implement limited traffic engineering and layer 3 / layer 2 “service provider type” VPNs over IPv4 networks.
As an example of NPLC, consider two cities. An organization has an office in each city. The organization requires connectivity between these two offices. The ISP will have access to a PoP in each city and therefore has a link between the PoPs. To connect the offices to the PoPs, a connection via the local loop will be commissioned for each office. In this way, an NPLC is delivered.
Software-defined networking (SDN) technology is an approach to cloud computing that facilitates network management and enables programmatically efficient network configuration in order to improve network performance and monitoring. SDN is meant to address the fact that the static architecture of traditional networks is decentralized and complex, while current networks require more flexibility and easier troubleshooting. SDN attempts to centralize network intelligence in one network component by disassociating the forwarding process of network packets (data plane) from the routing process (control plane). The control plane consists of one or more controllers, considered the brain of the SDN network, where the whole intelligence is incorporated. However, this centralization of intelligence has its own drawbacks with regard to security, scalability and elasticity, and this is the main issue of SDN.
The history of SDN principles can be traced back to the separation of the control and data plane first used in the public switched telephone network as a way to simplify provisioning and management well before this architecture began to be used in data networks.
The Internet Engineering Task Force (IETF) began considering various ways to decouple the control and forwarding functions in a proposed interface standard published in 2004 appropriately named "Forwarding and Control Element Separation" (ForCES). The ForCES Working Group also proposed a companion SoftRouter Architecture. Additional early standards from the IETF that pursued separating control from data include the Linux Netlink as an IP Services Protocol and A Path Computation Element (PCE)-Based Architecture.
These early attempts failed to gain traction for two reasons. One is that many in the Internet community viewed separating control from data to be risky, especially owing to the potential for a failure in the control plane. The second is that vendors were concerned that creating standard application programming interfaces (APIs) between the control and data planes would result in increased competition.
The use of open source software in split control/data plane architectures traces its roots to the Ethane project at Stanford's computer science department. Ethane's simple switch design led to the creation of OpenFlow. An API for OpenFlow was first created in 2008. That same year witnessed the creation of NOX—an operating system for networks.
Work on OpenFlow continued at Stanford, including the creation of testbeds to evaluate use of the protocol in a single campus network, as well as across the WAN as a backbone for connecting multiple campuses. In academic settings there were a few research and production networks based on OpenFlow switches from NEC and Hewlett-Packard, as well as networks based on Quanta Computer whiteboxes, starting from about 2009.
Beyond academia, the first deployments were by Nicira in 2010 to control OVS from Onix, co-developed with NTT and Google. A notable deployment was Google's B4 in 2012. Google later acknowledged its first OpenFlow-with-Onix deployments in its data centers during the same period. Another known large deployment is at China Mobile.
At the 2014 Interop and Tech Field Day, software-defined networking was demonstrated by Avaya using shortest path bridging (IEEE 802.1aq) and OpenStack as an automated campus, extending automation from the data center to the end device, removing manual provisioning from service delivery.
SDN architectures decouple network control and forwarding functions, enabling network control to become directly programmable and the underlying infrastructure to be abstracted from applications and network services.
The OpenFlow protocol can be used in SDN technologies. The SDN architecture is:
Directly programmable: Network control is directly programmable because it is decoupled from forwarding functions.
Agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs.
Centrally managed: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
Programmatically configured: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software.
Open standards-based and vendor-neutral: When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols.
The need for a new network architecture
The explosion of mobile devices and content, server virtualization, and advent of cloud services are among the trends driving the networking industry to re-examine traditional network architectures. Many conventional networks are hierarchical, built with tiers of Ethernet switches arranged in a tree structure. This design made sense when client-server computing was dominant, but such a static architecture is ill-suited to the dynamic computing and storage needs of today's enterprise data centers, campuses, and carrier environments. Some of the key computing trends driving the need for a new network paradigm include:
Changing traffic patterns
Within the enterprise data center, traffic patterns have changed significantly. In contrast to client-server applications where the bulk of the communication occurs between one client and one server, today's applications access different databases and servers, creating a flurry of "east-west" machine-to-machine traffic before returning data to the end user device in the classic "north-south" traffic pattern. At the same time, users are changing network traffic patterns as they push for access to corporate content and applications from any type of device (including their own), connecting from anywhere, at any time. Finally, many enterprise data center managers are contemplating a utility computing model, which might include a private cloud, public cloud, or some mix of both, resulting in additional traffic across the wide area network.
The "consumerization of IT"
Users are increasingly employing mobile personal devices such as smartphones, tablets, and notebooks to access the corporate network. IT is under pressure to accommodate these personal devices in a fine-grained manner while protecting corporate data and intellectual property and meeting compliance mandates.
The rise of cloud services
Enterprises have enthusiastically embraced both public and private cloud services, resulting in unprecedented growth of these services. Enterprise business units now want the agility to access applications, infrastructure, and other IT resources on demand and à la carte. To add to the complexity, IT's planning for cloud services must be done in an environment of increased security, compliance, and auditing requirements, along with business reorganizations, consolidations, and mergers that can change assumptions overnight. Providing self-service provisioning, whether in a private or public cloud, requires elastic scaling of computing, storage, and network resources, ideally from a common viewpoint and with a common suite of tools.
"Big data" means more bandwidth
Handling today's "big data" or mega datasets requires massive parallel processing on thousands of servers, all of which need direct connections to each other. The rise of mega datasets is fueling a constant demand for additional network capacity in the data center. Operators of hyperscale data center networks face the daunting task of scaling the network to previously unimaginable size, maintaining any-to-any connectivity without going broke.
A high-level overview of the software-defined networking architecture
The following list defines and explains the architectural components:
SDN Applications are programs that explicitly, directly, and programmatically communicate their network requirements and desired network behavior to the SDN Controller via a northbound interface (NBI). In addition they may consume an abstracted view of the network for their internal decision-making purposes. An SDN Application consists of one SDN Application Logic and one or more NBI Drivers. SDN Applications may themselves expose another layer of abstracted network control, thus offering one or more higher-level NBIs through respective NBI agents.
The SDN Controller is a logically centralized entity in charge of (i) translating the requirements from the SDN Application layer down to the SDN Datapaths and (ii) providing the SDN Applications with an abstract view of the network (which may include statistics and events). An SDN Controller consists of one or more NBI Agents, the SDN Control Logic, and the Control to Data-Plane Interface (CDPI) driver. Definition as a logically centralized entity neither prescribes nor precludes implementation details such as the federation of multiple controllers, the hierarchical connection of controllers, communication interfaces between controllers, nor virtualization or slicing of network resources.
The SDN Datapath is a logical network device that exposes visibility and uncontested control over its advertised forwarding and data processing capabilities. The logical representation may encompass all or a subset of the physical substrate resources. An SDN Datapath comprises a CDPI agent and a set of one or more traffic forwarding engines and zero or more traffic processing functions. These engines and functions may include simple forwarding between the datapath's external interfaces or internal traffic processing or termination functions. One or more SDN Datapaths may be contained in a single (physical) network element—an integrated physical combination of communications resources, managed as a unit. An SDN Datapath may also be defined across multiple physical network elements. This logical definition neither prescribes nor precludes implementation details such as the logical to physical mapping, management of shared physical resources, virtualization or slicing of the SDN Datapath, interoperability with non-SDN networking, nor the data processing functionality, which can include OSI layer 4-7 functions.
SDN Control to Data-Plane Interface (CDPI)
The SDN CDPI is the interface defined between an SDN Controller and an SDN Datapath, which provides at least (i) programmatic control of all forwarding operations, (ii) capabilities advertisement, (iii) statistics reporting, and (iv) event notification. One value of SDN lies in the expectation that the CDPI is implemented in an open, vendor-neutral and interoperable way.
SDN Northbound Interfaces (NBI)
SDN NBIs are interfaces between SDN Applications and SDN Controllers and typically provide abstract network views and enable direct expression of network behavior and requirements. This may occur at any level of abstraction (latitude) and across different sets of functionality (longitude). One value of SDN lies in the expectation that these interfaces are implemented in an open, vendor-neutral and interoperable way.
SDN Control Plane
Centralized - Hierarchical - Distributed
The implementation of the SDN control plane can follow a centralized, hierarchical, or decentralized design. Initial SDN control plane proposals focused on a centralized solution, where a single control entity has a global view of the network. While this simplifies the implementation of the control logic, it has scalability limitations as the size and dynamics of the network increase. To overcome these limitations, several approaches have been proposed in the literature that fall into two categories, hierarchical and fully distributed approaches. In hierarchical solutions, distributed controllers operate on a partitioned network view, while decisions that require network-wide knowledge are taken by a logically centralized root controller. In distributed approaches, controllers operate on their local view or they may exchange synchronization messages to enhance their knowledge. Distributed solutions are more suitable for supporting adaptive SDN applications.
A key issue when designing a distributed SDN control plane is to decide on the number and placement of control entities. An important parameter to consider while doing so is the propagation delay between the controllers and the network devices, especially in the context of large networks. Other objectives that have been considered involve control path reliability, fault tolerance, and application requirements.
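A toy instance of this placement question can be computed directly: pick the controller location that minimizes the worst-case controller-to-switch propagation delay. The topology and delay figures below are invented, and real formulations also weigh reliability, fault tolerance, and application requirements, as noted above.

```python
# Illustrative controller-placement sketch: brute-force the node that
# minimizes the maximum shortest-path delay to every other node.
# Graph format and delays (in ms) are made up for this example.
import heapq

def delays_from(graph, src):
    """Dijkstra over {node: [(neighbor, delay_ms), ...]} -> {node: delay}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, n = heapq.heappop(heap)
        if d > dist.get(n, float("inf")):
            continue
        for nbr, w in graph[n]:
            if d + w < dist.get(nbr, float("inf")):
                dist[nbr] = d + w
                heapq.heappush(heap, (d + w, nbr))
    return dist

def best_placement(graph):
    """Return (node, worst_delay): the node minimizing the max delay to any switch."""
    return min(((n, max(delays_from(graph, n).values())) for n in graph),
               key=lambda t: t[1])

g = {"A": [("B", 5), ("C", 20)],
     "B": [("A", 5), ("C", 10)],
     "C": [("A", 20), ("B", 10)]}
print(best_placement(g))  # ('B', 10): B reaches every switch within 10 ms
```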
OpenFlow uses TCAM tables to route packet sequences (flows). When a packet arrives at a switch, a flow table lookup is performed. Depending on the implementation, this is done in a software flow table if a vSwitch is used, or in an ASIC if it is implemented in hardware. When no matching flow is found, a request is sent to the controller for further instructions. This is handled in one of three modes. In reactive mode the controller acts on these requests, creating and installing a rule in the flow table for the corresponding packet if necessary. In proactive mode the controller populates flow table entries in advance for all traffic matches possible for the switch. This mode can be compared with typical routing table entries today, where all static entries are installed ahead of time; no request is then sent to the controller, since all incoming flows find a matching entry. A major advantage of proactive mode is that all packets are forwarded at line rate (considering all flow table entries in TCAM) and no delay is added. The third, hybrid mode combines the flexibility of reactive mode for one set of traffic with the low-latency forwarding of proactive mode for the rest of the traffic.
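The reactive path described above can be sketched in a few lines: a table miss triggers a packet-in to the controller, which installs a rule so that later packets of the same flow are handled locally. The match/action vocabulary, class names, and addresses here are invented; this is a model of the behavior, not real OpenFlow code.

```python
# Minimal model of reactive-mode flow handling: first packet of a flow
# misses the table and consults the controller; subsequent packets hit
# the installed rule. All names and formats are illustrative.

class Controller:
    def __init__(self):
        self.packet_ins = 0
    def packet_in(self, packet):
        """Toy policy: decide an output action for the flow."""
        self.packet_ins += 1
        return f"output:port-for-{packet['dst']}"

class Switch:
    def __init__(self, controller):
        self.flow_table = {}        # match (here: dst address) -> action
        self.controller = controller
    def handle(self, packet):
        dst = packet["dst"]
        if dst in self.flow_table:                   # table hit: fast path
            return self.flow_table[dst]
        action = self.controller.packet_in(packet)   # miss: ask controller
        self.flow_table[dst] = action                # install rule (reactive)
        return action

ctl = Controller()
sw = Switch(ctl)
sw.handle({"dst": "10.0.0.1"})   # miss -> controller consulted
sw.handle({"dst": "10.0.0.1"})   # hit  -> handled locally
print(ctl.packet_ins)            # 1
```

Proactive mode corresponds to pre-populating `flow_table` before any traffic arrives, so `packet_in` is never called.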
Software-defined mobile networking (SDMN) is an approach to the design of mobile networks where all protocol-specific features are implemented in software, maximizing the use of generic and commodity hardware and software in both the core network and the radio access network. It is proposed as an extension of the SDN paradigm to incorporate mobile-network-specific functionality. Since 3GPP Release 14, control and user plane separation has been introduced into the mobile core network architectures with the PFCP protocol.
An SD-WAN is a Wide Area Network (WAN) managed using the principles of software-defined networking. The main driver of SD-WAN is to lower WAN costs using more affordable and commercially available leased lines, as an alternative or partial replacement of more expensive MPLS lines. Control and management is administered separately from the hardware with central controllers allowing for easier configuration and administration.
An SD-LAN is a local area network (LAN) built around the principles of software-defined networking, though there are key differences in topology, network security, application visibility and control, management and quality of service. SD-LAN decouples the control, management, and data planes to enable a policy-driven architecture for wired and wireless LANs. SD-LANs are characterized by their use of a cloud management system and wireless connectivity without the presence of a physical controller.
Security using the SDN paradigm
SDN architecture may enable, facilitate or enhance network-related security applications due to the controller's central view of the network, and its capacity to reprogram the data plane at any time. While security of SDN architecture itself remains an open question that has already been studied a couple of times in the research community, the following paragraphs only focus on the security applications made possible or revisited using SDN.
Several research works on SDN have already investigated security applications built upon the SDN controller, with different aims in mind. Distributed denial of service (DDoS) detection and mitigation, as well as botnet and worm propagation detection, are some concrete use-cases of such applications: the idea is to periodically collect network statistics from the forwarding plane in a standardized manner (e.g. using OpenFlow), and then apply classification algorithms to those statistics in order to detect network anomalies. If an anomaly is detected, the application instructs the controller how to reprogram the data plane in order to mitigate it.
Another kind of security application leverages the SDN controller by implementing moving target defense (MTD) algorithms. MTD algorithms are typically used to make any attack on a given system or network more difficult than usual by periodically hiding or changing key properties of that system or network. In traditional networks, implementing MTD algorithms is not trivial, since it is difficult to build a central authority capable of determining, for each part of the system to be protected, which key properties are hidden or changed. In an SDN network, such tasks become more straightforward thanks to the centrality of the controller. One application can, for example, periodically assign virtual IPs to hosts within the network, with the virtual-to-real IP mapping performed by the controller. Another application can simulate fake open/closed/filtered ports on random hosts in the network in order to add significant noise during the reconnaissance phase (e.g. scanning) performed by an attacker.
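The virtual-IP idea can be sketched as a controller that periodically re-randomizes the addresses the rest of the network sees while keeping a virtual-to-real mapping for rewriting packets. All addresses, the pool size, and the class name below are made up for illustration; a real deployment would push the rewrite rules into the data plane.

```python
# Illustrative moving-target-defense sketch: the controller rotates the
# virtual IPs assigned to hosts, so addresses learned by a scanner go stale.
# Everything here (addresses, pool, API) is invented for the example.
import random

class MTDController:
    def __init__(self, real_ips, pool_size=254, seed=None):
        self.real_ips = real_ips
        self.pool = [f"10.99.0.{i}" for i in range(1, pool_size + 1)]
        self.rng = random.Random(seed)
        self.v2r = {}
        self.rotate()

    def rotate(self):
        """One rotation period: give every host a fresh random virtual IP."""
        vips = self.rng.sample(self.pool, len(self.real_ips))
        self.v2r = dict(zip(vips, self.real_ips))

    def translate(self, virtual_ip):
        """Data-plane rewrite: virtual destination -> real destination."""
        return self.v2r.get(virtual_ip)  # None means stale or never-assigned

c = MTDController(["192.168.1.10", "192.168.1.11"], seed=1)
vip = next(iter(c.v2r))
print(c.translate(vip) in c.real_ips)  # True: current mapping resolves
c.rotate()                             # previously learned virtual IPs go stale
```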
Additional value regarding security in SDN-enabled networks can also be gained using FlowVisor and FlowChecker. The former allows multiple separated logical networks to share a single hardware forwarding plane. With this approach the same hardware resources can be used for production and development purposes, as well as for separating monitoring, configuration and Internet traffic; each scenario can have its own logical topology, called a slice. In conjunction with this approach, FlowChecker validates the new OpenFlow rules that users deploy within their own slices.
SDN controller applications are mostly deployed in large-scale scenarios, which require comprehensive checks for possible programming errors. A system to do this, called NICE, was described in 2012. Introducing an overarching security architecture requires a comprehensive and protracted approach to SDN. Since SDN's introduction, designers have been looking at possible ways to secure it that do not compromise scalability. One such architecture is SN-SECA (SDN+NFV Security Architecture).
Group Data Delivery Using SDN
Distributed applications that run across datacenters usually replicate data for the purpose of synchronization, fault resiliency, load balancing and getting data closer to users (which reduces latency to users and increases their perceived throughput). Also, many applications, such as Hadoop, replicate data within a datacenter across multiple racks to increase fault tolerance and make data recovery easier. All of these operations require data delivery from one machine or datacenter to multiple machines or datacenters. The process of reliably delivering data from one machine to multiple machines is referred to as Reliable Group Data Delivery (RGDD).
SDN switches can be used for RGDD via installation of rules that allow forwarding to multiple outgoing ports. For example, OpenFlow has provided support for group tables since version 1.1, which makes this possible. Using SDN, a central controller can carefully and intelligently set up forwarding trees for RGDD. Such trees can be built while paying attention to network congestion/load status to improve performance. For example, MCTCP is a scheme for delivery to many nodes inside datacenters that relies on the regular and structured topologies of datacenter networks, while DCCast and QuickCast are approaches for fast and efficient data and content replication across datacenters over private WANs.
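The controller's role in RGDD can be sketched as computing a forwarding tree and installing one multi-port rule per switch, so each link carries the data only once however many receivers subscribe. The topology, group name, and rule format below are invented; real group-table rules (e.g. OpenFlow group entries) carry considerably more structure.

```python
# Sketch of centralized RGDD setup: given a forwarding tree rooted at the
# source, emit one replication rule per switch. All names are illustrative.

def install_tree(tree, group):
    """tree: {switch: [child_switch_or_host, ...]} -> per-switch group rules."""
    rules = {}
    for switch, children in tree.items():
        # One rule replicating the group's packets to every child port.
        rules[switch] = {"match": group, "output": sorted(children)}
    return rules

# Source behind s1 replicates to hosts h1..h3 through two switches:
# s1 sends to h1 and to s2; s2 fans out to h2 and h3.
tree = {"s1": ["s2", "h1"], "s2": ["h2", "h3"]}
rules = install_tree(tree, group="replica-set-7")
print(rules["s2"]["output"])  # ['h2', 'h3']
```

A congestion-aware controller would choose the shape of `tree` (which links to fan out on) before installing the rules, which is where schemes like DCCast and QuickCast differ.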
Relationship to NFV
Network Function Virtualization (NFV) is a concept that complements SDN; NFV is not dependent on SDN or SDN concepts. NFV decouples software from hardware to enable flexible network deployment and dynamic operation. NFV deployments typically use commodity servers to run software versions of network services that previously were hardware-based. These software-based services that run in an NFV environment are called Virtual Network Functions (VNFs). Hybrid SDN-NFV programs have been proposed to provide highly efficient, elastic, and scalable capabilities, aiming to accelerate service innovation and provisioning using standard IT virtualization technologies. SDN provides agility in controlling generic forwarding devices, such as routers and switches, by using SDN controllers; NFV, in turn, provides agility for network applications by using virtualized servers. It is entirely possible to implement a VNF as a standalone entity using existing networking and orchestration paradigms. However, there are inherent benefits in leveraging SDN concepts to implement and manage an NFV infrastructure, particularly in the management and orchestration of VNFs, which is why multivendor platforms are being defined that incorporate SDN and NFV in concerted ecosystems.
Relationship to DPI
Deep Packet Inspection (DPI) provides the network with application-awareness, while SDN provides applications with network-awareness. Although SDN will radically change generic network architectures, it should cope with traditional network architectures to offer high interoperability. A new SDN-based network architecture should consider all the capabilities currently provided by separate devices or software other than the main forwarding devices (routers and switches), such as DPI and security appliances.
^Sushant Jain, Alok Kumar, Subhasree Mandal, Joon Ong, Leon Poutievski, Arjun Singh, Subbaiah Venkata, Jim Wanderer, Junlan Zhou, Min Zhu, Jonathan Zolla, Urs Hölzle, Stephen Stuart and Amin Vahdat (Google) (August 12–16, 2013). "B4: Experience with a Globally-Deployed Software Defined WAN" (PDF).
^S.H. Yeganeh, Y. Ganjali, "Kandoo: A Framework for Efficient and Scalable Offloading of Control Applications," proceedings of HotSDN, Helsinki, Finland, 2012.
^R. Ahmed, R. Boutaba, "Design considerations for managing wide area software defined networks," Communications Magazine, IEEE, vol. 52, no. 7, pp. 116–123, July 2014.
^T. Koponen et al., "Onix: A Distributed Control Platform for Large scale Production Networks," proceedings USENIX, ser. OSDI’10, Vancouver, Canada, 2010.
^D. Tuncer, M. Charalambides, S. Clayman, G. Pavlou, "Adaptive Resource Management and Control in Software Defined Networks," Network and Service Management, IEEE Transactions on, vol. 12, no. 1, pp. 18–33, March 2015.
^B. Heller, R. Sherwood, and N. McKeown, "The Controller Placement Problem," proceedings of HotSDN’12, 2012.
^Y.N. Hu, W.D. Wang, X.Y. Gong, X.R. Que, S.D. Cheng, "On the placement of controllers in software-defined networks," Journal of China Universities of Posts and Telecommunications, vol. 19, Supplement 2, no. 0, pp. 92 – 171, 2012.
^F.J. Ros, P.M. Ruiz, "Five nines of southbound reliability in software defined networks," proceedings of HotSDN’14, 2014.
^D. Tuncer, M. Charalambides, S. Clayman, G. Pavlou, "On the Placement of Management and Control Functionality in Software Defined Networks," proceedings of 2nd IEEE International Workshop on Management of SDN and NFV Systems (ManSDN/NFV), Barcelona, Spain, November 2015.
^Jose Costa-Requena, Jesús Llorente Santos, Vicent Ferrer Guasch, Kimmo Ahokas, Gopika Premsankar, Sakari Luukkainen, Ijaz Ahmed, Madhusanka Liyanage, Mika Ylianttila, Oscar López Pérez, Mikel Uriarte Itzazelaia, Edgardo Montes de Oca, SDN and NFV Integration in Generalized Mobile Network Architecture , in Proc. of European Conference on Networks and Communications (EUCNC), Paris, France. June 2015.
^Kreutz, Diego; Ramos, Fernando; Verissimo, Paulo (2013). "Towards secure and dependable software-defined networks". Proceedings of the second ACM SIGCOMM workshop on Hot topics in software defined networking. pp. 50–60.
^Scott-Hayward, Sandra; O'Callaghan, Gemma; Sezer, Sakir (2013). "SDN security: A survey". Future Networks and Services (SDN4FNS), 2013 IEEE SDN for. pp. 1–7.
^Benton, Kevin; Camp, L Jean; Small, Chris (2013). "Openflow vulnerability assessment". Proceedings of the second ACM SIGCOMM workshop on Hot topics in software defined networking. pp. 151–152.
^Abdou, AbdelRahman; van Oorschot, Paul; Wan, Tao (May 2018). "A Framework and Comparative Analysis of Control Plane Security of SDN and Conventional Networks". IEEE Communications Surveys and Tutorials. to appear.
^Giotis, K; Argyropoulos, Christos; Androulidakis, Georgios; Kalogeras, Dimitrios; Maglaris, Vasilis (2014). "Combining OpenFlow and sFlow for an effective and scalable anomaly detection and mitigation mechanism on SDN environments". Computer Networks. 62: 122–136. doi:10.1016/j.comnet.2013.10.014.
^Braga, Rodrigo; Mota, Edjard; Passito, Alexandre (2010). "Lightweight DDoS flooding attack detection using NOX/OpenFlow". Local Computer Networks (LCN), 2010 IEEE 35th Conference on. pp. 408–415.
^Feamster, Nick (2010). "Outsourcing home network security". Proceedings of the 2010 ACM SIGCOMM workshop on Home networks. pp. 37–42.
^Jin, Ruofan & Wang, Bing (2013). "Malware detection for mobile devices using software-defined networking". Research and Educational Experiment Workshop (GREE), 2013 Second GENI. 81-88.
^Jafarian, Jafar Haadi; Al-Shaer, Ehab; Duan, Qi (2012). "Openflow random host mutation: transparent moving target defense using software defined networking". Proceedings of the first workshop on Hot topics in software defined networks. pp. 127–132.
^Al-Shaer, Ehab & Al-Haj, Saeed (2010). "FlowChecker: Configuration analysis and verification of federated OpenFlow infrastructures". Proceedings of the 3rd ACM workshop on Assurable and usable security configuration. pp. 37–44.
^Canini, Marco; Venzano, Daniele; Peresini, Peter; Kostic, Dejan; Rexford, Jennifer; et al. (2012). A NICE Way to Test OpenFlow Applications. NSDI. pp. 127–140.
^Bernardo and Chua (2015). Introduction and Analysis of SDN and NFV Security Architecture (SA-SECA). 29th IEEE AINA 2015. pp. 796–801.
Wireless networking is a method by which homes, telecommunications networks and business installations avoid the costly process of introducing cables into a building or between various equipment locations. Wireless telecommunications networks are generally implemented and administered using radio communication. This implementation takes place at the physical level (layer) of the OSI model network structure.
The first professional wireless network was developed under the brand ALOHAnet in 1969 at the University of Hawaii and became operational in June 1971. The first commercial wireless network was the WaveLAN product family, developed by NCR in 1986.
Computers are very often connected to networks using wireless links, e.g., WLANs.
Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 mi) apart.
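The roughly 48 km relay spacing follows from the radio horizon set by antenna height. A minimal sketch using the standard-refraction approximation d ≈ 4.12 √h (d in kilometres, h in metres; tower heights here are illustrative):

```python
from math import sqrt

def radio_horizon_km(antenna_height_m):
    # Standard-refraction (4/3 Earth radius) approximation
    return 4.12 * sqrt(antenna_height_m)

def max_link_km(h1_m, h2_m):
    # The two horizons must overlap for line of sight
    return radio_horizon_km(h1_m) + radio_horizon_km(h2_m)

# Two 35 m towers reach roughly the ~48 km spacing cited above
print(round(max_link_km(35, 35), 1))  # ≈ 48.7 km
```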
Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
Wireless personal area networks (WPANs) connect devices within a relatively small area, generally within a person's reach. For example, both Bluetooth radio and invisible infrared light provide a WPAN for interconnecting a headset to a laptop. ZigBee also supports WPAN applications. Wi-Fi PANs are becoming commonplace (2010) as equipment designers start to integrate Wi-Fi into a variety of consumer electronic devices. Intel "My WiFi" and Windows 7 "virtual Wi-Fi" capabilities have made Wi-Fi PANs simpler and easier to set up and configure.
Wireless LANs are often used for connecting to local resources and to the Internet
A wireless local area network (WLAN) links two or more devices over a short distance using a wireless distribution method, usually providing a connection through an access point for internet access. The use of spread-spectrum or OFDM technologies may allow users to move around within a local coverage area, and still remain connected to the network.
Products using the IEEE 802.11 WLAN standards are marketed under the Wi-Fi brand name.
Fixed wireless technology implements point-to-point links between computers or networks at two distant locations, often using dedicated microwave or modulated laser light beams over line of sight paths. It is often used in cities to connect networks in two or more buildings without installing a wired link.
Devices such as routers, or hotspots created by mobile smartphones, are often used to connect to Wi-Fi.
Wireless ad hoc network
A wireless ad hoc network, also known as a wireless mesh network or mobile ad hoc network (MANET), is a wireless network made up of radio nodes organized in a mesh topology. Each node forwards messages on behalf of the other nodes and each node performs routing. Ad hoc networks can "self-heal", automatically re-routing around a node that has lost power. Various network layer protocols are needed to realize ad hoc mobile networks, such as Associativity-Based Routing.
Wireless wide area networks are wireless networks that typically cover large areas, such as between neighbouring towns and cities, or city and suburb. These networks can be used to connect branch offices of a business or as a public Internet access system. The wireless connections between access points are usually point-to-point microwave links using parabolic dishes on the 2.4 GHz and 5.8 GHz bands, rather than the omnidirectional antennas used with smaller networks. A typical system contains base station gateways, access points and wireless bridging relays. Other configurations are mesh systems where each access point also acts as a relay. When combined with renewable energy sources such as photovoltaic solar panels or wind turbines, they can be stand-alone systems.
A cellular network or mobile network is a radio network distributed over land areas called cells, each served by at least one fixed-location transceiver, known as a cell site or base station. In a cellular network, each cell characteristically uses a different set of radio frequencies from all its immediate neighbouring cells to avoid interference.
When joined together these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission.
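The neighbour-avoidance rule above can be sketched as a simple greedy channel-group assignment (an illustrative toy, not a real frequency-planning tool; the cell names and adjacency are hypothetical):

```python
def assign_channel_groups(neighbours, num_groups):
    """Greedy assignment: each cell gets the lowest-numbered
    channel group not already used by an assigned neighbour."""
    assignment = {}
    for cell in sorted(neighbours):
        used = {assignment[n] for n in neighbours[cell] if n in assignment}
        assignment[cell] = next(g for g in range(num_groups) if g not in used)
    return assignment

# Hypothetical 4-cell layout; listed cells share a border
adjacency = {"A": ["B", "C"], "B": ["A", "C"],
             "C": ["A", "B", "D"], "D": ["C"]}
plan = assign_channel_groups(adjacency, 3)
# No cell ends up on the same group as a neighbour
assert all(plan[c] != plan[n] for c in adjacency for n in adjacency[c])
print(plan)
```

Real deployments use regular reuse patterns (e.g., a reuse factor of 3, 4, or 7 on a hexagonal grid) rather than ad hoc colouring, but the constraint being satisfied is the same.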
Global System for Mobile Communications (GSM): The GSM network is divided into three major systems: the switching system, the base station system, and the operation and support system. The cell phone connects to the base system station which then connects to the operation and support station; it then connects to the switching station where the call is transferred to where it needs to go. GSM is the most common standard and is used for a majority of cell phones.
Personal Communications Service (PCS): PCS is a radio band that can be used by mobile phones in North America and South Asia. Sprint was the first carrier to set up a PCS network.
D-AMPS: Digital Advanced Mobile Phone Service, an upgraded version of AMPS, is being phased out due to advancement in technology. The newer GSM networks are replacing the older system.
Global area network
A global area network (GAN) is a network used for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.
Space networks are networks used for communication between spacecraft, usually in the vicinity of the Earth. An example of this is NASA's Space Network.
Some examples of usage include cellular phones which are part of everyday wireless networks, allowing easy personal communications. Another example, Intercontinental network systems, use radio satellites to communicate across the world. Emergency services such as the police utilize wireless networks to communicate effectively as well. Individuals and businesses use wireless networks to send and share data rapidly, whether it be in a small office building or across the world.
In a general sense, wireless networks offer a vast variety of uses by both business and home users.
"Now, the industry accepts a handful of different wireless technologies. Each wireless technology is defined by a standard that describes unique functions at both the Physical and the Data Link layers of the OSI model. These standards differ in their specified signaling methods, geographic ranges, and frequency usages, among other things. Such differences can make certain technologies better suited to home networks and others better suited to network larger organizations."
Each standard varies in geographical range, thus making one standard more ideal than the next depending on what it is one is trying to accomplish with a wireless network.
The performance of wireless networks satisfies a variety of applications such as voice and video. The use of this technology also gives room for expansions, such as from 2G to 3G and on to 4G and 5G, the fourth and fifth generations of cell phone mobile communications standards. As wireless networking has become commonplace, sophistication has increased through configuration of network hardware and software, and greater capacity to send and receive larger amounts of data, faster, has been achieved. Many wireless networks now run on LTE, a 4G mobile communication standard. Users of an LTE network can have data speeds up to ten times faster than on a 3G network.
Space is another characteristic of wireless networking. Wireless networks offer many advantages in difficult-to-wire areas, such as across a street or river, to a warehouse on the other side of the premises, or between buildings that are physically separated but operate as one. Wireless networks also allow users to designate a certain space within which the network can communicate with other devices.
Space is also created in homes as a result of eliminating the clutter of wiring. This technology provides an alternative to installing physical network media such as twisted pair, coaxial cable, or fiber optics, which can also be expensive.
For homeowners, wireless technology is an effective option compared to Ethernet for sharing printers, scanners, and high-speed Internet connections. WLANs help save the cost of installing cable media, save time from physical installation, and also create mobility for devices connected to the network.
Wireless networks are simple and can require as little as a single wireless access point connected directly to the Internet via a router.
Wireless Network Elements
The telecommunications network at the physical layer also consists of many interconnected wireline network elements (NEs). These NEs can be stand-alone systems or products that are either supplied by a single manufacturer or are assembled by the service provider (user) or system integrator with parts from several different manufacturers.
Wireless NEs are the products and devices used by a wireless carrier to provide support for the backhaul network as well as a mobile switching center (MSC).
Reliable wireless service depends on the network elements at the physical layer to be protected against all operational environments and applications (see GR-3171, Generic Requirements for Network Elements Used in Wireless Networks – Physical Layer Criteria).
Especially important are the NEs located on the cell tower and at the base station (BS) cabinet. The attachment hardware and the positioning of the antenna and associated closures and cables are required to have adequate strength, robustness, corrosion resistance, and resistance against wind, storms, icing, and other weather conditions. Requirements for individual components, such as hardware, cables, connectors, and closures, shall take into consideration the structure to which they are attached.
Compared to wired systems, wireless networks are frequently subject to electromagnetic interference. This can be caused by other networks or other types of equipment that generate radio waves within, or close to, the radio bands used for communication. Interference can degrade the signal or cause the system to fail.
Absorption and reflection
Some materials absorb electromagnetic waves, preventing them from reaching the receiver; in other cases, particularly with metallic or conductive materials, reflection occurs. This can cause dead zones where no reception is available. Aluminium foil-backed thermal insulation in modern homes can easily reduce indoor mobile signals by 10 dB, frequently leading to complaints about bad reception of long-distance rural cell signals.
In multipath fading, two or more different routes taken by the signal, due to reflections, can cause it to cancel out at certain locations and to be stronger in other places (upfade).
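The cancellation can be sketched as a phasor sum of two equal-strength rays whose phase difference is set by the path-length difference (a simplified model; real channels have many paths of unequal strength):

```python
from math import cos, pi, sin, sqrt

def two_path_amplitude(wavelength_m, direct_m, reflected_m):
    """Combined amplitude of a unit-amplitude direct ray and a
    unit-amplitude reflected ray arriving with a phase offset."""
    phase = 2 * pi * (reflected_m - direct_m) / wavelength_m
    # Phasor (vector) sum of the two rays
    return sqrt((1 + cos(phase)) ** 2 + sin(phase) ** 2)

wl = 0.125  # ~2.4 GHz carrier wavelength in metres
print(two_path_amplitude(wl, 10.0, 10.0 + wl / 2))  # half-wave offset: deep fade, ~0
print(two_path_amplitude(wl, 10.0, 10.0 + wl))      # full-wave offset: upfade, ~2
```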
Hidden node problem
In the hidden node problem, Station A can communicate with Station B, and Station C can also communicate with Station B. However, Stations A and C cannot communicate with each other, yet their signals can interfere at B.
The wireless spectrum is a limited resource and shared by all nodes in the range of its transmitters. Bandwidth allocation becomes complex with multiple participating users. Often users are not aware that advertised numbers (e.g., for IEEE 802.11 equipment or LTE networks) are not their capacity, but shared with all other users and thus the individual user rate is far lower. With increasing demand, the capacity crunch is more and more likely to happen. User-in-the-loop (UIL) may be an alternative solution to ever upgrading to newer technologies for over-provisioning.
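The shared-capacity point can be illustrated with simple division (the link rate and user count here are hypothetical):

```python
def per_user_rate_mbps(shared_capacity_mbps, active_users):
    # An advertised link rate is shared by all active users in
    # range of the same transmitter, not delivered per user
    return shared_capacity_mbps / active_users

# A nominal 300 Mbit/s access point with 12 active users
print(per_user_rate_mbps(300, 12))  # 25.0 Mbit/s each, at best
```

In practice contention overhead and unequal signal quality push individual rates well below even this even split.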
SISO, SIMO, MISO and MIMO refer to the use of single or multiple antennas at the transmitter and receiver. Using multiple antennas and transmitting in different frequency channels can reduce fading, and can greatly increase the system capacity.
Shannon's theorem can describe the maximum data rate of any single wireless link, which relates to the bandwidth in hertz and to the noise on the channel.
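A minimal sketch of the Shannon–Hartley limit (the bandwidth and SNR figures are illustrative):

```python
from math import log2

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    # Shannon-Hartley theorem: C = B * log2(1 + S/N)
    return bandwidth_hz * log2(1 + snr_linear)

# A 20 MHz channel at a linear SNR of 1000 (30 dB)
print(shannon_capacity_bps(20e6, 1000))  # ~199.3 Mbit/s
```

This is the ceiling for a single spatial stream; MIMO techniques (discussed below) multiply throughput by exploiting additional independent paths.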
One can greatly increase channel capacity by using MIMO techniques, where multiple aerials or multiple frequencies can exploit multiple paths to the receiver to achieve much higher throughput – by a factor of the product of the frequency and aerial diversity at each end.
Under Linux, the Central Regulatory Domain Agent (CRDA) controls the setting of channels.
The total network bandwidth depends on how dispersive the medium is (more dispersive medium generally has better total bandwidth because it minimises interference), how many frequencies are available, how noisy those frequencies are, how many aerials are used and whether a directional antenna is in use, whether nodes employ power control and so on.
Cellular wireless networks generally have good capacity, due to their use of directional aerials and their ability to reuse radio channels in non-adjacent cells. Additionally, cells can be made very small using low-power transmitters; this is used in cities to give network capacity that scales linearly with population density.
Wireless access points are also often close to humans, but the drop off in power over distance is fast, following the inverse-square law.
The position of the United Kingdom's Health Protection Agency (HPA) is that “...radio frequency (RF) exposures from WiFi are likely to be lower than those from mobile phones.” It also saw “...no reason why schools and others should not use WiFi equipment.” In October 2007, the HPA launched a new “systematic” study into the effects of WiFi networks on behalf of the UK government, in order to calm fears that had appeared in the media in the period up to that time. Dr Michael Clark, of the HPA, says published research on mobile phones and masts does not add up to an indictment of WiFi.
^Anadiotis, Angelos-Christos; et al. (2010). "Towards Maximising Wireless Testbed Utilization Using Spectrum Slicing". In Thomas Magedanz; Athanasius Gavras; Huu Thanh Nguyen; Jeffrey S. Chase. Testbeds and Research Infrastructures, Development of Networks and Communities: 6th International ICST Conference, TridentCom 2010, Berlin, Germany, May 18–20, 2010, Revised Selected Papers. 6th International ICST Conference, TridentCom 2010, Berlin, Germany, May 18–20, 2010. 46. Springer Science & Business Media. p. 302. Retrieved 2015-07-19. […] Central Regulatory Domain Agent (CRDA) […] controls the channels to be set on the system, based on the regulations of each country.
^Daniels, Nicki (11 December 2006). "Wi-fi: should we be worried?". The Times. London. Retrieved 16 September 2007. All the expert reviews done here and abroad indicate that there is unlikely to be a health risk from wireless networks. … When we have conducted measurements in schools, typical exposures from WiFi are around 20 millionths of the international guideline levels of exposure to radiation. As a comparison, a child on a mobile phone receives up to 50 percent of guideline levels. So a year sitting in a classroom near a wireless network is roughly equivalent to 20 minutes on a mobile. If WiFi should be taken out of schools, then the mobile phone network should be shut down, too—and FM radio and TV, as the strength of their signals is similar to that from WiFi in classrooms....