Knowledge Base

Faraday Cage

Wikipedia Entry

Faraday cage

Faraday cage demonstration on volunteers in the Palais de la Découverte in Paris
EMI shielding around an MRI machine room
Faraday shield at a power plant in Heimbach, Germany
Faraday bag
Faraday bags are a type of Faraday cage made of flexible metallic fabric. They are typically used to block remote wiping or alteration of wireless devices recovered in criminal investigations, but may also be used by the general public to protect against data theft or to enhance digital privacy.

A Faraday cage or Faraday shield is an enclosure used to block electromagnetic fields. A Faraday shield may be formed by a continuous covering of conductive material, or in the case of a Faraday cage, by a mesh of such materials. Faraday cages are named after the English scientist Michael Faraday, who invented them in 1836.[1]

Video of a Faraday cage shielding a man from electricity

A Faraday cage operates because an external electrical field causes the electric charges within the cage's conducting material to be distributed so that they cancel the field's effect in the cage's interior. This phenomenon is used to protect sensitive electronic equipment from external radio frequency interference (RFI). Faraday cages are also used to enclose devices that produce RFI, such as radio transmitters, to prevent their radio waves from interfering with other nearby equipment. They are also used to protect people and equipment against actual electric currents such as lightning strikes and electrostatic discharges, since the enclosing cage conducts current around the outside of the enclosed space and none passes through the interior.

Faraday cages cannot block static or slowly varying magnetic fields, such as the Earth's magnetic field (a compass will still work inside). To a large degree, though, they shield the interior from external electromagnetic radiation if the conductor is thick enough and any holes are significantly smaller than the wavelength of the radiation. For example, certain computer forensic tests of electronic systems that require an environment free of electromagnetic interference can be carried out within a screened room. Such rooms are completely enclosed by one or more layers of fine metal mesh or perforated sheet metal; the metal layers are grounded to dissipate any electric currents generated by external or internal electromagnetic fields, and thus they block a large amount of the electromagnetic interference (see also electromagnetic shielding). A cage attenuates outgoing transmissions less than incoming ones: it can block electromagnetic pulses from natural phenomena very effectively, yet a tracking device inside the cage, especially one operating at higher frequencies, may still be able to transmit outward (for example, because cell phones operate at various radio frequencies, one phone may not work inside a cage while another one will).

A common misconception is that a Faraday cage provides complete blockage or attenuation; this is not true. The reception or transmission of radio waves, a form of electromagnetic radiation, to or from an antenna within a Faraday cage is heavily attenuated or blocked by the cage; however, the attenuation varies with the waveform, the frequency, the distance from the receiver or transmitter, and the transmitter's power. Near-field, high-powered transmissions such as HF RFID are more likely to penetrate. Solid cages generally attenuate fields over a broader range of frequencies than mesh cages.

History

In 1836, Michael Faraday observed that the excess charge on a charged conductor resided only on its exterior and had no influence on anything enclosed within it. To demonstrate this fact, he built a room coated with metal foil and allowed high-voltage discharges from an electrostatic generator to strike the outside of the room. He used an electroscope to show that there was no electric charge present on the inside of the room's walls.

Although this cage effect has been attributed to Michael Faraday's famous ice pail experiments performed in 1843, it was Benjamin Franklin in 1755 who observed the effect by lowering an uncharged cork ball suspended on a silk thread through an opening in an electrically charged metal can. In his words, "the cork was not attracted to the inside of the can as it would have been to the outside, and though it touched the bottom, yet when drawn out it was not found to be electrified (charged) by that touch, as it would have been by touching the outside. The fact is singular." Franklin had discovered the behavior of what we now refer to as a Faraday cage or shield (based on Faraday's later experiments which duplicated Franklin's cork and can).[2]

Additionally, the Abbé Nollet published an early account of an effect attributable to the cage effect in his Leçons de physique expérimentale.[3]

Operation

Animation showing how a Faraday cage (box) works. When an external electrical field (arrows) is applied, the electrons (little balls) in the metal move to the left side of the cage, giving it a negative charge, while the remaining unbalanced charge of the nuclei gives the right side a positive charge. These induced charges create an opposing electric field that cancels the external electric field throughout the box.

Continuous

A continuous Faraday shield is a hollow conductor. Externally or internally applied electromagnetic fields produce forces on the charge carriers (usually electrons) within the conductor; the charges are redistributed accordingly by electrostatic induction. The redistributed charges greatly reduce the voltage within the surface, to an extent that depends on the capacitance; however, full cancellation does not occur.[4]

Interior charges

If a charge is placed inside an ungrounded Faraday cage, the inner face of the cage becomes charged (in the same manner described for an external charge) so that no field exists within the body of the cage. However, this charging of the inner face redistributes the charges in the body of the cage, leaving the outer face with a charge equal in sign and magnitude to the one placed inside. Since the internal charge and the charge on the inner face cancel each other, the distribution of charge on the outer face is unaffected by the position of the internal charge within the cage. For all practical purposes, then, the cage generates the same external DC electric field that it would generate if it were simply charged with the charge placed inside it. The same is not true for electromagnetic waves.
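
The induced charges described above follow from Gauss's law; here is a minimal sketch of the standard argument for a point charge q enclosed by an isolated, initially neutral conducting shell:

    % Gauss's law over a closed surface S lying entirely inside the conducting
    % wall, where the electrostatic field is zero:
    \oint_S \mathbf{E} \cdot d\mathbf{A}
      = \frac{q + Q_{\text{inner}}}{\varepsilon_0} = 0
      \quad\Longrightarrow\quad Q_{\text{inner}} = -q .
    % The shell as a whole is neutral, so the outer face must carry
    Q_{\text{outer}} = -Q_{\text{inner}} = +q ,
    % which reproduces the external field of the charge placed inside.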

If the cage is grounded, the excess charge on the outer face is neutralized: the ground connection holds the outside of the cage at the same potential as its surroundings, so there is no voltage between them and therefore no external field. The charge on the inner face and the interior charge remain unchanged, so the field is kept inside the cage.

Exterior fields

Figure: skin depth versus frequency for several materials at room temperature; a red vertical line marks 50 Hz.

The effectiveness of shielding against a static electric field is largely independent of the geometry of the conductive material; static magnetic fields, however, can penetrate the shield completely.

In the case of varying electromagnetic fields, the faster the variation (i.e., the higher the frequency), the better the material resists penetration by the magnetic field. Here the shielding also depends on the electrical conductivity and the magnetic properties of the conductive materials used in the cage, as well as on their thickness.

A good idea of the effectiveness of a Faraday shield can be obtained from considerations of skin depth: the induced currents flow mostly near the surface and decay exponentially with depth into the material. Because a Faraday shield has finite thickness, the skin depth determines how well the shield works; a thicker shield can attenuate electromagnetic fields better, and down to lower frequencies.
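
As a rough illustration (not taken from the article), the sketch below evaluates the standard good-conductor skin-depth formula, delta = 1/sqrt(pi * f * mu * sigma), using typical textbook constants for copper; the chosen frequencies are arbitrary examples.

    # Skin depth in a good conductor: delta = 1 / sqrt(pi * f * mu * sigma).
    # The copper constants below are typical textbook values (an assumption,
    # not data quoted by this article).
    import math

    MU_0 = 4e-7 * math.pi      # vacuum permeability, H/m
    SIGMA_CU = 5.8e7           # electrical conductivity of copper, S/m
    MU_R_CU = 1.0              # relative permeability (copper is non-magnetic)

    def skin_depth(freq_hz, sigma=SIGMA_CU, mu_r=MU_R_CU):
        """Skin depth in metres at the given frequency."""
        return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * MU_0 * sigma)

    for f in (50.0, 1e3, 1e6, 1e9):   # mains, audio, AM radio, microwave
        print(f"{f:>12.0f} Hz: skin depth ~ {skin_depth(f) * 1e3:.4f} mm")

At 50 Hz the skin depth of copper is on the order of a centimetre, which is why thin foil that comfortably blocks radio frequencies does little against mains-frequency magnetic fields.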

Faraday cage

Faraday cages are Faraday shields which have holes in them and are therefore more complex to analyze. Whereas continuous shields essentially attenuate all wavelengths shorter than the skin depth, the holes in a cage may permit shorter wavelengths to pass through or may set up "evanescent fields" (oscillating fields that do not propagate as EM waves) just beneath the surface. The shorter the wavelength, the more easily it passes through a mesh of a given size. Thus, to work well at short wavelengths (i.e., high frequencies), the holes in the cage must be smaller than the wavelength of the incident wave. Faraday cages may therefore be thought of as high-pass filters.
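
To make the hole-size rule of thumb concrete, the small sketch below compares a mesh opening with the wavelength of an incident wave; the 1 mm opening, the example frequencies, and the factor-of-ten margin are illustrative assumptions rather than values from the article.

    # Rule of thumb: a mesh shields well when its openings are much smaller than
    # the wavelength of the incident radiation. The factor-of-ten margin is an
    # illustrative assumption, not a precise engineering criterion.
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def shields_well(hole_size_m, freq_hz, margin=10.0):
        wavelength_m = C / freq_hz
        return hole_size_m * margin < wavelength_m

    hole = 1e-3  # a 1 mm mesh opening, roughly the scale of an oven-door screen
    for f in (2.45e9, 30e9, 300e9):
        verdict = "well shielded" if shields_well(hole, f) else "may leak through"
        print(f"{f / 1e9:6.2f} GHz (wavelength {C / f * 1e3:7.2f} mm): {verdict}")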

Examples

  • Faraday cages are routinely used in analytical chemistry to reduce noise while making sensitive measurements.
  • Faraday cages, more specifically dual paired seam Faraday bags, are often used in digital forensics to prevent remote wiping and alteration of criminal digital evidence.
  • The U.S. and NATO Tempest standards, and similar standards in other countries, include Faraday cages as part of a broader effort to provide emission security for computers.
  • Automobile and airplane passenger compartments are essentially Faraday cages, protecting passengers from electric charges such as lightning strikes.
  • A booster bag (shopping bag lined with aluminium foil) acts as a Faraday cage. It is often used by shoplifters to steal RFID-tagged items.[5]
  • Elevators and other rooms with metallic conducting frames and walls simulate a Faraday cage effect, leading to a loss of signal and "dead zones" for users of cellular phones, radios, and other electronic devices that require external electromagnetic signals. During training, firefighters and other first responders are cautioned that their two-way radios will probably not work inside elevators and to make allowances for that. Small, physical Faraday cages are used by electronics engineers during equipment testing to simulate such an environment and to make sure that the device gracefully handles these conditions.[citation needed]
  • Properly designed conductive clothing can also form a protective Faraday cage. Some electrical linemen wear Faraday suits, which allow them to work on live, high-voltage power lines without risk of electrocution. The suit prevents electric current from flowing through the body, and has no theoretical voltage limit. Linemen have successfully worked even the highest voltage (Kazakhstan's Ekibastuz–Kokshetau line 1150 kV) lines safely.[citation needed]
    • Austin Richards, a physicist in California, created a metal Faraday suit in 1997 that protects him from tesla coil discharges. In 1998, he named the character in the suit Doctor MegaVolt and has performed all over the world and at Burning Man nine different years.
  • The scan room of a magnetic resonance imaging (MRI) machine is designed as a Faraday cage. This prevents external RF (radio frequency) signals from being added to data collected from the patient, which would affect the resulting image. Radiographers are trained to identify the characteristic artifacts created on images should the Faraday cage be damaged during a thunderstorm.
  • A microwave oven utilizes a Faraday cage, which can be partly seen covering the transparent window, to contain the electromagnetic energy within the oven and to shield the exterior from radiation.
  • Plastic bags that are impregnated with metal are used to enclose electronic toll collection devices whenever tolls should not be charged to those devices, such as during transit or when the user is paying cash.[citation needed]
  • The shield of a screened cable, such as USB cables or the coaxial cable used for cable television, protects the internal conductors from external electrical noise and prevents the RF signals from leaking out.

See also

References

  1. ^ "Michael Faraday". Encarta. Archived from the original on 31 October 2009. Retrieved 20 November 2008.
  2. ^ J. D. Krauss, Electromagnetics, 4Ed, McGraw-Hill, 1992, ISBN 0-07-035621-1
  3. ^ https://books.google.co.uk/books?id=j2GMJwc3h0kC&pg=PA95&dq=Faraday+Cage+Nollet&hl=en&sa=X&ved=0ahUKEwj_ycm7rp3hAhX0sHEKHeuHBsk4MhDoAQhTMAg#v=onepage&q=Faraday%20Cage%20Nollet&f=false
  4. ^ Chapman, S. Jonathan; Hewett, David P.; Trefethen, Lloyd N. "Mathematics of the Faraday Cage". https://people.maths.ox.ac.uk/trefethen/chapman_hewett_trefethen.pdf
  5. ^ Hamill, Sean (22 December 2008). "As Economy Dips, Arrests for Shoplifting Soar". The New York Times. Retrieved 12 August 2009.

External links


Fiber

The infinitely scalable and highly customizable parallel peer-chain architecture of the Skycoin platform.


Fiber is the structural layer of Skycoin’s blockchain platform, custom built to be adaptable to any blockchain application’s needs. Running on innovative and highly efficient code, Fiber enables the Skycoin platform to scale into, and disrupt, virtually every industry with blockchain solutions.

Fiber is as sophisticated as it is elegant, capable of expanding and adapting to the needs of numerous types of distributed applications.

https://www.skycoin.net/fiber/


FPGA

Field-programmable gate array

A Stratix IV FPGA from Altera

A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence the term "field-programmable". The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an Application-Specific Integrated Circuit (ASIC). Circuit diagrams were previously used to specify the configuration, but this is increasingly rare due to the advent of electronic design automation tools.

A Spartan FPGA from Xilinx

FPGAs contain an array of programmable logic blocks, and a hierarchy of "reconfigurable interconnects" that allow the blocks to be "wired together", like many logic gates that can be inter-wired in different configurations. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory.[1] Many FPGAs can be reprogrammed to implement different logic functions,[2] allowing flexible reconfigurable computing as performed in computer software.

Technical design

Contemporary field-programmable gate arrays (FPGAs) have large resources of logic gates and RAM blocks to implement complex digital computations.[2] As FPGA designs employ very fast I/O rates and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time.

Floor planning enables resource allocation within FPGAs to meet these time constraints. FPGAs can be used to implement any logical function that an ASIC can perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design[3] and the low non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher unit cost), offer advantages for many applications.[1]

Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin, allowing the engineer to set low rates on lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on heavily loaded pins on high-speed channels that would otherwise run too slowly.[4][5] Also common are quartz-crystal oscillators, on-chip resistance-capacitance oscillators, and phase-locked loops with embedded voltage-controlled oscillators used for clock generation and management and for high-speed serializer-deserializer (SERDES) transmit clocks and receiver clock recovery. Fairly common are differential comparators on input pins designed to be connected to differential signaling channels. A few "mixed signal FPGAs" have integrated peripheral analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) with analog signal conditioning blocks allowing them to operate as a system-on-a-chip (SoC).[6] Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric.

History

The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (field-programmable). However, programmable logic was hard-wired between logic gates.[7]

Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultra-violet lamp on the die to erase the EPROM cells that held the device configuration.[8] In December 2015, Intel acquired Altera.

Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array in 1985 – the XC2064.[9] The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market.[10] The XC2064 had 64 configurable logic blocks (CLBs), with two three-input lookup tables (LUTs).[11] More than 20 years later, Freeman was entered into the National Inventors Hall of Fame for his invention.[12][13]

In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful and a patent related to the system was issued in 1992.[7]

Altera and Xilinx continued unchallenged and quickly grew from 1985 to the mid-1990s, when competitors sprouted up, eroding significant market share. By 1993, Actel (now Microsemi) was serving about 18 percent of the market.[10] By 2013, Altera (31 percent), Actel (10 percent) and Xilinx (36 percent) together represented approximately 77 percent of the FPGA market.[14]

The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications.[15]

21st century developments

A recent trend[when?] has been to take the coarse-grained architectural approach a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete "system on a programmable chip". This work mirrors the architecture created by Ron Perlof and Hana Potash of Burroughs Advanced Systems Group in 1982 which combined a reconfigurable CPU architecture on a single chip called the SB24.[citation needed]

Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 All Programmable SoC,[16] which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA's logic fabric,[17] and in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture. The Microsemi SmartFusion devices incorporate an ARM Cortex-M3 hard processor core (with up to 512 kB of flash and 64 kB of RAM) and analog peripherals such as multi-channel analog-to-digital converters and digital-to-analog converters into their flash memory-based FPGA fabric.

A Xilinx Zynq-7000 All Programmable System on a Chip.

An alternate approach to using hard-macro processors is to make use of soft processor IP cores that are implemented within the FPGA logic. Nios II, MicroBlaze and Mico32 are examples of popular softcore processors. Many modern FPGAs are programmed at "run time", which has led to the idea of reconfigurable computing or reconfigurable systems – CPUs that reconfigure themselves to suit the task at hand. Additionally, new, non-FPGA architectures are beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor cores and FPGA-like programmable cores on the same chip.

Companies like Microsoft have started to use FPGAs to accelerate high-performance, computationally intensive systems (like the data centers that operate their Bing search engine), due to the performance per watt advantage FPGAs deliver.[18] Microsoft began using FPGAs to accelerate Bing in 2014, and in 2018 began deploying FPGAs across other data center workloads for their Azure cloud computing platform.[19]

Timelines

Gates

  • 1987: 9,000 gates, Xilinx[10]
  • 1992: 600,000, Naval Surface Warfare Center[7]
  • Early 2000s: Millions[15]
  • 2013: 50 Million, Xilinx[20]

Market size

  • 1985: First commercial FPGA: Xilinx XC2064[9][10]
  • 1987: $14 million[10]
  • ≈1993: >$385 million[10]
  • 2005: $1.9 billion[21]
  • 2010 estimates: $2.75 billion[21]
  • 2013: $5.4 billion[22]
  • 2020 estimate: $9.8 billion[22]

Design starts

A design start is a new custom design for implementation on an FPGA.

Comparisons

To ASICs

Historically, FPGAs have been slower, less energy efficient and generally achieved less functionality than their fixed ASIC counterparts. An older study[when?] showed that designs implemented on FPGAs need on average 40 times as much area, draw 12 times as much dynamic power, and run at one third the speed of corresponding ASIC implementations.[citation needed]

More recently, FPGAs such as the Xilinx Virtex-7 or the Altera Stratix 5 have come to rival corresponding ASIC and ASSP ("Application-specific standard part", such as a standalone USB interface chip[25]) solutions by providing significantly reduced power usage, increased speed, lower materials cost, minimal implementation real-estate, and increased possibilities for re-configuration 'on-the-fly'. Where previously[when?] a design may have included 6 to 10 ASICs, the same design can now be achieved using only one FPGA.[26]

Advantages of FPGAs include the ability to re-program when already deployed (i.e. "in the field") to fix bugs, and often include shorter time to market and lower non-recurring engineering costs. Vendors can also take a middle road via FPGA prototyping: developing their prototype hardware on FPGAs, but manufacture their final version as an ASIC so that it can no longer be modified after the design has been committed.

Trends

Xilinx claimed that several market and technology dynamics are changing the ASIC/FPGA paradigm as of February 2009:[27]

  • Integrated circuit development costs were rising aggressively[citation needed]
  • ASIC complexity has lengthened development time
  • R&D resources and headcount were decreasing[why?]
  • Revenue losses for slow time-to-market were increasing[why?]
  • Financial constraints in a poor economy were driving low-cost technologies.[needs update]

These trends make FPGAs a better alternative than ASICs for a larger number of higher-volume applications than they have been historically used for, to which the company attributes the growing number of FPGA design starts (see § History).[27]

Some FPGAs have the capability of partial re-configuration that lets one portion of the device be re-programmed while other portions continue running.[28][29]

Complex Programmable Logic Devices (CPLD)

The primary differences between complex programmable logic devices (CPLDs) and FPGAs are architectural. A CPLD has a comparatively restrictive structure consisting of one or more programmable sum-of-products logic arrays feeding a relatively small number of clocked registers. As a result, CPLDs are less flexible, but have the advantage of more predictable timing delays and a higher logic-to-interconnect ratio.[citation needed] FPGA architectures, on the other hand, are dominated by interconnect. This makes them far more flexible (in terms of the range of designs that are practical for implementation on them) but also far more complex to design for, or at least requiring more complex electronic design automation (EDA) software.

In practice, the distinction between FPGAs and CPLDs is often one of size as FPGAs are usually much larger in terms of resources than CPLDs. Typically only FPGAs contain more complex embedded functions such as adders, multipliers, memory, and serializer/deserializers. Another common distinction is that CPLDs contain embedded flash memory to store their configuration while FPGAs usually require external non-volatile memory (but not always).

When a design requires simple instant-on operation (logic already configured at power-up), CPLDs are generally preferred. For most other applications FPGAs are generally preferred. Sometimes both CPLDs and FPGAs are used in a single system design: the CPLD generally performs glue-logic functions and is responsible for “booting” the FPGA as well as controlling the reset and boot sequence of the complete circuit board. Depending on the application, it may therefore be judicious to use both FPGAs and CPLDs in a single design.[30]

Security considerations

FPGAs have both advantages and disadvantages as compared to ASICs or secure microprocessors, concerning hardware security. FPGAs' flexibility makes malicious modifications during fabrication a lower risk.[31] For many FPGAs, the design bitstream was historically exposed while the FPGA loaded it from external memory (typically on every power-on). All major FPGA vendors now offer a spectrum of security solutions to designers, such as bitstream encryption and authentication. For example, Altera and Xilinx offer AES encryption (up to 256-bit) for bitstreams stored in external flash memory.
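
As a generic illustration of the bitstream-authentication idea only (real devices verify the bitstream in dedicated on-chip hardware, and this is not any vendor's actual scheme), here is a minimal Python sketch; the key handling and file contents are placeholders.

    # Generic bitstream authentication sketch using HMAC-SHA256 from the Python
    # standard library. It only illustrates the concept; real FPGAs verify the
    # bitstream in dedicated on-chip logic, and the key shown here is a
    # placeholder, not a recommendation for key storage.
    import hmac, hashlib

    def tag_bitstream(bitstream: bytes, key: bytes) -> bytes:
        """Append an HMAC-SHA256 tag to the bitstream."""
        return bitstream + hmac.new(key, bitstream, hashlib.sha256).digest()

    def verify_bitstream(tagged: bytes, key: bytes) -> bytes:
        """Return the bitstream if the tag checks out, else raise ValueError."""
        body, tag = tagged[:-32], tagged[-32:]
        expected = hmac.new(key, body, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("bitstream failed authentication; refusing to configure")
        return body

    key = b"device-unique-key-placeholder"
    tagged = tag_bitstream(b"\x00\x01\x02 configuration frames ...", key)
    assert verify_bitstream(tagged, key).startswith(b"\x00\x01\x02")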

FPGAs that store their configuration internally in nonvolatile flash memory, such as Microsemi's ProAsic 3 or Lattice's XP2 programmable devices, do not expose the bitstream and do not need encryption. In addition, flash memory for a lookup table provides single event upset protection for space applications.[clarification needed] Customers wanting a higher guarantee of tamper resistance can use write-once, antifuse FPGAs from vendors such as Microsemi.

With its Stratix 10 FPGAs and SoCs, Altera introduced a Secure Device Manager and physically uncloneable functions to provide high levels of protection against physical attacks.[32]

In 2012, researchers Sergei Skorobogatov and Christopher Woods demonstrated that FPGAs can be vulnerable to hostile intent. They discovered that a critical backdoor vulnerability had been manufactured into the silicon of the Actel/Microsemi ProAsic 3, making it vulnerable on many levels, such as reprogramming of crypto and access keys, access to the unencrypted bitstream, modification of low-level silicon features, and extraction of configuration data.[33]

Applications

An FPGA can be used to solve any problem which is computable. This is trivially proven by the fact that FPGAs can be used to implement a soft microprocessor, such as the Xilinx MicroBlaze or Altera Nios II. Their advantage lies in that they are significantly faster for some applications because of their parallel nature and their efficiency in terms of the number of gates used for a given process.[34]

FPGAs originally began as competitors to CPLDs to implement glue logic for printed circuit boards. As their size, capabilities, and speed increased, FPGAs took over additional functions to the point where some are now marketed as full systems on chips (SoCs). Particularly with the introduction of dedicated multipliers into FPGA architectures in the late 1990s, applications which had traditionally been the sole reserve of digital signal processor hardware (DSPs) began to incorporate FPGAs instead.[35][36]

Another trend in the use of FPGAs is hardware acceleration, where one can use the FPGA to accelerate certain parts of an algorithm and share part of the computation between the FPGA and a generic processor.[2] The search engine Bing is noted for adopting FPGA acceleration for its search algorithm in 2014.[37] As of 2018, FPGAs are seeing increased use as AI accelerators including Microsoft's so-termed "Project Catapult"[19] and for accelerating artificial neural networks for machine learning applications.

Traditionally,[when?] FPGAs have been reserved for specific vertical applications where the volume of production is small. For these low-volume applications, the premium that companies pay in hardware cost per unit for a programmable chip is more affordable than the development resources spent on creating an ASIC. As of 2017, new cost and performance dynamics have broadened the range of viable applications.

Common applications

Architecture

Logic blocks

Simplified example illustration of a logic cell (LUT – Lookup table, FA – Full adder, DFF – D-type flip-flop)

The most common FPGA architecture consists of an array of logic blocks,[note 1] I/O pads, and routing channels.[1] Generally, all the routing channels have the same width (number of wires). Multiple I/O pads may fit into the height of one row or the width of one column in the array.

An application circuit must be mapped into an FPGA with adequate resources. While the number of CLBs/LABs and I/Os required is easily determined from the design, the number of routing tracks needed may vary considerably even among designs with the same amount of logic.[note 2]

For example, a crossbar switch requires much more routing than a systolic array with the same gate count. Since unused routing tracks increase the cost (and decrease the performance) of the part without providing any benefit, FPGA manufacturers try to provide just enough tracks so that most designs that will fit in terms of lookup tables (LUTs) and I/Os can be routed.[note 2] This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs. As of 2018, network-on-chip architectures for routing and interconnection are being developed.
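
One common way of forming such estimates is Rent's rule, an empirical relation T = t * g^p between the number of external terminals T of a block and the number of logic elements g it contains. The sketch below uses illustrative coefficients, not values cited by the article.

    # Rent's rule: T = t * g**p, an empirical relation between the external
    # terminal count T of a block and the number of logic elements g inside it.
    # The coefficient t and exponent p below are illustrative assumptions; real
    # values are fitted per design style (a crossbar-like design has a higher p
    # than a systolic array, matching the routing comparison in the text).
    def rent_terminals(gates, t=4.0, p=0.6):
        return t * gates ** p

    for g in (10, 100, 1_000, 10_000):
        print(f"{g:>6} logic elements -> roughly {rent_terminals(g):6.0f} terminals")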

In general, a logic block consists of a few logical cells (called ALM, LE, slice, etc.). A typical cell consists of a 4-input LUT[timeframe?], a full adder (FA) and a D-type flip-flop, as shown above. In the figure, the LUT is split into two 3-input LUTs. In normal mode, these are combined into a 4-input LUT through the left multiplexer (mux). In arithmetic mode, their outputs are fed to the adder. The mode is selected by programming the middle mux. The output can be either synchronous or asynchronous, depending on the programming of the mux on the right in the figure. In practice, all or part of the adder is stored as functions in the LUTs in order to save space.[40][41][42]
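
A behavioural sketch of such a cell, written in Python rather than an HDL purely for illustration; the LUT bit ordering and the multiplexer wiring are simplified assumptions, not any vendor's actual cell.

    # Behavioural model of the logic cell described above: two 3-input LUTs that
    # either combine into one 4-input LUT (normal mode) or drive a full adder
    # (arithmetic mode), with an optional D flip-flop on the output.
    class LogicCell:
        def __init__(self, lut_lo, lut_hi, arithmetic=False, registered=False):
            self.lut_lo = lut_lo          # 8-entry truth table on inputs a, b, c
            self.lut_hi = lut_hi          # 8-entry truth table on inputs a, b, c
            self.arithmetic = arithmetic  # mode select (the "middle mux")
            self.registered = registered  # output select (the "right-hand mux")
            self.ff = 0                   # state of the D flip-flop

        def evaluate(self, a, b, c, d, carry_in=0):
            idx = a | (b << 1) | (c << 2)
            lo, hi = self.lut_lo[idx], self.lut_hi[idx]
            if self.arithmetic:
                # Full adder built from the two LUT outputs and the incoming carry.
                total = lo + hi + carry_in
                comb, carry_out = total & 1, total >> 1
            else:
                # Normal mode: input d selects between the two 3-input LUTs,
                # forming a single 4-input LUT.
                comb, carry_out = (hi if d else lo), carry_in
            return comb, carry_out

        def clock_edge(self, a, b, c, d, carry_in=0):
            comb, carry_out = self.evaluate(a, b, c, d, carry_in)
            self.ff = comb                 # capture on the rising clock edge
            return (self.ff if self.registered else comb), carry_out

    # Example: configure the cell as a 4-input AND gate in normal mode.
    and_lo = [0] * 8            # with d = 0 the output is always 0
    and_hi = [0] * 7 + [1]      # with d = 1, output 1 only when a = b = c = 1
    cell = LogicCell(and_lo, and_hi)
    assert cell.evaluate(1, 1, 1, 1)[0] == 1
    assert cell.evaluate(1, 0, 1, 1)[0] == 0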

Hard blocks

Modern FPGA families expand upon the above capabilities to include higher level functionality fixed in silicon. Having these common functions embedded in the circuit reduces the area required and gives those functions increased speed compared to building them from logical primitives. Examples of these include multipliers, generic DSP blocks, embedded processors, high speed I/O logic and embedded memories.

Higher-end FPGAs can contain high speed multi-gigabit transceivers and hard IP cores such as processor cores, Ethernet medium access control units, PCI/PCI Express controllers, and external memory controllers. These cores exist alongside the programmable fabric, but they are built out of transistors instead of LUTs so they have ASIC-level performance and power consumption without consuming a significant amount of fabric resources, leaving more of the fabric free for the application-specific logic. The multi-gigabit transceivers also contain high performance analog input and output circuitry along with high-speed serializers and deserializers, components which cannot be built out of LUTs. Higher-level PHY[definition needed] layer functionality such as line coding may or may not be implemented alongside the serializers and deserializers in hard logic, depending on the FPGA.

Clocking

Most of the circuitry built inside of an FPGA is synchronous circuitry that requires a clock signal. FPGAs contain dedicated global and regional routing networks for clock and reset so they can be delivered with minimal skew. Also, FPGAs generally contain analog phase-locked loop and/or delay-locked loop components to synthesize new clock frequencies as well as attenuate jitter. Complex designs can use multiple clocks with different frequency and phase relationships, each forming separate clock domains. These clock signals can be generated locally by an oscillator or they can be recovered from a high speed serial data stream. Care must be taken when building clock domain crossing circuitry to avoid metastability. FPGAs generally contain block RAMs that are capable of working as dual-port RAMs with different clocks, aiding in the construction of FIFOs and dual-port buffers that connect differing clock domains.
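
A toy model of the common two-flip-flop synchronizer used at a clock domain crossing, stepped only on destination-clock edges; it deliberately ignores the metastability window, which is precisely the hazard a real design must account for.

    # Toy model of a two-flip-flop synchronizer at a clock domain crossing.
    # It is stepped only on destination-clock edges and ignores metastability
    # entirely; this is a simplification for illustration, not a design guide.
    class TwoFlopSynchronizer:
        def __init__(self):
            self.ff1 = 0  # first (metastability-hardening) flip-flop
            self.ff2 = 0  # second flip-flop; its output feeds the destination domain

        def dest_clock_edge(self, async_in):
            # On each destination clock edge the value advances one stage, so a
            # change on async_in reaches the output on the second edge after it.
            self.ff2, self.ff1 = self.ff1, async_in
            return self.ff2

    sync = TwoFlopSynchronizer()
    signal_from_other_domain = 1
    print([sync.dest_clock_edge(signal_from_other_domain) for _ in range(3)])
    # prints [0, 1, 1]: the new value appears after two destination clock edges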

3D architectures

To shrink the size and power consumption of FPGAs, vendors such as Tabula and Xilinx have introduced 3D or stacked architectures.[43][44] Following the introduction of its 28 nm 7-series FPGAs, Xilinx said that several of the highest-density parts in those FPGA product lines will be constructed using multiple dies in one package, employing technology developed for 3D construction and stacked-die assemblies.

Xilinx's approach stacks several (three or four) active FPGA dies side-by-side on a silicon interposer – a single piece of silicon that carries passive interconnect.[44][45] The multi-die construction also allows different parts of the FPGA to be created with different process technologies, as the process requirements are different between the FPGA fabric itself and the very high speed 28 Gbit/s serial transceivers. An FPGA built in this way is called a heterogeneous FPGA.[46]

Altera's heterogeneous approach involves using a single monolithic FPGA die and connecting other die/technologies to the FPGA using Intel's embedded multi-die interconnect bridge (EMIB) technology.[47]

Design and programming

To define the behavior of the FPGA, the user provides a design in a hardware description language (HDL) or as a schematic design. The HDL form is more suited to work with large structures because it's possible to specify high-level functional behavior rather than drawing every piece by hand. However, schematic entry can allow for easier visualization of a design and its component modules.

Using an electronic design automation tool, a technology-mapped netlist is generated. The netlist can then be fit to the actual FPGA architecture using a process called place-and-route, usually performed by the FPGA company's proprietary place-and-route software. The user will validate the map, place and route results via timing analysis, simulation, and other verification and validation methodologies. Once the design and validation process is complete, the binary file generated, typically using the FPGA vendor's proprietary software, is used to (re-)configure the FPGA. This file is transferred to the FPGA/CPLD via a serial interface (JTAG) or to an external memory device like an EEPROM.
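
At the scale of a single LUT, technology mapping amounts to turning a Boolean expression into the truth-table contents that the tools load into that LUT. The sketch below, with an assumed bit-ordering convention for what is often called the INIT value, is purely illustrative of that step, not of any vendor's flow.

    # Illustration of one tiny piece of technology mapping: deriving the
    # truth-table contents ("INIT" value) of a 4-input LUT from a Boolean
    # function. The bit-ordering convention here is an assumption; real tools
    # also perform decomposition, packing, placement, and routing.
    def lut4_init(func):
        """Return the 16-bit truth table of func(a, b, c, d) as an integer."""
        init = 0
        for index in range(16):
            a = (index >> 0) & 1
            b = (index >> 1) & 1
            c = (index >> 2) & 1
            d = (index >> 3) & 1
            if func(a, b, c, d):
                init |= 1 << index
        return init

    # Map the function (a AND b) XOR (c OR d) onto a single 4-input LUT.
    init = lut4_init(lambda a, b, c, d: (a & b) ^ (c | d))
    print(f"INIT = 0x{init:04X}")   # prints INIT = 0x7778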

The most common HDLs are VHDL and Verilog as well as extensions such as SystemVerilog. However, in an attempt to reduce the complexity of designing in HDLs, which have been compared to the equivalent of assembly languages, there are moves[by whom?] to raise the abstraction level through the introduction of alternative languages.[when?] National Instruments' LabVIEW graphical programming language (sometimes referred to as "G") has an FPGA add-in module available to target and program FPGA hardware.

To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly called intellectual property (IP) cores, and are available from FPGA vendors and third-party IP suppliers. They are rarely free, and typically released under proprietary licenses. Other predefined circuits are available from developer communities such as OpenCores (typically released under free and open source licenses such as the GPL, BSD or similar license), and other sources. Such designs are known as "open-source hardware."

In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially the RTL description in VHDL or Verilog is simulated by creating test benches to simulate the system and observe results. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-level description where simulation is repeated to confirm the synthesis proceeded without errors. Finally the design is laid out in the FPGA at which point propagation delays can be added and the simulation run again with these values back-annotated onto the netlist.

More recently, OpenCL (Open Computing Language) is being used by programmers to take advantage of the performance and power efficiencies that FPGAs provide. OpenCL allows programmers to develop code in the C programming language and target FPGA functions as OpenCL kernels using OpenCL constructs.[48] For further information, see high-level synthesis and C to HDL.
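
A minimal host-plus-kernel sketch of the OpenCL model, using the third-party pyopencl and numpy packages on a generic OpenCL runtime; FPGA OpenCL flows additionally compile the kernel offline into a bitstream with the vendor's tools, which is not shown here.

    # Minimal OpenCL host/kernel example (vector addition). Requires the
    # third-party pyopencl and numpy packages and any available OpenCL device;
    # an FPGA flow would compile the kernel string offline with vendor tools.
    import numpy as np
    import pyopencl as cl

    KERNEL_SRC = """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    program = cl.Program(ctx, KERNEL_SRC).build()

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)
    out = np.empty_like(a)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

    program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)  # enqueue the kernel
    cl.enqueue_copy(queue, out, out_buf)                       # read back the result
    assert np.allclose(out, a + b)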

Basic process technology types

  • SRAM – based on static memory technology. In-system programmable and re-programmable. Requires external boot devices. CMOS. Currently in use.[when?] Notably, flash memory or EEPROM devices may often load contents into internal SRAM that controls routing and logic.
  • Fuse – One-time programmable. Bipolar. Obsolete.
  • Antifuse – One-time programmable. CMOS.
  • PROM – Programmable Read-Only Memory technology. One-time programmable because of plastic packaging. Obsolete.
  • EPROM – Erasable Programmable Read-Only Memory technology. One-time programmable but with window, can be erased with ultraviolet (UV) light. CMOS. Obsolete.
  • EEPROM – Electrically Erasable Programmable Read-Only Memory technology. Can be erased, even in plastic packages. Some but not all EEPROM devices can be in-system programmed. CMOS.
  • Flash – Flash-erase EPROM technology. Can be erased, even in plastic packages. Some but not all flash devices can be in-system programmed. Usually, a flash cell is smaller than an equivalent EEPROM cell and is therefore less expensive to manufacture. CMOS.

Major manufacturers

In 2016, long-time industry rivals Xilinx and Altera (now an Intel subsidiary) were the FPGA market leaders.[49] At that time, they controlled nearly 90 percent of the market.

Both Xilinx and Altera[note 3] provide proprietary electronic design automation software for Windows and Linux (ISE/Vivado and Quartus) which enables engineers to design, analyze, simulate, and synthesize (compile) their designs.[50][51]

Other manufacturers include:

In March 2010, Tabula announced their FPGA technology, which uses time-multiplexed logic and interconnect and claims potential cost savings for high-density applications.[55] On March 24, 2015, Tabula officially shut down.[56]

On June 1, 2015, Intel announced it would acquire Altera for approximately $16.7 billion and completed the acquisition on December 30, 2015.[57]

See also

Notes

  1. ^ Called configurable logic block (CLB) or logic array block (LAB), depending on vendor
  2. ^ a b For more information, see routing in electronic design automation, as part of the place and route step of integrated circuit manufacturing.
  3. ^ now Intel

References

  1. ^ a b c "FPGA Architecture for the Challenge". toronto.edu. University of Toronto.
  2. ^ a b c "A Survey of FPGA-based Accelerators for Convolutional Neural Networks", S. Mittal, NCAA, 2018
  3. ^ Wisniewski, Remigiusz (2009). Synthesis of compositional microprogram control units for programmable devices. Zielona Góra: University of Zielona Góra. p. 153. ISBN 978-83-7481-293-1.
  4. ^ "FPGA Signal Integrity tutorial". altium.com.
  5. ^ NASA: FPGA drive strength Archived 2010-12-05 at the Wayback Machine
  6. ^ Mike Thompson. "Mixed-signal FPGAs provide GREEN POWER". EE Times, 2007-07-02.
  7. ^ a b c "History of FPGAs". Archived from the original on April 12, 2007. Retrieved 2013-07-11.
  8. ^ "In the Beginning". altera.com. 21 April 2015.
  9. ^ a b "XCELL issue 32" (PDF). Xilinx.
  10. ^ a b c d e f Funding Universe. “Xilinx, Inc.” Retrieved January 15, 2009.
  11. ^ Clive Maxfield, Programmable Logic DesignLine, "Xilinx unveil revolutionary 65nm FPGA architecture: the Virtex-5 family". May 15, 2006. Retrieved February 5, 2009.
  12. ^ Press Release, "Xilinx Co-Founder Ross Freeman Honored as 2009 National Inventors Hall of Fame Inductee for Invention of FPGA Archived 2016-10-06 at the Wayback Machine"
  13. ^ US 4870302, Freeman, Ross H., "Configurable electrical circuit having configurable logic elements and configurable interconnects", published 19 February 1988, issued 26 September 1989 
  14. ^ "Top FPGA Companies For 2013". sourcetech411.com. 2013-04-28.
  15. ^ a b Maxfield, Clive (2004). The Design Warrior's Guide to FPGAs: Devices, Tools and Flows. Elsevier. p. 4. ISBN 978-0-7506-7604-5.
  16. ^ "Xilinx Inc, Form 8-K, Current Report, Filing Date Oct 19, 2011". secdatabase.com. Retrieved May 6, 2018.
  17. ^ "Xilinx Inc, Form 10-K, Annual Report, Filing Date May 31, 2011". secdatabase.com. Retrieved May 6, 2018.
  18. ^ "Microsoft Supercharges Bing Search With Programmable Chips". WIRED. 16 June 2014.
  19. ^ a b "Project Catapult". Microsoft Research. July 2018.
  20. ^ Maxfield, Max. "Xilinx UltraScale FPGA Offers 50 Million Equivalent ASIC Gates". www.eetimes.com. EE Times.
  21. ^ a b Dylan McGrath, EE Times, "FPGA Market to Pass $2.7 Billion by '10, In-Stat Says". May 24, 2006. Retrieved February 5, 2009.
  22. ^ a b "Global FPGA Market Analysis And Segment Forecasts To 2020 – FPGA Industry, Outlook, Size, Application, Product, Share, Growth Prospects, Key Opportunities, Dynamics, Trends, Analysis, FPGA Report – Grand View Research Inc". grandviewresearch.com.
  23. ^ Dylan McGrath, EE Times, "Gartner Dataquest Analyst Gives ASIC, FPGA Markets Clean Bill of Health". June 13, 2005. Retrieved February 5, 2009.
  24. ^ "Virtex-4 Family Overview" (PDF). xilinx.com. Retrieved 14 April 2018.
  25. ^ "ASIC, ASSP, SoC, FPGA – What's the Difference?". eetimes.com.
  26. ^ Kuon, Ian; Rose, Jonathan (2006). "Measuring the gap between FPGAs and ASICs". Proceedings of the international symposium on Field programmable gate arrays – FPGA'06 (PDF). New York, NY: ACM. pp. 21–30. doi:10.1145/1117201.1117205. ISBN 1-59593-292-5.
  27. ^ a b Tim Erjavec, White Paper, "Introducing the Xilinx Targeted Design Platform: Fulfilling the Programmable Imperative Archived 2009-02-06 at the Wayback Machine." February 2, 2009. Retrieved February 2, 2009
  28. ^ "AN 818: Static Update Partial Reconfiguration Tutorial: for Intel Stratix 10 GX FPGA Development Board". www.intel.com. Retrieved 2018-12-01.
  29. ^ "Can FPGAs dynamically modify their logic?". Electrical Engineering Stack Exchange. Retrieved 2018-12-01.
  30. ^ "CPLD vs FPGA: Differences between them and which one to use? – Numato Lab Help Center". numato.com.
  31. ^ Huffmire Paper "Managing Security in FPGA-Based Embedded Systems." Nov–Dec 2008. Retrieved Sept 22, 2009
  32. ^ "EETimes on PUF: Security features for non-security experts – Intrinsic ID". Intrinsic ID. 2015-06-09.
  33. ^ Skorobogatov, Sergei; Woods, Christopher (2012). Breakthrough Silicon Scanning Discovers Backdoor in Military Chip. Digital Object Identifier: 10.1007/978-3-642-33027-8_2. Lecture Notes in Computer Science. 7428. pp. 23–40. doi:10.1007/978-3-642-33027-8_2. ISBN 978-3-642-33026-1.
  34. ^ "Xilinx Inc, Form 8-K, Current Report, Filing Date Apr 26, 2006". secdatabase.com. Retrieved May 6, 2018.
  35. ^ "Publications and Presentations". bdti.com. Archived from the original on 2010-08-21. Retrieved 2018-11-02.
  36. ^ LaPedus, Mark. "Xilinx aims 65-nm FPGAs at DSP applications". EETimes.
  37. ^ Morgan, Timothy Pricket (2014-09-03). "How Microsoft Is Using FPGAs To Speed Up Bing Search". Enterprise Tech. Retrieved 2018-09-18.
  38. ^ "FPGA development devices for radiation-hardened space applications introduced by Microsemi". www.militaryaerospace.com. Retrieved 2018-11-02.
  39. ^ a b "CrypTech: Building Transparency into Cryptography t" (PDF).
  40. ^ 2. CycloneII Architecture. Altera. February 2007
  41. ^ "Documentation: Stratix IV Devices" (PDF). Altera.com. 2008-06-11. Retrieved 2013-05-01.
  42. ^ Virtex-4 FPGA User Guide (December 1st, 2008). Xilinx, Inc.
  43. ^ Dean Takahashi, VentureBeat. "Intel connection helped chip startup Tabula raise $108M." May 2, 2011. Retrieved May 13, 2011.
  44. ^ a b Lawrence Latif, The Inquirer. "FPGA manufacturer claims to beat Moore's Law." October 27, 2010. Retrieved May 12, 2011.
  45. ^ EDN Europe. "Xilinx adopts stacked-die 3D packaging." November 1, 2010. Retrieved May 12, 2011.
  46. ^ Saban, Kirk (December 11, 2012). "Xilinx Stacked Silicon Interconnect Technology Delivers Breakthrough FPGA Capacity, Bandwidth, and Power Efficiency" (PDF). xilinx.com. Retrieved 2018-11-30.
  47. ^ "Intel Custom Foundry EMIB". Intel.
  48. ^ "Why use OpenCL on FPGAs?". StreamComputing. 2014-09-16.
  49. ^ Dillien, Paul (March 6, 2017). "And the Winner of Best FPGA of 2016 is..." EETimes. Retrieved September 7, 2017.
  50. ^ "Xilinx ISE Design Suite". www.xilinx.com. Retrieved 2018-12-01.
  51. ^ "FPGA Design Software - Intel® Quartus® Prime". Intel. Retrieved 2018-12-01.
  52. ^ "Top FPGA Companies For 2013". SourceTech411. 2013-04-28. Retrieved 2018-12-01.
  53. ^ "QuickLogic — Customizable Semiconductor Solutions for Mobile Devices". www.quicklogic.com. QuickLogic Corporation. Retrieved 2018-10-07.
  54. ^ "Achronix to Use Intel's 22nm Manufacturing". Intel Newsroom. 2010-11-01. Retrieved 2018-12-01.
  55. ^ "Tabula's Time Machine — Micro Processor Report" (PDF). Archived from the original (PDF) on 2011-04-10.
  56. ^ Tabula to shut down; 120 jobs lost at fabless chip company Silicon Valley Business Journal
  57. ^ "Intel to buy Altera for $16.7 billion in its biggest deal ever". Reuters. June 2015.

Further reading

  • Sadrozinski, Hartmut F.-W.; Wu, Jinyuan (2010). Applications of Field-Programmable Gate Arrays in Scientific Research. Taylor & Francis. ISBN 978-1-4398-4133-4.
  • Wirth, Niklaus (1995). Digital Circuit Design An Introduction Textbook. Springer. ISBN 978-3-540-58577-0.

External links