Figure 4 shows the structure of an eight-tap linear-phase FIR filter. For an odd-tap FIR filter, delete the shaded register and adder. To design filters with more than eight taps, for example a 12-tap FIR filter, the structure comprises two blocks: one block implements an eight-tap filter (using only four-input LUTs) and the other a three-tap filter (using only two-input LUTs).
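To see in software why the linear-phase symmetry halves the multiplier count, consider the following behavioral sketch in C. The coefficients are illustrative, not taken from the figure: samples that share a coefficient are pre-added, so the eight-tap filter needs only four products per output, the same pairing the LUT-based structure exploits. For an odd number of taps, the center sample has no partner, which is why the extra register and adder can be dropped.

/* Behavioral sketch of an 8-tap linear-phase FIR filter (hypothetical
 * coefficients). Because the impulse response is symmetric, samples that
 * share a coefficient are pre-added, so 8 taps need only 4 multiplies. */
#include <stdio.h>

#define TAPS 8

/* Symmetric impulse response: h[i] == h[TAPS-1-i]. Values are illustrative. */
static const int h[TAPS / 2] = { 1, 3, 5, 7 };

static int delay[TAPS];          /* tapped delay line */

int fir_step(int x)
{
    /* shift the delay line and insert the new sample */
    for (int i = TAPS - 1; i > 0; --i)
        delay[i] = delay[i - 1];
    delay[0] = x;

    /* pre-add symmetric taps, then multiply by the shared coefficient */
    int acc = 0;
    for (int i = 0; i < TAPS / 2; ++i)
        acc += h[i] * (delay[i] + delay[TAPS - 1 - i]);
    return acc;
}

int main(void)
{
    /* impulse input: the output reproduces the symmetric impulse response */
    for (int n = 0; n < TAPS; ++n)
        printf("%d ", fir_step(n == 0 ? 1 : 0));
    printf("\n");
    return 0;
}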

A traditional view

A typical edge-switch design consists of a number of network processors interconnected by a switch fabric (see Figure 1). Traffic typically enters the switch via an ingress processor, traverses the fabric, and exits from an egress processor.

Packets or cells arriving at the ingress ports are inspected by the processors to determine: (1) their intended destination; (2) the QoS they are to receive based on the traffic type or a service-level agreement (SLA); and (3) any local modification they may need, such as encapsulation, time-to-live (TTL) modification, or encryption/decryption.
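The three decisions above can be sketched in C as a single ingress step. The structure, lookups, and field names below are hypothetical placeholders, not the interface of any particular network processor:

/* Hypothetical sketch of the three ingress decisions described above:
 * destination lookup, QoS assignment, and local header modification. */
#include <stdint.h>
#include <stdbool.h>

struct packet {
    uint32_t dst_addr;
    uint8_t  ttl;
    uint8_t  traffic_class;   /* set by classification */
    uint16_t egress_port;     /* set by destination lookup */
};

/* (1) destination lookup: map an address to an egress port (stub table) */
static uint16_t lookup_egress(uint32_t dst_addr)
{
    return (uint16_t)(dst_addr & 0x0F);   /* placeholder hash */
}

/* (2) QoS: derive a class from the traffic type or SLA (stubbed) */
static uint8_t classify_qos(uint32_t dst_addr)
{
    return (dst_addr & 0x80000000u) ? 0 /* premium */ : 3 /* best effort */;
}

/* (3) local modification, here just a TTL decrement */
static bool modify_locally(struct packet *p)
{
    if (p->ttl == 0)
        return false;          /* drop expired packets */
    p->ttl--;
    return true;
}

bool ingress_process(struct packet *p)
{
    p->egress_port   = lookup_egress(p->dst_addr);
    p->traffic_class = classify_qos(p->dst_addr);
    return modify_locally(p);
}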

The QoS required for a particular flow of packets determines when each packet needs to be transmitted from the egress network processor. Ideally, the packets would be transported from the ingress to the egress processor without any incremental delay, and get queued at the egress processor until the traffic-shaping algorithms determine the appropriate time to forward them into the next segment of the network. This is termed the output-queuing model, since queuing occurs only at the outputs of the switch.
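The article does not say which shaping algorithm the egress processor runs; a token bucket is one common choice, and the hypothetical sketch below shows how such a shaper decides whether a queued packet may be forwarded yet:

/* Minimal token-bucket shaper sketch: one way an egress processor might
 * decide when a queued packet may be forwarded. Rates and field names
 * are hypothetical; the article does not specify the algorithm. */
#include <stdint.h>
#include <stdbool.h>

struct shaper {
    double rate_bps;     /* committed rate, bits per second */
    double burst_bits;   /* bucket depth */
    double tokens;       /* current fill level, in bits */
    double last_time;    /* time of the previous update, in seconds */
};

/* Refill the bucket for the elapsed time, then check whether a packet of
 * pkt_bits may be sent now. Returns true if it may be forwarded. */
bool shaper_may_send(struct shaper *s, double now, uint32_t pkt_bits)
{
    s->tokens += (now - s->last_time) * s->rate_bps;
    if (s->tokens > s->burst_bits)
        s->tokens = s->burst_bits;
    s->last_time = now;

    if (s->tokens >= pkt_bits) {
        s->tokens -= pkt_bits;
        return true;         /* forward the packet now */
    }
    return false;            /* keep it queued at the egress */
}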

An implication of this model is that all incoming packets must be delivered to their intended egress network processor without any delay at the ingress network processor, even if all the incoming packets are intended for the same egress network processor. Each egress processor must accept packets from the fabric not just at the port line rate, but also at the full aggregate bandwidth of the switch.
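As a hypothetical illustration of what that implies, consider a 16-port switch whose ports each run at 2.5 Gbps: under strict output queuing, every egress processor must be able to absorb bursts of up to 16 × 2.5 Gbps = 40 Gbps from the fabric, even though it drains its own port at only 2.5 Gbps.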

Sharing the load

The current generation of edge access switch fabrics employs a switching technique based on a shared-memory architecture. Developed when the capabilities of bus- and ring-based switching architectures were exceeded, shared-memory switches contain a global memory into or out of which each line card can write or read. This implementation fits the output-queuing model because all packets queued in the switch are accessible to any egress network processor as if they were in the processor's own local memory.
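A toy model in C makes the idea concrete: every ingress writes cells into one global pool, and each egress drains a queue of indices into that same pool, reading the data as if it were local. The sizes, names, and lack of overflow checks below are all simplifications, not a description of any real fabric:

/* Toy model of a shared-memory fabric: every ingress writes cells into one
 * global pool, and each egress drains its own queue of indices into that
 * pool. Sizes and field names are hypothetical. */
#include <stdint.h>
#include <string.h>

#define PORTS      16
#define POOL_SLOTS 1024
#define SLOT_BYTES 64        /* one cell per slot */
#define QUEUE_LEN  256

static uint8_t  pool[POOL_SLOTS][SLOT_BYTES];     /* the global memory */
static uint32_t free_head;                        /* naive allocator */

/* one FIFO of pool indices per egress port */
static uint32_t queue[PORTS][QUEUE_LEN];
static uint32_t q_head[PORTS], q_tail[PORTS];

/* Ingress side: copy the cell into shared memory, then enqueue its index
 * on the destination port's output queue (no overflow checks: a toy). */
void ingress_write(int egress_port, const uint8_t cell[SLOT_BYTES])
{
    uint32_t slot = free_head++ % POOL_SLOTS;
    memcpy(pool[slot], cell, SLOT_BYTES);
    queue[egress_port][q_tail[egress_port]++ % QUEUE_LEN] = slot;
}

/* Egress side: pop the next index and read the cell straight out of the
 * same global memory, as if it were local. Returns 0 if the queue is empty. */
int egress_read(int egress_port, uint8_t cell[SLOT_BYTES])
{
    if (q_head[egress_port] == q_tail[egress_port])
        return 0;
    uint32_t slot = queue[egress_port][q_head[egress_port]++ % QUEUE_LEN];
    memcpy(cell, pool[slot], SLOT_BYTES);
    return 1;
}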

The performance of such shared-memory devices is scaled by increasing the bandwidth of the global memory housed in the system. Consequently, scalability depends largely on the ability of the semiconductor industry to keep producing faster memory while still maintaining bus widths with reasonable IC pin counts.

However, memory improvements, which double roughly every 18 months, are not keeping pace with the growing bandwidth demands on the edge network. Each generation of edge switch, and there is a new one approximately every 18 months, must boost its capacity by a factor of four, while memory solutions typically improve performance only by a factor of two, not four. Some of the difference can be made up by using memory devices with wider interfaces, but once the memory width reaches the size of the cells or packets being transmitted, increased width is no longer helpful. Also, pin counts go up as buses get wider, to the point where packaging and layout become impractical. As a result, shared-memory switch fabrics currently won't scale beyond 20 Gbps of total line bandwidth.
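Rough, hypothetical numbers show the ceiling: a shared memory with a 256-bit bus clocked at 100 MHz provides about 25.6 Gbps of raw bandwidth, and since every packet must be written in once and read out once, the usable switching capacity is roughly half of that, about 12.8 Gbps. Doubling the bus to 512 bits helps only while the width stays below the smallest transfer unit; a 64-byte cell is exactly 512 bits, so widening further spends pins and board area on bandwidth that short packets cannot use.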

Soon thereafter, Future I/O was introduced to solve the same general class of interconnect issues. But the new spec came at it from a different angle. Future I/O was based on the principle of end-to-end data paths that share the same physical media using a prioritization scheme with features such as multispeed connections, virtual lanes to support quality of service, larger packet sizes, credit-based flow control and multicast capabilities.

Recognizing the need for new I/O interconnect devices, we began developing devices supporting the NGIO standard in early 1999. We were actually far along in the development process when we began to recognize the forces supporting Future I/O and many of the technical merits of this interconnect architecture.

Customers were being forced to choose between the two architectures, and we were faced with a dilemma on how to proceed with our development: Do we continue with our current path of NGIO only? Or do we change directions and work on a product for Future I/O as well?

QoS leads comms needs

We were working with a large developer of communications systems for which quality of service was vital. A switched-I/O fabric solution that could support multiple, independent classes of traffic on the same physical fabric was extremely powerful, particularly because these "virtual lanes" used independent, link-level, credit-based flow control. This prevents congestion in one class of traffic from creating "head-of-line" blocking that could impact another class.
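A minimal sketch in C, with hypothetical names and sizes, shows why independent per-lane credits have this property: exhausting the credits of one virtual lane stalls only that lane's traffic, while the other lanes keep transmitting.

/* Sketch of per-virtual-lane, credit-based flow control. Each lane tracks
 * the credits granted by the receiver independently, so running out of
 * credits on one lane (one traffic class) does not block the others.
 * All names and sizes are hypothetical. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_VLANES 4

struct vlane {
    uint32_t credits;   /* buffer units the receiver has granted */
};

static struct vlane lanes[NUM_VLANES];

/* Receiver returns credits for a lane as it frees buffer space. */
void vl_grant_credits(int vl, uint32_t units)
{
    lanes[vl].credits += units;
}

/* Transmitter may send on a lane only if that lane has enough credits.
 * A depleted lane simply waits; the other lanes keep transmitting. */
bool vl_try_send(int vl, uint32_t pkt_units)
{
    if (lanes[vl].credits < pkt_units)
        return false;            /* only this class of traffic stalls */
    lanes[vl].credits -= pkt_units;
    return true;
}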

These I/O fabric characteristics proved ideal for developing multiprotocol systems. In addition, the ability of the fabric itself to perform traffic shaping created some novel opportunities. On the other hand, the simpler NGIO standard offered an attractive price/performance trade-off. Thus, both standards had strengths and weaknesses.
