The operation is simple: all of the inputs to the CPLD are used to control a counter. If any input is active, the counter is held in reset. When all of the inputs go inactive, the counter counts until a user-defined length of time has passed. If all of the inputs remain inactive for that entire period, a signal is sent to disable a metal-oxide-semiconductor field-effect transistor (MOSFET), which shuts off power to the device. When any input goes active again, the internal counter is reset, power is reapplied, and the CPLD powers up (see Figure 7).
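
As a rough software model of this timeout behavior, the sketch below walks through the same logic in Python. The CPLD implements this in hardware; the timeout value, input polarity, and class name here are illustrative assumptions, not taken from the design.

# Minimal software model of the CPLD's inactivity-timeout behavior.
class PowerController:
    def __init__(self, timeout_ticks):
        self.timeout_ticks = timeout_ticks  # user-defined shutoff delay, in clock ticks
        self.counter = 0
        self.mosfet_enabled = True          # power currently applied to the device

    def tick(self, inputs):
        """Call once per clock tick with the current input levels (truthy = active)."""
        if any(inputs):
            # Any active input: hold the counter in reset and keep (or restore) power.
            self.counter = 0
            self.mosfet_enabled = True
        elif self.mosfet_enabled:
            # All inputs inactive: count up toward the user-defined timeout.
            self.counter += 1
            if self.counter >= self.timeout_ticks:
                self.mosfet_enabled = False  # disable the MOSFET, removing power

# Example: power is cut after five consecutive idle ticks and restored on activity.
ctrl = PowerController(timeout_ticks=5)
for level in [1, 0, 0, 0, 0, 0, 0, 1]:
    ctrl.tick([level])
    print(ctrl.counter, ctrl.mosfet_enabled)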

The ionosphere is responsible for long-distance communication in the high-frequency bands between 3 and 30 MHz. This propagation is strongly dependent on the time of day, the season, location on the earth, and the multiyear sunspot cycle. It makes possible long-range communication using very low-power transmitters. Most short-range communication applications that we deal with in this chapter use the VHF, UHF, and microwave bands, generally above 40 MHz. There are times when ionospheric reflection occurs at the low end of this range, and then sky-wave propagation can be responsible for interference from signals originating hundreds of kilometers away. In general, however, sky-wave propagation does not affect the short-range radio applications that we are interested in.

The most important propagation mechanism for short-range communication on the VHF and UHF bands is that which occurs in an open field, where the received signal is a vector sum of a direct line-of-sight signal and a signal from the same source that is reflected off the earth. Later we discuss the relationship between signal strength and range in line-of-sight and open field topographies.

The range of line-of-sight signals, when there are no reflections from the earth or ionosphere, is a function of the dispersion of the waves from the transmitter antenna. In this free-space case the signal strength decreases in inverse proportion to the distance away from the transmitter antenna. When the radiated power is known, the field strength is given by equation (5.1):
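
The original equation graphic is not included here, so the standard free-space field-strength relation is reconstructed from the definitions that follow:

E = \sqrt{30 \, P_t \, G_t} \; / \; d    (5.1)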

where Pt is the transmitted power, Gt is the antenna gain, and d is the distance. When Pt is in watts and d is in meters, E is in volts/meter. To find the power at the receiver (Pr) when the power into the transmitter antenna is known, use equation (5.2):
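
This is the Friis transmission equation; its original graphic is likewise not included, so its standard form is reconstructed from the variables described below:

P_r = P_t \, G_t \, G_r \left( \frac{\lambda}{4 \pi d} \right)^{2}    (5.2)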

Gt and Gr are the transmitter and receiver antenna gains, and λ is the wavelength.

Range can be calculated on this basis at high UHF and microwave frequencies when high-gain antennas are used, located many wavelengths above the ground. Signal strength between the earth and a satellite, and between satellites, also follows the inverse distance law, but this case isn't in the category of short-range communication! At microwave frequencies, signal strength is also reduced by atmospheric absorption caused by water vapor and other gases that constitute the air.
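
As a rough numerical illustration of such a calculation, the short Python sketch below applies equation (5.2) to estimate received power; the frequency, transmit power, antenna gains, and distance are arbitrary example values.

import math

def friis_received_power_dbm(pt_w, gt, gr, freq_hz, d_m):
    """Free-space received power from the Friis transmission equation, in dBm."""
    lam = 3e8 / freq_hz                           # wavelength in meters
    pr_w = pt_w * gt * gr * (lam / (4 * math.pi * d_m)) ** 2
    return 10 * math.log10(pr_w * 1000)           # watts -> dBm

# Example: 10 mW at 915 MHz into unity-gain antennas, 100 m apart: about -61.7 dBm.
print(friis_received_power_dbm(0.01, 1.0, 1.0, 915e6, 100.0))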

Open Field Propagation

Although the formulas in the previous section are useful in some circumstances, the actual range of a VHF or UHF signal is affected by reflections from the ground and surrounding objects. The path lengths of the reflected signals differ from that of the line-of-sight signal, so the receiver sees a combined signal with components having different amplitudes and phases.
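
A minimal sketch of this two-path combination for the simplest case, a flat earth with a single ground reflection and a reflection coefficient of approximately -1 near grazing incidence, follows; the antenna heights and frequency are arbitrary example values.

import cmath, math

def two_ray_field_factor(d, ht, hr, freq_hz):
    """Relative field strength of the direct ray plus a ground-reflected ray (flat earth)."""
    lam = 3e8 / freq_hz
    k = 2 * math.pi / lam                          # wavenumber
    d_direct = math.hypot(d, ht - hr)              # line-of-sight path length
    d_reflect = math.hypot(d, ht + hr)             # ground-reflected path length
    direct = cmath.exp(-1j * k * d_direct) / d_direct
    reflected = -cmath.exp(-1j * k * d_reflect) / d_reflect   # phase reversal at the ground
    return abs(direct + reflected)

# Example: 2 m high antennas at 433 MHz, at several separations.
for d in (10, 50, 100, 300):
    print(d, round(two_ray_field_factor(d, 2.0, 2.0, 433e6), 6))

At short range the two components alternately reinforce and cancel as the path-length difference changes; beyond a breakpoint distance the reflected ray nearly cancels the direct ray, and the combined signal falls off more rapidly than the free-space inverse-distance law.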

Surprisingly, the escalating challenges of high-density IC design are, in many ways, making the ASIC-versus-FPGA argument irrelevant. As ASIC designers migrate to each new process node, designs grow more complex, software content increases, and verification runtimes lengthen. Moreover, recent research indicates that over 60 percent of respun ASICs fail not because of issues with timing or power, but because of logical or functional errors. For this reason, functional verification has become the single most critical phase of the ASIC development cycle, and often the most time-consuming. An increasing number of ASIC designers find that they can best meet their requirements by prototyping the functional equivalents of their designs as FPGAs. In fact, more than 90 percent of all ASICs today are either partially or completely prototyped as FPGAs before tape-out. Thus the question is no longer whether to implement an IC design as an ASIC or as an FPGA. To meet the demands of today's markets, most design teams must do both.

Verification options

Given the critical need for first-pass silicon and the escalating possibilities for bugs as ASIC densities climb and design complexities increase, designers clearly need a verification methodology that can not only find all bugs in complex chip designs but also do so in reasonably short times. Traditional software simulation techniques can no longer support design teams that are racing to squeeze into tight time-to-market windows. Take a typical mobile-phone chipset design. Although RTL simulation offers a high level of visibility into the design, the low performance associated with software simulation means that booting the phone chipset could take as long as 30 days, making it infeasible and, therefore, significantly limiting the level and amount of verification possible. Hardware/software co-simulation approaches that use higher-level models can reduce the time required for this sort of OS boot to 10 days, but even that is still not very useful. Moreover, these approaches still require the development of complex testbenches, which, by their very nature, are always incomplete. A C-model simulation offers shorter runtimes, perhaps as short as 24 hours, but it cannot deliver the level of detail typically required by ASIC designers.

What ASIC designers need is a verification strategy that offers speeds approaching that of the ASIC itself. They need a methodology that leverages real-world stimulus rather than a testbench. They need a verification methodology that is affordable and easily deployable enough to be distributed across the whole design team for hardware and software debug. Furthermore, they need a verification strategy that can not only run operating system and application software at speed but also easily integrate external system components and interfaces.
