Tkool Electronics




Earlier generations of sensors reported their data via basic analog or digital connections. These days they’re becoming Ethernet-aware, with all of the accompanying advantages. Their use of universal Ethernet communications protocols enables them to work with off-the-shelf technology, and their ability to communicate via Ethernet means that they can be placed just about anywhere.

But Ethernet technology has its roots in safe, climate-controlled office environments. The IT world thought in terms of structured cabling systems, mature protocol and transport standards, and hardware vendors who would provide standardized products with near-seamless interoperability. Who knew, back then, that networking would outgrow its tame, office-based beginnings and move out into the real world?


Today networks must function reliably in increasingly harsh terrain: on factory floors, on gas and oil pipelines, in industries ranging from mining to transportation. And sensors tend to live way out on the network periphery, where conditions are the very worst. Design engineers must know what to do about that.

Getting to the Ethernet

Connecting sensors to Ethernet can be problematic in real-world scenarios. The first issue is distance. Copper wire-based Ethernet has a practical range limitation of 100 meters. That’s adequate for a network in an office or a small building, but it won’t do the job when you need to monitor the turbines on a wind farm or the chlorine levels at a water treatment plant. To be useful outside the office, Ethernet must function at far greater ranges.


One answer is a device called an Ethernet extender (Fig. 1). Ethernet extenders use DSL technology to create a long-distance Ethernet bridge over virtually any available copper pair. There’s a drop in bandwidth as the range increases, but you can reliably extend Ethernet over thousands of feet while maintaining a quite serviceable connection rate of several Mbps. Better yet, Ethernet extenders give the system designer the freedom to use existing wiring infrastructure, such as any telephone cabling or legacy coaxial cable that may be present. As the labor and materials involved in cable installation are often the most expensive element in setting up a network, the flexibility provided by Ethernet extenders can represent an enormous savings.
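The planning rule described above can be sketched in a few lines of code. This is an illustrative helper of my own devising (the function name and return strings are not from the article); it simply applies the 100-meter copper limit and the extender alternative the text describes:

```python
# Illustrative sketch (not from the article): applying the 100 m
# copper-Ethernet rule of thumb when planning a remote sensor link.
COPPER_LIMIT_M = 100  # practical range of copper twisted-pair Ethernet

def link_plan(distance_m: float) -> str:
    """Suggest a transport for an Ethernet sensor link of a given length."""
    if distance_m <= COPPER_LIMIT_M:
        return "standard copper Ethernet"
    # Beyond 100 m, the article's options: a DSL-based Ethernet
    # extender over an existing copper pair, or fiber.
    return "Ethernet extender (DSL over copper pair) or fiber"

print(link_plan(80))    # an office run: standard copper Ethernet
print(link_plan(1500))  # a pipeline sensor: extender or fiber
```

In practice the decision also weighs available wiring (the extender’s big advantage is reusing telephone or coax pairs already in place) and the bandwidth the sensor actually needs at that distance.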


Figure 1. Ethernet extenders enable data to travel up to 6200 feet on copper wire

Fiber Optics

As Table 1 shows, the deeper the memory, the higher the sample rate the scope can maintain as you move to slower time/div settings. Maintaining a high sample rate is important because it allows the scope to function at its maximum capabilities. There is a wide range of memory depths available today in scopes with 5 GS/s sample rates, from 10,000 points (10 kpts) all the way up to 1,000,000,000 points (1 Gpts).

Deep memory is clearly beneficial when it comes to sample rate, but when would it not be advantageous? When it makes your oscilloscope so slow that it is no longer helpful in debugging a problem. Deep memory puts a larger strain on the system. Some scopes are set up to handle that well and remain responsive with a fast update rate; others attempt to make it a banner specification when it isn’t really usable and slows the update rate by orders of magnitude (see What is Update Rate? on page 3 for a discussion of update rate).

Let’s look at those same two scopes from above. At 20 ns/div (a fast time-base setting), both scopes are near their maximum update rates, and neither is using the full memory specified in its data sheet. But what happens at another time-base setting, like 400 ns/div? The MegaZoom-architecture oscilloscope automatically maximizes its memory depth to keep its sample rate maxed out; it behaves exactly as you would expect a deep-memory scope to behave (it keeps its sample rate at 5 GS/s and still has a fast update rate). The CPU-based-architecture scope is still using its default memory depth to keep the scope responsive, so it isn’t keeping its sample rate as high as it should (and still has a slower update rate). What happens if we adjust the memory depth to keep the sample rate high? You begin to see the trade-offs of a deep-memory scope that isn’t designed to handle deep memory: the sample rate is now at its maximum (5 GS/s), but the update rate is one-third that of the MegaZoom scope, and it only gets worse at slower time-base settings (e.g., at 4 µs/div the MegaZoom scope has an update rate 20 times faster than the CPU-based scope).
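The numbers in this comparison can be reproduced by turning the relationship around: how many points must a scope store to hold 5 GS/s across the whole capture window? An illustrative sketch (function name is mine; 10 horizontal divisions assumed), evaluated at the three settings cited above:

```python
# Sketch: memory required to sustain a given sample rate across the
# full acquisition window of (time/div x divisions). 10 divisions assumed.
def points_needed(time_per_div_s: float, rate_sps: float = 5e9,
                  divisions: int = 10) -> float:
    return rate_sps * time_per_div_s * divisions

# Settings cited in the text: 20 ns/div, 400 ns/div, 4 us/div.
for tpd in (20e-9, 400e-9, 4e-6):
    print(f"{tpd:.0e} s/div -> {points_needed(tpd):,.0f} points")
```

At 20 ns/div only about 1,000 points are needed, so both scopes coast; at 400 ns/div the requirement is already 20 kpts, beyond a 10 kpt default; and at 4 µs/div it reaches 200 kpts, which is where an architecture that chokes on deep records falls furthest behind.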

What makes one scope “designed” for deep memory while another has to default its memory to 10 kpts to remain responsive? A lot of it comes down to the oscilloscope architecture. In some scopes, the CPU system is an integral block in the oscilloscope architecture (a “CPU-based architecture”), so much so that it is actually the gating item in how fast the scope can process the information and display it on the screen. If the CPU system isn’t up to the task of handling deep-memory acquisition records, it will lengthen the time it takes to process and display the data, thereby lowering the update rate of the scope (sometimes dramatically). See Figure 1 for an example of this architecture.


