Network Emulators by GigaNet Systems


How To Choose The Right Network Emulator

Network Emulator - Software or Hardware

Network emulators (also called WAN emulators) are devices used to replicate real-world network conditions in a controlled lab environment. They are installed in-line in a test setup, so any variation in the emulator's performance appears as a variation in the performance of the Device Under Test (DUT). For this reason alone, it is critical that engineers and test personnel can rely on the emulator to apply any and all user-specified network conditions with precision.

There are two fundamentally different kinds of network emulators on the market today, with a significant difference in performance and, consequently, precision. In this discussion, we describe the differences between them so that users can make a truly informed decision when selecting the right emulator for their needs.


Software-based (or appliance-based) network emulators are essentially standard off-the-shelf (OTS) PCs with two Network Interface Cards (NICs) and emulation software running on the CPU. In fact, many of the offerings on the market today are simply rackmount or industrial PCs with new sheet metal to give them a different external appearance.

Hardware-based emulators, on the other hand, are designed and built from the ground up around dedicated logic, typically Field-Programmable Gate Arrays (FPGAs), that processes the traffic and applies the specified impairments. Put simply, dedicated hardware handles the traffic and impairments; the CPU is not involved in packet processing at all.

Note: Some vendors lead customers to believe that an appliance-based emulator is the same as a hardware-based emulator. This is not the case: an appliance-based emulator is, in essence, a software-based one.

This design difference has significant performance and functionality implications including:

Hardware-based (HW-based) network emulators provide guaranteed line-rate processing of data under all configurations. The rate of frame processing does not slow down as more impairments are added, as packet size decreases, or when multiple users share the system simultaneously.

With a software-based emulator, however, CPU resources are shared across all traffic on all ports. As Ethernet frames are received on a NIC, they are processed by the software and then scheduled for transmission on the mate port, and vice versa. While the CPU is acting as an emulator, it must also manage all the standard high-priority processes required by the operating system. These processes (e.g., servicing interrupts, managing PC hardware) take priority over the emulator functions and negatively affect performance and test results. Add in the actual application of impairments, and things get worse.

The problem becomes quite acute as the number of impairments and ports rises. Every increase in required functionality puts a greater strain on the multi-tasking CPU. With no corresponding increase in CPU processing power, it is very easy for the CPU to get overwhelmed by the requirements imposed on it by the emulation software - making actual performance (and results) impossible to predict or repeat. Since the traffic rate is a function of incoming frame sizes and the complexity of the specified configuration, it is easy to envision a scenario where the processing ability of the software-based emulator is exceeded. For example, if the user chose to corrupt packets and then re-compute the CRCs (so that corrupted packets are not simply dropped by the next device), the processing power required would bring a software-based emulator to its knees.
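To see why this workload is CPU-intensive, the corrupt-then-recompute step can be sketched in a few lines of Python. This is illustrative only: `zlib.crc32` implements the same CRC-32 polynomial that Ethernet's FCS uses, but the frame layout and FCS byte order here are assumptions, and a real emulator would do this per frame at line rate.

```python
import zlib

def corrupt_and_fix_fcs(frame: bytes, byte_index: int, bit: int) -> bytes:
    """Flip one bit in the frame, then recompute the Ethernet FCS so the
    corrupted frame is still accepted as well-formed.

    Illustrative sketch: assumes `frame` excludes preamble and FCS, and that
    the FCS is appended least-significant byte first (byte order assumed).
    """
    corrupted = bytearray(frame)
    corrupted[byte_index] ^= (1 << bit)           # inject the bit error
    fcs = zlib.crc32(bytes(corrupted))            # recompute CRC-32 over the frame
    return bytes(corrupted) + fcs.to_bytes(4, "little")

frame = bytes(range(60))                          # hypothetical 60-byte frame body
wire_frame = corrupt_and_fix_fcs(frame, 10, 3)    # corrupt byte 10, fix the FCS
```

Doing this in software for every frame, at every frame size, is exactly the kind of load that saturates a shared CPU; dedicated hardware computes the CRC as the bits stream through.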

So what does this all really mean? When the processing ability of the software-based emulator is exceeded, not only is the user likely unaware that it has happened, but they are also unaware that the emulator has started dropping frames. Thus, even when a test setup has configured no drops, it is possible that the emulator may be SILENTLY inducing drops. When one considers that even a 1% drop in TCP traffic typically results in a 50% lower throughput, it is easy to see the significance and severity of this problem. Engineering teams end up wasting valuable time and effort chasing problems (ghosts) that were simply created by the emulator itself. To complicate matters further, even if the average traffic rate is low, Ethernet's "bursty" behavior by itself could cause unintended (and unknown) frame drops on these underpowered appliance emulators.

It is for these reasons that most software/appliance-based emulator systems either do not list any processing-rate specifications or, if they do, qualify them with the word "Max". "Max" performance usually refers to configurations with no or minimal impairments - ideal conditions - which raises the question: "Aren't you trying to test under adverse conditions?" While comparing "Max." numbers between different emulators may provide some benefit, it does not provide a practical measure of actual performance.

GigaNet Systems guarantees that its network emulators will handle full line-rate traffic at all frame sizes and impairment conditions, regardless of the number of users or ports in operation. In fact, the GigaNet Systems multi-port design is such that each port pair is being processed by dedicated hardware. This ensures that one user CANNOT affect another.

The delay precision of a software-based emulator is usually very low (on the order of 0.1 ms). Since operating system tasks have higher priority than the emulation application running on the PC, it is practically impossible for software-based emulators to provide better specifications.

Additionally, bandwidth accuracy is highly dependent on precise control of the time between packets (the Inter-Packet Gap). For example, at 1G Ethernet speed a minimal 64-byte frame occupies 672 ns on the wire (including the preamble and minimum inter-frame gap), so to achieve a 20% line rate an emulator must space such frames 3,360 ns apart - roughly 2.7 µs of additional idle time after each frame. However, given that the delay precision of most software-based emulators is 0.1 milliseconds or worse, if the emulator waits 0.1 ms between frames, the achieved line rate would be roughly 0.67% instead of the user-specified 20%. Real networks, whose routers typically use ICs to process traffic (much like hardware-based emulators), show little variation in achieved bandwidth. The variation in the bandwidth applied by software-based emulators can easily skew results in an unpredictable manner.
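The inter-packet-gap arithmetic can be worked through in a few lines of Python. This is a minimal model, assuming the standard 8-byte preamble and 12-byte minimum inter-frame gap on 1G Ethernet:

```python
LINE_RATE_BPS = 1_000_000_000        # 1G Ethernet
OVERHEAD_BYTES = 8 + 12              # preamble + minimum inter-frame gap

def frame_slot_ns(frame_bytes: int) -> float:
    """Time a frame occupies on the wire, including preamble and IFG."""
    return (frame_bytes + OVERHEAD_BYTES) * 8 / LINE_RATE_BPS * 1e9

def extra_gap_ns(frame_bytes: int, target_fraction: float) -> float:
    """Additional idle time needed after each frame to hit the target rate."""
    slot = frame_slot_ns(frame_bytes)
    return slot / target_fraction - slot

def achieved_fraction(frame_bytes: int, wait_ns: float) -> float:
    """Line-rate fraction actually achieved with a fixed wait between frames."""
    slot = frame_slot_ns(frame_bytes)
    return slot / (slot + wait_ns)

slot = frame_slot_ns(64)                 # 672 ns for a minimal 64-byte frame
gap = extra_gap_ns(64, 0.20)             # ~2688 ns of extra idle for a 20% rate
coarse = achieved_fraction(64, 100_000)  # ~0.67% if the timer can only wait 0.1 ms
```

With a 12.4 ns-class timing resolution, by contrast, the same calculation lands within a fraction of a percent of the requested rate.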

A hardware-based emulator, on the other hand, has a specified precision for every setting. At GigaNet Systems, we guarantee that ALL emulation settings are applied faithfully to the traffic stream with no unintended consequences. For example, our delay (latency) precision is 12.4 ns (10G Ethernet) and our bandwidth control precision is 1 Kbps or better - regardless of the traffic or impairments applied.

With dedicated hardware, a hardware-based network emulator is able to guarantee its performance. With performance guarantees comes precision. With precision comes one of the most critical aspects of any test environment - repeatability.

Without repeatability, how does the development or validation engineer work to optimize their system's performance? As an example, let's say one algorithm is 10% better than another, yet there is a 15% variation in the applied impairment settings. Under this scenario, it is entirely possible that a perfectly good optimization is discarded as the test results may show worse performance due to the variation in test conditions.
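Simple interval arithmetic makes this point concrete. The throughput numbers below are hypothetical, chosen to match the 10%-improvement, 15%-variation scenario above:

```python
def measured_range(true_value: float, variation: float) -> tuple[float, float]:
    """Range of observable results when test conditions vary by +/- variation."""
    return true_value * (1 - variation), true_value * (1 + variation)

# Hypothetical: baseline algorithm truly achieves 100 Mb/s,
# the optimized one truly achieves 110 Mb/s (10% better).
baseline = measured_range(100.0, 0.15)    # observable: 85.0 .. 115.0 Mb/s
optimized = measured_range(110.0, 0.15)   # observable: 93.5 .. 126.5 Mb/s

# The ranges overlap, so a single run can rank the optimized algorithm WORSE:
ranges_overlap = optimized[0] < baseline[1]   # True
```

Because the observable ranges overlap, a single unlucky run can show the genuinely better algorithm losing, and a valid optimization gets discarded.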

Without repeatability, how do you troubleshoot or debug an issue? If a bug observed by the validation team can't be reproduced by the developer because the emulator's performance and applied impairments are not repeatable, the cost of lost productivity and delayed time to market could easily outweigh the savings of a software-based emulation solution - on that particular issue alone.

These are what we call "The Big Three" because, to us, it doesn't matter what feature or function you offer if you can't deliver on Performance, Precision, and Repeatability.

That's not to say that there aren't some additional features or differences between emulators. Some of the more important ones include:

With a GigaNet Systems network emulator, when settings are changed, the configuration switches on a packet boundary without creating any intermediate (unspecified) conditions. With other systems, development and validation engineers may not have complete control over when the impairments get applied: the changeover may vary from one instance to the next, or impairment conditions may unintentionally overlap. For example, a switch from Delay=10ms + Packet Drop=OFF to Delay=1ms + Packet Drop=ON could leave a few packets experiencing Delay=10ms + Packet Drop=ON, or Delay=1ms + Packet Drop=OFF. The user may not care about such subtle issues, but GigaNet's emulators are designed to execute the user's specified impairment conditions faithfully.
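Conceptually, a packet-boundary switchover behaves like a double-buffered configuration that the forwarding path snapshots once per packet. The Python sketch below is purely illustrative - the class and field names are invented, and real emulators implement this in hardware - but it shows why swapping the configuration as a single unit prevents any packet from seeing a mix of old and new impairments:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Impairments:
    delay_ms: float
    drop: bool

class Emulator:
    """Settings are swapped as one immutable unit, so no packet ever
    observes fields from two different configurations."""
    def __init__(self, config: Impairments):
        self._active = config

    def reconfigure(self, config: Impairments):
        self._active = config           # single reference swap = packet boundary

    def process_packet(self, pkt: bytes):
        cfg = self._active              # snapshot once; both fields are consistent
        if cfg.drop:
            return None                 # dropped under the new config
        return (cfg.delay_ms, pkt)      # (delay to apply, packet)

emu = Emulator(Impairments(delay_ms=10.0, drop=False))
before = emu.process_packet(b"a")       # (10.0, b'a')
emu.reconfigure(Impairments(delay_ms=1.0, drop=True))
after = emu.process_packet(b"b")        # None: the new config applied as a whole
```

If the delay and drop settings were instead updated one at a time, a packet processed between the two updates would see the unintended hybrid conditions described above.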

The use of dedicated components also ensures the system has enough resources to properly address some of the more processor intensive or advanced functions such as:

    • Corruption
    • Modification
    • UDP/TCP Checksum correction
    • Physical Layer Bit Errors (Software emulators cannot introduce bit errors after the data has been encoded - physical layer errors only occur on encoded data)
    • Loss of Signal (Fail-over)
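For reference, the UDP/TCP checksum correction mentioned above involves recomputing the Internet checksum defined in RFC 1071. A minimal sketch of that computation, using the worked example from the RFC, looks like this:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:                   # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

# Worked example from RFC 1071, section 3: words 0001 f203 f4f5 f6f7
checksum = internet_checksum(bytes([0x00, 0x01, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7]))
# checksum == 0x220D
```

An emulator that modifies UDP/TCP payloads must redo this sum (including the pseudo-header, omitted in this sketch) for every touched packet, which is another load that dedicated hardware absorbs at line rate.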

Support for Ethernet, Fiber Channel, and OTU - all on the same hardware.

Detailed real-time statistics for bi-directional ingress/egress traffic, impairments set and applied, as well as very detailed Layer 1 and Layer 2 stats (some more detailed than a typical analyzer provides).


As we've demonstrated here, there are significant differences between appliance-based (software-based) network emulators and those that are hardware-based. Some features are even unique to GigaNet's hardware-based emulators thanks to their hardware architecture.


For their part, software-based emulators do offer one advantage: cost. Since these software-based network or WAN emulators are essentially off-the-shelf PCs, they are typically priced lower than customized hardware-based solutions. Unfortunately, this lower price point comes with some serious limitations. For those performing basic or ad-hoc testing at low data rates, these limitations may be perfectly acceptable.

However, for those facing shrinking development cycles in an increasingly competitive marketplace, time-to-market and product robustness are critical. The significant gains in development and test productivity enabled by the repeatability, precision, and performance of a hardware-based network emulator will not only help guide the decision, but will also bring the actual total cost (product + engineering + opportunity) into much closer alignment than is initially obvious.

Which raises the question: "Why would you risk your team's budget, time, and effort on a network or WAN emulator if you can't trust the results?"

If you need precision, performance, and repeatability in your testing, then a hardware-based network emulator from GigaNet Systems is the right choice.

GigaNet Systems® is a registered trademark of GigaNet Systems Inc.
Copyright © 2012-2023 GigaNet Systems Inc. All rights reserved.