By John Rynearson, Technical Director, VITA
Data transfer speed is an important characteristic of any system. When a system designer considers a VMEbus solution, a natural question is how fast the bus can move data. Unfortunately, there is no single answer. The overall performance of a VMEbus system depends on the hardware, peripherals, and software chosen for the application. Since the VMEbus is an asynchronous (handshaking) bus, data transfers proceed at the rate of the slowest board in any transaction. Thus in a data transfer cycle between a fast CPU module and a slow I/O board, the transfer proceeds at the rate of the I/O board. Hence it is important to pay attention to the speed of all modules in a system, not just the CPU modules.
The often quoted figure for VMEbus data transfers is 40 Mbytes per second at 32 bits per transfer and 80 Mbytes per second at 64 bits per transfer. This estimate is based on the following. Address strobe (AS*) must be asserted for a minimum of 40 nanoseconds (ns) and must be deasserted between cycles for at least 40 ns, so a minimum assert/deassert cycle totals 80 ns. In addition, bus driver and bus receiver delay times must be taken into account, since we are looking at the data transfer rate from the point of view of the on-board device sending or receiving data. Assuming a 10 ns delay for the drivers and a 10 ns delay for the receivers adds another 20 ns. The total minimum cycle time is then 100 ns, or 10 megatransfers per second. At four bytes per transfer this yields 40 Mbytes per second; at eight bytes per transfer, 80 Mbytes per second.
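The arithmetic above can be checked with a short calculation. This is only a sketch of the estimate in the text; the 10 ns driver and receiver delays are the assumptions stated above, not specification minimums.

```python
# Theoretical VMEbus transfer-rate ceiling from the timing figures above.
AS_ASSERT_NS = 40       # minimum AS* assertion time
AS_DEASSERT_NS = 40     # minimum AS* deassertion between cycles
DRIVER_DELAY_NS = 10    # assumed bus driver delay
RECEIVER_DELAY_NS = 10  # assumed bus receiver delay

cycle_ns = AS_ASSERT_NS + AS_DEASSERT_NS + DRIVER_DELAY_NS + RECEIVER_DELAY_NS
transfers_per_second = 1e9 / cycle_ns  # 100 ns cycle -> 10 megatransfers/s

for width_bytes in (4, 8):  # 32-bit and 64-bit transfers
    mbytes_per_s = transfers_per_second * width_bytes / 1e6
    print(f"{width_bytes * 8}-bit transfers: {mbytes_per_s:.0f} Mbytes/s")
```

Running this prints the 40 and 80 Mbytes per second figures quoted above.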
You might argue that a total of 20 ns for driver and receiver delay is too long and could be less, but these numbers are a good estimate and set a reasonable upper limit on possible VMEbus performance.
These are just theoretical numbers, however. In practice how fast can we go? Again, it all depends. Fast bus interface circuits can provide 30-35 Mbytes per second 32 bit transfers and 50-55 Mbytes per second 64 bit transfers. However, these values are best case numbers and in practice are more likely to be burst rates rather than continuous rates. What is the difference between burst and continuous? The difference can be significant and a source of frustration during application development. Burst rate refers to a board's instantaneous data transfer rate and does not take into account the amount of data being transferred. Burst rates can be quite high. For example, a board may have an on-board buffer that allows 1024 bytes to be sent out over the bus in 64 bit transfers at a data rate of 55 Mbytes per second. Once the 1024 byte transfer is complete, a significant delay can occur before the buffer is full again and another transfer can begin. Thus the 55 Mbyte per second burst rate can turn into a 10 Mbyte per second continuous rate very quickly. System developers need to be aware that board manufacturers usually publish burst rates. Burst rates are useful in determining overall system performance only if the system developer can account for other factors such as interrupt latency, bus arbitration, and software overhead.
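The burst-versus-continuous gap is easy to quantify. In this sketch the 1024-byte buffer and 55 Mbytes per second burst rate come from the example above, while the 84-microsecond refill delay is a hypothetical figure chosen to show how quickly the continuous rate falls toward 10 Mbytes per second.

```python
# Continuous rate = bytes moved per full burst-plus-refill period.
BUFFER_BYTES = 1024        # on-board buffer size (from the text)
BURST_RATE = 55e6          # bytes/s while the buffer drains (from the text)
REFILL_DELAY_S = 84e-6     # dead time before the next burst (assumed)

burst_time = BUFFER_BYTES / BURST_RATE  # ~18.6 microseconds on the bus
continuous_rate = BUFFER_BYTES / (burst_time + REFILL_DELAY_S)
print(f"continuous rate: {continuous_rate / 1e6:.1f} Mbytes/s")
```

With these numbers the bus sits idle more than four times as long as it is busy, so a 55 Mbytes per second burst rate delivers only about 10 Mbytes per second of sustained throughput.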
In general, hardware mechanisms for data transfers will always be faster than software mechanisms. Direct Memory Access (DMA) refers to a hardware circuit that, given an address and a data count, transfers data with no further software intervention. Because such circuits are optimized for data transfer only, they usually run much faster than transfers accomplished via software loops. A software loop must initially set up a read address, a write address, and a counter. During the transfer, numerous cycles are required to read data, to write data, to increment the counter, and to test whether the transfer is done. While a single DMA transfer cycle might take 100-200 ns, a single software controlled transfer cycle might take one or more microseconds. Thus software controlled data transfers across the VMEbus rarely exceed 10 Mbytes per second, while DMA burst transfers can achieve 50 Mbytes per second or more at 64 bits per transfer.
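A rough cost model makes the DMA-versus-software gap concrete. The per-step cycle times below are illustrative assumptions consistent with the figures in the text (a 100-200 ns DMA cycle, roughly a microsecond per word for a software loop), not measurements of any particular board.

```python
# Rough per-word cost model for DMA vs. a software copy loop (64-bit words).
DMA_CYCLE_NS = 150  # one DMA transfer cycle (100-200 ns range from the text)

# A software loop pays for a read, a write, a counter increment, and an
# end-of-transfer test on every word moved (step costs are assumed).
SW_STEP_NS = {"read": 300, "write": 300, "increment": 200, "test": 200}
sw_cycle_ns = sum(SW_STEP_NS.values())  # ~1 microsecond per word

for label, ns in (("DMA", DMA_CYCLE_NS), ("software loop", sw_cycle_ns)):
    mbytes_per_s = 8 / (ns * 1e-9) / 1e6  # 8 bytes moved per cycle
    print(f"{label}: {mbytes_per_s:.1f} Mbytes/s")
```

Under these assumptions DMA sustains roughly 53 Mbytes per second while the software loop manages about 8, matching the 50-plus versus under-10 Mbytes per second contrast drawn above.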
What if you need more performance? The creators of the VMEbus specification defined 64 pins for user I/O on the P2/J2 connector. Rows A and C are not used for VMEbus signals and can be used to bring out I/O or to implement a secondary bus. The VSB bus was defined in the mid-1980s as a secondary bus to VME and is still in use today. During the past year several new interconnect schemes have been introduced for use on P2. These interconnects include QuickRing, Raceway Interlink, SkyChannel, Heterogeneous Interconnect (HIC), and Signal Computing System Architecture (SCSA). In addition, serial bus technologies such as P1394 (Firewire) and AutoBahn by PEP Modular Computers provide new uses for the serial bus lines on the VMEbus P1/J1 connector.
All of these technologies give the system developer a wide range of choices for meeting the performance needs of different application requirements.
This page last updated: Sep 19, 1999
Reprinted from the VITA Journal with permission from VITA.