Return to "Visible Storage"

*** Please note: This website (comp-hist) was completed before I found out about Wikipedia in 2002.
Since then I have added material occasionally.
Items are certainly not complete, and may be inaccurate.
Your information, comments, corrections, etc. are eagerly requested.
Send e-mail to Ed Thelen. Please include the URL under discussion. Thank you ***

Intel iPSC/860

Manufacturer - Intel Super Computers
Identification, ID - iPSC/860
Date of first manufacture - ?
Number produced - ?
Estimated price or cost - ?
Location in museum - ?

Contents of this page:

  - Special features
  - Historical Notes
  - This Specimen
  - Interesting Web Sites
  - Other information

MIMD hypercube.

MIMD stands for Multiple Instruction, Multiple Data.
Hypercube refers to the node interconnection topology (see "The Interconnection Network" below).


Intel iPSC/860

128 nodes, 5.1 GFlops peak (MSR ORNL and CS UTK)

Intel iPSC/2

64 nodes (CS UTK)


The iPSC/860 is a high-performance parallel computer system. Its processing power comes from its processing nodes: each node in the iPSC/860 is either a CX or an RX processor, and each CX processor may also be accompanied by an SX scalar processor and/or a VX vector processor. Every iPSC/860 system contains at least one RX node. The CX node is based on the Intel386 microprocessor; an RX node is built around an Intel i860 microprocessor capable of a peak performance of 80 MFLOPS. The i860 has multiple arithmetic units: an integer unit, a floating-point adder, and a floating-point multiplier.

Special features


The Interconnection Network

The processing nodes of the iPSC/860 are interconnected in a hypercube architecture. In a hypercube of dimension N, each node has N neighbors and the total number of nodes is 2^N. For example, the machine at Oak Ridge National Laboratory (ORNL) has 2^7 = 128 RX nodes. The nodes of a hypercube are assigned unique addresses so that the addresses of any two neighbors differ in only one binary digit. The maximum data transfer rate between adjacent nodes on the hypercube is 20 Mbps.
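The addressing scheme above can be illustrated in a few lines of code. This is a sketch, not software for the actual machine: `neighbors` flips each address bit in turn to enumerate a node's N neighbors, and `route` shows dimension-order ("e-cube") routing, a standard hypercube routing technique in which the differing address bits are corrected one at a time, each correction being one hop to a neighbor.

```python
def neighbors(node, dim):
    """Addresses of a node's dim neighbors: each differs in exactly one bit."""
    return [node ^ (1 << i) for i in range(dim)]

def route(src, dst):
    """Dimension-order (e-cube) route from src to dst: fix the differing
    address bits from lowest to highest, one hop per bit."""
    path = [src]
    cur = src
    for i in range(max(src, dst).bit_length()):
        if (src ^ dst) & (1 << i):      # this address bit differs
            cur ^= (1 << i)             # hop across dimension i
            path.append(cur)
    return path

# On the 128-node (dimension 7) ORNL machine, node 0's neighbors are
# the seven nodes whose addresses are powers of two:
print(neighbors(0, 7))        # [1, 2, 4, 8, 16, 32, 64]
print(route(0b0000000, 0b1010001))  # [0, 1, 17, 81] - one hop per differing bit
```

Note that a message between the two most distant nodes of a 7-dimensional hypercube needs at most 7 hops, which is why the hypercube topology scales well: diameter grows as log2 of the node count.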


The iPSC/860 has a distributed memory architecture, meaning that each processor has its own primary memory. The amount of memory per node varies from installation to installation; the machine at ORNL, for instance, has 8 megabytes of memory at each node, some of which is used by the operating system for internal purposes. Secondary storage is usually available in the form of disks in the Concurrent File System (CFS).
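Because memory is distributed, a global result (such as the sum of one value per node) must be assembled by message passing. On a hypercube this is classically done by recursive doubling: in step i, every node exchanges its partial result with its neighbor across dimension i, so after N steps all 2^N nodes hold the total. The following is a minimal single-process simulation of that pattern, not the machine's actual message-passing library calls:

```python
def hypercube_sum(values):
    """Simulate a recursive-doubling global sum on a 2^N-node hypercube.
    values[k] is node k's local value; returns the list of per-node results,
    which after N exchange steps are all equal to the global total."""
    n = len(values)
    dim = n.bit_length() - 1
    assert n == 1 << dim, "node count must be a power of two"
    partial = list(values)
    for i in range(dim):
        # Each node adds the partial sum held by its dimension-i neighbor.
        partial = [partial[node] + partial[node ^ (1 << i)] for node in range(n)]
    return partial

# 8 nodes (dimension 3): every node ends up with the total after 3 steps.
print(hypercube_sum([1, 2, 3, 4, 5, 6, 7, 8]))  # [36, 36, 36, 36, 36, 36, 36, 36]
```

The key point is the step count: log2(nodes) exchange rounds instead of the nodes-minus-one rounds a simple ring would need, which is exactly the property the hypercube interconnect was designed to exploit.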

Historical Notes

This Specimen

Interesting Web Sites

Other information

If you have comments or suggestions, send e-mail to Ed Thelen

Go to Antique Computer home page
Go to Visible Storage page
Go to top

Updated April 30, 2000