SGI Altix UltraViolet 1000 - clustering at Xeon speeds


irix detailer
Feb 8, 2019
This is a look at the insides of some much later (2009) SGI hardware, based on Intel's Xeon CPUs. I was recently part of a rescue of an Altix UV1000 cluster and had the chance to examine a few of its nodes. Some of the cluster may eventually be restored, but for now nothing boots.

The UV1000 is based on the Nehalem-EX ("Beckton") class Xeon 7500-series 6-core CPU, a variant that does not do hyper-threading.

  • "Beckton" Nehalem-EX CPU
  • 6 cores
  • 2.67 GHz base / 2.80 GHz turbo / 18 MB L3 cache / 5.86 GT/s QPI / 130 W
  • LGA1567 socket

The interesting part of this chip is the multiple QPI ports, which form the basis of the NUMAlink 5 interconnect for this system.

The Xeon 7500 and E7 chips have four QPI ports per socket. The original UV 1000 blade design uses two of those ports to cross-link the two sockets together. Of the remaining two, one goes to the Boxboro chipset (which controls access to main memory and the local I/O slots on the blade), and the other links out to the NUMAlink 5 hub, which in turn has four links out to the NUMAlink 5 routers. Those routers implement an 8x8 (paired-node) 2D torus that can deliver up to 16 TB of shared memory space across those 256 sockets.
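The nice property of a 2D torus is that the wrap-around links keep the worst-case hop count low even as the grid grows. A minimal sketch of the hop-distance math for an 8x8 torus (the dimensions and function names here are my own illustration, not SGI's routing code):

```python
# Hop distance in an 8x8 2D torus, like the grid the NUMAlink 5
# routers form. Purely illustrative; names and sizes are assumptions.
DIM = 8

def torus_hops(a, b, dim=DIM):
    """Minimum router hops between nodes a=(x, y) and b=(x, y),
    taking the shorter direction around each ring."""
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    return min(dx, dim - dx) + min(dy, dim - dy)

# Worst case over the whole grid (the network diameter):
diameter = max(torus_hops((0, 0), (x, y))
               for x in range(DIM) for y in range(DIM))
print(diameter)  # 8 hops worst case, vs. 14 for a plain 8x8 mesh
```

So no two routers in the 8x8 grid are ever more than 8 hops apart, which is what keeps remote memory latency tolerable across the whole machine.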

Keeping shared state coherent across 256 CPU sockets (1,536 cores) must have been pretty amazing when it was going full-on.
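The scale here is worth a quick back-of-the-envelope check using just the numbers from the post:

```python
# Arithmetic on the figures quoted above; nothing here is measured.
sockets = 256
cores_per_socket = 6
shared_tb = 16

total_cores = sockets * cores_per_socket
per_socket_gb = shared_tb * 1024 // sockets

print(total_cores)    # 1536 cores in one shared-memory image
print(per_socket_gb)  # 64 GB of the shared space contributed per socket
```

That is a single system image the size of a modest cluster, which is exactly the point of the NUMAlink design.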

Here are some photos:







PIII, Core Duo, and Xeon 7500 CPUs


Passive cooling? They could do this because the racks had liquid chiller loops built in to absorb all that heat from the nodes.



Chillers on the back of each section of nodes. Those are giant radiators that get a cold line loop attached to remove heat.


The Intel® QuickPath Interconnect is a high-speed point-to-point interconnect. Though sometimes classified as a serial bus, it is more accurately considered a point-to-point link, as data is sent in parallel across multiple lanes and packets are broken into multiple parallel transfers.



Feb 4, 2019
Very cool writeup and pictures; I love the water cooling setup. I've worked in datacenters a fair amount but usually the only water is going to the CRACs and sometimes the fire suppression or humidifiers. Must be fun to have water going to the racks as well :)

Also the QPI information is intriguing. Never knew about that!
