
Tyan Transport GT28 Overview, Part 2

January 14, 2009

As I indicated in the earlier posts, the GT28 packs two narrow motherboards into a single 1U chassis to create a compact, dual-node HPC platform. Small as it is, each motherboard still carries two Socket F processors, sixteen 240-pin DDR2 DIMM slots, four Gigabit Ethernet ports, space for an embedded Mellanox InfiniBand processor, on-board video, a single low-profile PCIe riser slot (x8 slot and signal), and an SMDC slot.

Tyan S2935 Motherboard


The GT28 is not a quiet machine once powered up, and with eight 15,000 RPM fans (three per node, plus two power-supply fans) the system only gets louder under heavy load. Fortunately, the 45nm Opteron is easier on the thermal envelope than its predecessors, so the fans stay around 9,000 RPM most of the time. That said, you do not want one of these systems in anything but a full rack enclosure; the tell-tale whine of the system fans is not conducive to office work. This is no appliance chassis, and it was not designed to be one, but compared to an 8U AIC storage chassis, this thing is quiet.

The GT28 does provide more than your typical compact server motherboard in terms of I/O options – especially network. Although we were not testing the InfiniBand variant (about $600 additional), this $1,600 barebones system comes well equipped for a variety of tasks. As I indicated before, the test system is to become part of a four-node storage and hypervisor cluster based on VMware ESXi and Nexenta NAS (using Solaris' ZFS storage). One node will run ESXi with Nexenta running in a virtual machine, and three nodes will run ESXi using both NFS and iSCSI as the shared virtual storage medium (provided by Nexenta).

I do want to drift briefly into a discussion of cost. Although rock-bottom prices are not a focus, per-node costs are – especially relative to non-commodity computing models. The promise of a commodity computing model lies in narrowing a robust hardware ecosystem down to reliable, utility components at good, market-driven prices, where market forces apply consistent downward pressure on resupply costs (i.e., it is predictably less costly to replace components than to acquire them initially). Let us then look at a simple cost analysis of our four-node test system:

Item              Description                                         Part No.         Qty  Ref. Price  Extended
Barebones System  Transport GT28 (B2935-E)                            B2935G28V4H       1   $1,549.99   $1,549.99
Barebones System  Transport GT28 (B2935-E)                            B2935G28V4H       1   $1,549.99   $1,549.99
RAID Controller   On-board NVIDIA MCP55 Pro (4P, SATA)                MCP55Pro          4   $0.00       $0.00
RAID Controller   LSI SAS3442E-R (8P, SAS/SATA, LP, PCIe)             LSI00110          2   $350.00     $700.00
LAN Controller    On-board dual Intel 82571EB 10/100/1000 (4P, UTP)   On-board          4   $0.00       $0.00
OS Drive          SanDisk Cruzer, 4GB, USB                            SDCZ6-4096RB      4   $13.32      $53.27
Storage Drive     Seagate 500GB, 7200 RPM, SATA II                    ST3500320NS       8   $90.00      $720.00
CPU               Opteron 2376, 2.3GHz, 75W, quad-core, 45nm          OS2376WAL4DGI     4   $369.99     $1,479.96
Misc              Tyan M3296 SMDC (IPMI) BMC w/KVMoIP                 M3296             2   $103.76     $207.52
Misc              Tyan M3296 SMDC (IPMI) BMC w/KVMoIP                 M3296             2   $103.76     $207.52
Memory (RAID)     None                                                None              0   $0.00       $0.00
Memory (CPU)      Kingston 4GB (1x4GB), ECC/REG DDR2-667              KVR667D2D4P5/4G  16   $104.99     $1,679.84
TOTAL                                                                                                   $8,148.09


Total Nodes           4    Total Cost       $8,148.09
Total Compute Nodes   2    Avg. Per Node    $1,682.02
Total Storage Nodes   2    Avg. Per Node    $2,392.02
Available vCores     44    Avg. Per vCore     $185.18
Available 1GB VMs    40    Avg. Per VM        $203.70

This table shows the cost of four single-processor (four-core) nodes – the maximum cost per node – with each node running VMware ESXi, two nodes carrying LSI storage controllers (one per chassis), and an initial storage array of 1TB per storage node (two 500GB RAID1 volumes each). As the table indicates, the cost per "pure compute node" is only about $1,682, and each would be capable of running approximately 12 virtual workloads with 1GB RAM each (maximum). Ironically, workload distribution in a cost-sensitive environment would likely be very conservative, but it is beyond the scope of this blog to argue those issues here.
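As a sanity check, the bill-of-materials arithmetic above folds into a short script. The quantities and unit prices below are copied straight from the table (the grand total lands within a cent of the table's $8,148.09 because of rounding on the USB drive line):

```python
# Sanity-check the bill-of-materials arithmetic from the table above.
# Each entry is (quantity, unit price in USD), taken from the line items;
# zero-cost on-board items are omitted since they do not affect the total.
bom = {
    "Barebones GT28 chassis":    (2, 1549.99),
    "LSI SAS3442E-R controller": (2, 350.00),
    "SanDisk Cruzer 4GB (OS)":   (4, 13.32),
    "Seagate 500GB SATA":        (8, 90.00),
    "Opteron 2376":              (4, 369.99),
    "Tyan M3296 SMDC":           (4, 103.76),
    "Kingston 4GB DDR2-667":     (16, 104.99),
}

total = sum(qty * price for qty, price in bom.values())
nodes = 4

print(f"System total:     ${total:,.2f}")      # ~ $8,148
print(f"Average per node: ${total / nodes:,.2f}")
```

The same dictionary makes it easy to play with per-node cost as prices move, which is exactly the question the upgrade analysis below takes up.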

Looking at the upgrade costs involved in "fully populating" the systems raises more questions about when and how to scale. When do I upgrade existing systems versus add more physical nodes? If we assume that the prevailing prices of our processor and memory drop 15% by the time we upgrade, the cost to max out the system works out as follows:

Item          Description                                  Part No.         Qty  Ref. Price  Extended
CPU           Opteron 2376, 2.3GHz, 75W, quad-core, 45nm   OS2376WAL4DGI     4   $314.49     $1,257.97
Memory (CPU)  Kingston 4GB (1x4GB), ECC/REG DDR2-667       KVR667D2D4P5/4G  48   $89.24      $4,283.59
TOTAL                                                                                        $5,541.56


Total Nodes Upgraded    4    Upgrade Cost                        $5,541.56
Total Nodes Upgraded    4    Avg. Per Node (initial + upgrade)   $3,422.41
Additional vCores     108    Avg. Per vCore                        $126.76
Effective 2GB VMs     114    Avg. Per VM                           $120.08

In this upgrade, we have brought each node to eight cores (two quad-core processors) and 64GB of DDR2 RAM; total system storage remains the same. Our "baseline" virtual machine has moved from 1GB/VM to 2GB/VM, and the effective cost per VM has dropped from about $204/VM to about $120/VM, a theoretical per-workload cost reduction of roughly 40%. As a check against Dell's recommended ESXi platform, the R805 2U single-node rackmount server, our test platform weighs in at about $3,500 per node versus Dell's $8,600 per node (similarly configured), for a savings of $5,100 per node (60%). So, yes, economies appear to be working for us in this test case. Put another way, you could buy a four-node pair of our test systems and shelve a spare chassis, motherboard, and power supply for less than a two-node Dell system.
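The upgrade economics can be sketched the same way: apply the assumed 15% price drop to the CPU and memory line items, then spread the cumulative (initial plus upgrade) spend across the effective 2GB VM count from the table. The 15% figure and the VM count are taken from the text above; everything else is plain arithmetic:

```python
# Sketch of the upgrade economics: a 15% price drop on CPU and memory,
# one additional CPU and twelve additional 4GB DIMMs per node, four nodes.
DISCOUNT = 0.15

cpu_price = 369.99 * (1 - DISCOUNT)   # ~ $314.49 each
mem_price = 104.99 * (1 - DISCOUNT)   # ~ $89.24 per 4GB DIMM

upgrade = 4 * cpu_price + 48 * mem_price   # 4 CPUs + 48 DIMMs total
initial = 8148.09                          # grand total from the first table
cumulative = initial + upgrade

vms_2gb = 114                              # effective 2GB VMs from the table
print(f"Upgrade cost:      ${upgrade:,.2f}")
print(f"Cumulative total:  ${cumulative:,.2f}")
print(f"Cost per 2GB VM:   ${cumulative / vms_2gb:,.2f}")
```

Varying DISCOUNT shows how sensitive the per-VM figure is to component price erosion, which is the crux of the upgrade-versus-add-nodes question raised above.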
