Tyan Transport GT28 Overview, Part 1

January 13, 2009

First, let me say that I have been using Tyan barebones server products for many years and have found them both feature-rich and reliable. Furthermore, I have focused on AMD Opteron Eco-System products since they became available, due to the I/O benefits of HyperTransport and the overall better price/performance of Opteron versus Xeon offerings.

While I have considerable experience with Xeon systems and have participated in many side-by-side comparisons, I am convinced – as a result of such testing – that the I/O subsystems in Opteron-based platforms are far superior to comparable Xeon FSB (front-side-bus dependent) systems. For low-TCO systems, the ability to load I/O elegantly is not just an advantage: it is a must. I/O loading weighs heavily in TCO, where per-node utilization and efficiency are major factors.

That said, the GT28 has a Xeon-based counterpart – the Tank GT24 – to satisfy Xeon-based Eco-Systems. The Tank GT24 supports neither quad Gigabit Ethernet nor the built-in Mellanox InfiniBand interface. The absence of these advanced I/O capabilities is further testament to the weakness of the FSB paradigm.

Looking at the system block diagram for the GT28, we see the relationship of the Opteron chip(s), the HyperTransport bus(es) and the nForce Professional 3600 northbridge connecting the PCI and PCIe busses. The relationship between the CPU, memory and I/O subsystems is shown clearly in the provided block diagram (courtesy of Tyan's excellent system documentation).

S2935 Block diagram and bus topology

Based on the NVIDIA nForce Professional 3600 specifications, the chipset can accommodate 28 lanes and 6 links on the PCIe bus. Even including the InfiniBand controller (not used on my version of the motherboard), the nForce 3600, as implemented by Tyan, uses only 4 of 6 links and 24 of 28 lanes. This under-utilization of resources means the I/O bus of the nForce northbridge should never be saturated, even under full load.
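To make the lane/link arithmetic concrete, here is a small sketch that tallies the budget. The per-link widths in the list are hypothetical – chosen only so that they sum to the 4 links and 24 lanes Tyan uses – and are not taken from Tyan's documentation:

```python
# Sanity-check of the nForce Professional 3600 PCIe budget quoted above.
# The chipset caps are from the NVIDIA specification; the per-link widths
# below are a hypothetical x16/x4/x2/x2 split, not Tyan's actual routing.
TOTAL_LINKS = 6
TOTAL_LANES = 28

links_in_use = [16, 4, 2, 2]  # hypothetical widths for the 4 active links

used_links = len(links_in_use)
used_lanes = sum(links_in_use)

print(f"links: {used_links} of {TOTAL_LINKS}")
print(f"lanes: {used_lanes} of {TOTAL_LANES}")
print(f"headroom: {TOTAL_LINKS - used_links} links, "
      f"{TOTAL_LANES - used_lanes} lanes")
```

However the widths are actually allocated across the slots and onboard devices, the totals leave 2 links and 4 lanes of headroom at the northbridge.
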

It is important to remember that in our commodity computing model, the more applications we can build around our core products, the more successful our economic strategy will be. In this case, the S2935 is applicable to hypervisor and SAN-head tasks due to its CPU capacity, memory bandwidth and I/O capabilities. Therefore, if the system proves to deliver on its promise, it could become a useful building block in our commodity computing Eco-System. We will look deeper into the Eco-System concept in later posts, but I wanted to introduce the reason for this product evaluation here.

Initial Observations

The 1U enclosure is well built and just this side of cramped. Build quality on the case is good to adequate; if you are familiar with the Tyan Tank/Transport line of systems, you will not be disappointed. Personally, I find it less “sturdy” than the 2U series, but there is not much structure to work with, and the center column (power supply bay) serves as a structural element, much as the center hump does in a unibody automobile chassis. Both nodes have independent power and reset buttons, and the backplane for the SAS/SATA drives is split-powered to correspond to the on/off state of the associated motherboard.

Support for the “Shanghai” series of Opteron processors was accomplished with a simple BIOS upgrade (which requires an earlier Socket F processor to perform the flash). The resulting initial boot-up screen (again, after a flash update using a 2212 Opteron) shows the build-out of our test platform. In our lab we have two of these systems, with each of the four nodes configured identically.

S2935 BIOS Boot Screen

The exception to our “standardized” test platform is the “storage block” configuration, which adds an LSI1068E-based PCIe host adapter (LSI 3442E). We are using a Y-splitter to power the SAS/SATA drives from a single node and are connecting all four SAS/SATA hot-swap ports to that same node's LSI controller. This configuration gives the storage nodes a wider range of drive compatibility and lets us cascade storage for additional capacity using the external SAS port. I will detail this configuration in later post(s).

The SMDC controller not only allows for effortless command of the platform, but also provides an excellent way to make screen captures for this blog entry. The economics of the SMDC port – when compared to scalable KVM platforms – seem excellent once you consider the other features enabled by its addition.

Using the Tyan M3296 SMDC IP/KVM

An example of the utility enabled by the SMDC card is easy to make. In many test and deployment scenarios, booting from ISO is common practice with virtualized systems. With the embedded web client of the SMDC card, it is easy to share an SMB/CIFS-accessible ISO image with the managed node, presented as a USB CD-ROM. However, DVDs (or any media over 800MB) cannot be shared this way and would require a physical drive attached to the system. Since most distributions and installation scenarios allow for either CD or DVD media, this is not a problem; however, I would expect to see more DVD-only distros in the future, and things could change. We have been able to install VMware ESX, Oracle VM and Nexenta (OpenSolaris) without [hardware] compatibility issues. Likewise, floppy images can be presented to the system node using the same process.
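For reference, the SMB/CIFS share that the SMDC's web client mounts the ISO from can be as simple as a single Samba stanza. The share name and path here are hypothetical placeholders, not values from our lab configuration:

```ini
; Minimal, hypothetical smb.conf share exposing install ISOs
; to the SMDC's virtual-media client (read-only access is sufficient).
[isostore]
   path = /srv/iso
   read only = yes
   guest ok = yes
   browseable = yes
```

Any server exposing a CIFS share of the ISO directory will do; the SMDC client simply needs a reachable UNC path and credentials (or guest access) to present the image as a USB CD-ROM.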

Sharing a SMB/CIFS cd-rom image (ISO) using SMDC
