Tyan Transport GT28 Overview, Part 1
January 13, 2009
First, let me say that I have been using Tyan barebones server products for many years and have found them both feature-rich and reliable. Furthermore, I have focused on AMD Opteron Eco-System products since they became available, due to the I/O benefits of HyperTransport and the overall better price/performance of Opteron versus Xeon offerings.
While I have considerable experience with Xeon systems and have participated in many side-by-side comparisons, I am convinced – as a result of such testing – that the I/O subsystems in Opteron-based platforms are far superior to comparable front-side-bus (FSB) dependent Xeon systems. For low-TCO systems, the ability to load I/O elegantly is not just an advantage: it is a must. This loading factors considerably into TCO, where per-node utilization and efficiency are large factors.
That said, the GT28 has a Xeon-based counterpart – the Tank GT24 – to satisfy Xeon-based Eco-Systems. The Tank GT24 supports neither quad Gigabit Ethernet nor the built-in Mellanox InfiniBand interface. The lack of these advanced I/O capabilities is further testament to the weak I/O support of the FSB paradigm.
Looking at the system block diagram for the GT28, we see the relationship of the Opteron chip(s), the HyperTransport bus(es) and the nForce Professional 3600 Northbridge connecting the PCI and PCIe buses. The relationship between the CPU, memory and I/O subsystems is shown clearly in the provided block diagram (courtesy of Tyan's excellent system documentation).
The 1U enclosure is well built and just this side of cramped. Build quality on the case is good-to-adequate; if you are familiar with the Tyan Tank/Transport line of systems, you will not be disappointed. Personally, I find it less “sturdy” than the 2U series, but there is not much structure to work with, and the center column (power supply bay) serves as a structural element much as the center hump of a unibody automobile chassis does. Both nodes have independent power and reset buttons, and the backplane for the SAS/SATA drives is split-powered to correspond to the on/off state of the associated motherboard.
Support for the “Shanghai” series of Opteron processors was accomplished by a simple BIOS upgrade (the flash itself requiring an earlier-generation Socket F processor to be installed). The resulting initial boot-up screen (again, after a flash update performed with a 2212 Opteron) demonstrates the build-out of our test platform. In our lab we have two of these systems, with each of the four nodes configured identically.
The exception to our “standardized” test platform is the “storage block” configuration, which adds an LSI1068E-based PCIe host adapter (LSI 3442E). We are using a Y-splitter to power the SAS/SATA drives from a single node and are connecting all four SAS/SATA hot-swap ports to that same node’s LSI controller. This configuration gives us a wider range of drive compatibility for the storage nodes and lets us cascade storage for additional capacity using the external SAS port. I will detail this configuration in later posts.
The SMDC controller not only allows effortless command of the platform, but also provides an excellent way to make screen captures for this blog entry. The economics of the SMDC port – when compared to scalable KVM platforms – are excellent once you consider the other features enabled by its addition.
Using the Tyan M3296 SMDC IP/KVM
An example of the utility enabled by the SMDC card is easy to make. In many test and deployment scenarios, booting from ISO is common practice with virtualized systems. With the embedded web client of the SMDC card, it is easy to share an SMB/CIFS-accessible ISO image with the managed node, presented as a USB CD-ROM. However, DVDs (or any media over 800MB) cannot be shared this way and would require a physical drive attached to the system. Since most distributions and installation scenarios allow for either CD or DVD media, this is not a problem today; however, I would expect to see more DVD-only distros in the future, and things could change. We have been able to install VMware ESX, Oracle VM and Nexenta (OpenSolaris) without [hardware] compatibility issues. Likewise, floppy images can be presented to the system node using the same process.
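For readers setting this up themselves, a minimal read-only Samba share for hosting the ISO images works well as the SMB/CIFS source. The fragment below is a hypothetical sketch – the share name, path, and access settings are my own assumptions, not anything specific to the SMDC; any SMB/CIFS server reachable from the management network will do:

```
; smb.conf fragment - hypothetical read-only share for boot ISOs
[isos]
   comment = Installation ISO images for SMDC virtual media
   path = /srv/isos
   read only = yes
   guest ok = yes
   browseable = yes
```

The managed node then mounts the chosen ISO as a USB CD-ROM through the SMDC web client; keeping the share read-only avoids any chance of an installer scribbling on the image library.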