Posts Tagged ‘IOV’


Micro-Cloud Anyone? Supermicro Unleashes a (tiny) Monster

May 29, 2009
Supermicro 2021TM-BTRF - 4-nodes in 2U with redundant power.

Supermicro’s been holding the AS-2021TM 4-node-in-2U platform back for several weeks, but it’s finally out from behind proprietary OEMs. We’re talking about the Supermicro 2021TM-B “mini cluster,” of course, and we’ve been watching this platform for some time.

Why is this a great platform right now? The H8DMT-F motherboard supports not only 64GB of DDR2/800 memory but also HT3.0 links, enabling the slightly higher HyperTransport bandwidth of the upcoming Istanbul 6-core processors. The on-board IPMI 2.0 management (WPCM450, supporting KVM/IP and serial-over-LAN) with a dedicated LAN port, plus two inboard USB ports (supporting boot flash), makes this an ideal platform for “cloud computing” operations with high-density needs and limited budgets.

The inclusion of the on-board Intel Zoar (82575) dual-port Gigabit Ethernet controller means VMDq support for 4 receive queues per port using the “igb” driver, as we’ve reported in a previous post on “cheap” IOV. An nVidia MCP55-Pro provides Southbridge functions for 6x USB 2.0, 4x SATA and one PCI-express x16 (low-profile) slot. This is a VMware “vSphere ready” configuration.

Supermicro H8DMT-F Motherboard from 4-node-in-2U chassis

Each motherboard is installed on a removable tray, allowing motherboard trays to be hot-swapped (similar to blade architectures). The available x16 PCI-express slot allows a single dual-port 10GE card to drive higher network densities per node. An optional 20Gbps Mellanox Infiniband controller (MT25408A0-FCC-DI) is available on-board (PCI-express x8 connected) for HPC applications.

Each node is connected to a bank of three SATA hot-swap drive bays supporting RAID 0, 1 or 5 modes of operation (MCP55 Pro NVRAID). This makes the 2021TM a good choice for dense Terminal Services applications, HPC cluster nodes or VMware ESX/ESXi service nodes.

Key Factors:

  • Redundant power supply with Gold Level 93% efficiency rating
  • Up to 64GB DDR2/800 per node (256GB/2U) – Istanbul’s sweet-spot is 32-48GB
  • HT3.0 for best Socket-F I/O and memory performance
  • Modern i82575 1Gbps (dual-port) with IOV
  • Inboard USB flash for boot-from-flash (ESXi)
  • Low-profile PCI-express x16 (support for dual-port 10GE & CNAs)
  • Hot-swap motherboard trays for easy maintenance
  • Full KVM/IP with Media/IP per node (dedicated LAN port)
  • Available with on-board Mellanox Infiniband (AS-2021TM-BiBTRF) or without (AS-2021TM-BTRF)

Discover IOV and VMware NetQueue on a Budget

April 28, 2009

While researching advancements in I/O virtualization (VMware) we uncovered a “low cost” way to explore the advantages of IOV without investing in 10GbE equipment: the Intel 82576 Gigabit Network Controller, which supports 8 receive queues per port. This little gem comes in a 2-port by 1Gbps PCI-express package (E1G142ET) for around $170/ea on-line. It also comes in a 4-port by 1Gbps package (full or half-height, E1G144ET) for around $450/ea on-line.

Enabling VMDq/NetQueue is straightforward:

  1. Enable NetQueue in VMkernel using VMware Infrastructure 3 Client:
    1. Choose Configuration > Advanced Settings > VMkernel.
    2. Select VMkernel.Boot.netNetqueueEnabled.
  2. Enable the igb module in the service console of the ESX Server host: # esxcfg-module -e igb
  3. Set the required load option for igb to turn on VMDq:
    The option IntMode=3 must exist to indicate loading in VMDq mode. A value of 3 for the IntMode parameter specifies using MSI-X and automatically sets the number of receive queues to the maximum supported (devices based on the 82575 controller enable 4 receive queues per port; devices based on the 82576 controller enable 8 receive queues per port). The number of receive queues used by the igb driver in VMDq mode cannot be changed. A consolidated command-line sketch follows this list.
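
For reference, the whole sequence can be run from the service console in one pass. This is a minimal sketch assuming a dual-port 82576 adapter (the IntMode option takes one comma-separated value per port, so adjust the list to match your port count):

  # esxcfg-module -e igb
  # esxcfg-module -s "IntMode=3,3" igb
  # esxcfg-module -g igb

The first command enables the module, the second sets the VMDq/MSI-X load option, and the third echoes the option string back so you can confirm it before rebooting the host for the new setting to take effect.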

AMD and Intel I/O Virtualization

April 26, 2009

Virtualization now reaches an I/O barrier where consolidated applications must vie for increasingly limited I/O resources. Early virtualization techniques – both software and hardware assisted – concentrated on process isolation and gross context switching to accelerate the “bulk” of the virtualization process: running multiple virtual machines without significant processing degradation.

As consolidation potential is greatly enhanced by new processors with many more execution contexts (threads and cores), the limitations imposed on I/O – software translation and emulation of device communication – begin to degrade performance. This degradation further limits consolidation, especially where significant network traffic (over 3Gbps of non-storage VM traffic per virtual server) or specialized device access comes into play.

I/O Virtualization – The Next Step-Up

Intrinsic to AMD-V in revision “F” Opterons and newer AM2 processors is I/O virtualization support enabling hardware-assisted memory management in the form of the Graphics Aperture Remapping Table (GART) and the Device Exclusion Vector (DEV). These two facilities provide address translation of I/O device access to a limited range of the system physical address space, along with limited I/O device classification and memory protection.

Combined with specialized software, GART and DEV provided primitive I/O virtualization but were limited to the confines of the memory map. Direct interaction with devices and virtualization of device contexts in hardware are not efficiently possible in this approach, as VMs must still rely on hypervisor control of device access. AMD defined its I/O virtualization strategy as AMD IOMMU in 2006 (now AMD-Vi) and has continued to improve it through 2009.

With the release of new motherboard chipsets (AMD SR5690) in 2009, end-to-end I/O virtualization will bring significant I/O performance gains to the platform. Motherboard refreshes based on the SR5690 should enable Shanghai and Istanbul processors to take advantage of the full AMD IOMMU specification (now AMD-Vi).

Similarly, Intel’s VT-d approach combines chipset and CPU features to solve the problem in much the same way. Due to the architectural separation of the memory controller from the CPU, processors must not only carry the additional instruction enhancements but must also be coupled to northbridge chipsets that contain the supporting logic. This feature was initially available in the Intel Q35 desktop chipset in Q3/2007.
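
Whether a given platform actually exposes these features to a hypervisor depends on the CPU, the chipset and the BIOS together. As a quick sanity check (a sketch run from a generic Linux environment, not ESX-specific), the kernel log will show the relevant ACPI tables when the BIOS publishes them:

  # dmesg | grep -i -e dmar -e ivrs -e iommu

Intel VT-d platforms publish a DMAR table and AMD IOMMU platforms publish an IVRS table; if neither shows up, look for a VT-d or IOMMU switch in the BIOS before concluding the hardware lacks support.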