Archive for March, 2009


Quick Take: Microsoft’s Azure – Ctrl-Alt-Del

March 23, 2009

It is not surprising that Microsoft’s cloud computing “technology preview” platform called Azure took a nose-dive. What’s more, it’s no surprise that an automated maintenance service caused the platform to crumble. As indicated by the Azure team, the failure was due to:

“During a routine operating system upgrade on Friday (March 13th), the deployment service within Windows Azure began to slow down due to networking issues.  This caused a large number of servers to time out and fail…”

“…We have learned a lot from this experience.  We are addressing the network issues and we will be refining and tuning our recovery algorithm to ensure that it can handle malfunctions quickly and gracefully…”

“…For continued availability during upgrades, we recommend that application owners deploy their application with multiple instances of each role…”

Did they really think that “Friday the 13th” could buy them some sympathy? In any case, running two instances of the same image does not seem like a way to CONSERVE resources to me, and it seems to fly in the face of “green” practices. Given that graceful maintenance processes can be handled – on-line – by simple vMotion in VMware “clouds” – it makes me wonder if Azure is nothing but a bunch of Windows-on-Hyper-V servers managed by untested PowerShell scripts…

Oddly, the cure-all of running multiple instances proved not to be 100% effective, as evidenced by their subsequent admission:

“Any application running only a single instance went down when its server went down.  Very few applications running multiple instances went down, although some were degraded due to one instance being down.”

But rest assured, there will be no charge for the kludge, er, fix to the problem:

“We will not count the second instance against quota limits, so CTP participants can feel comfortable running two instances of each application role.”
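For context, the “multiple instances of each role” advice maps to a single setting in the Azure service configuration file. As a hedged sketch (the service name and role name here are invented for illustration), a CTP-era ServiceConfiguration.cscfg might look like:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative only: serviceName and the role name are made up. -->
<ServiceConfiguration serviceName="ExampleService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <!-- Two instances, so an upgrade or host failure takes out only one. -->
    <Instances count="2" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>
```

In other words, availability during maintenance is pushed onto the application owner as a deployment-time count, rather than handled transparently by the platform.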

SOLORI’s take: This reminds me of the early VoIP days, when everyone released a VoIP product and very few were ready-for-prime-time. That gave the entire industry – good and bad alike – a black eye and probably delayed broader adoption of VoIP by a solid five years. Could this really be Microsoft’s strategy: to undermine the concept of cloud computing so that it is only fully-baked when they “proclaim” it to be?

SOLORI’s 2nd take: I question the sanity of anyone trusting Microsoft with their cloud computing initiative. Microsoft proclaims their product delivers: choice, low risk and fewer distractions. What about Uptime? Performance? Cost? Management? Responsiveness? Isn’t that the bread-and-butter of cloud computing? Too many questions – not enough answers…

Check out TechWorld’s article for their perspective…


Quick Take: Cisco Enters the Blade Market

March 20, 2009

NetworkWorld has a decent article on Cisco’s entry into the blade computing market with a virtualization-focused product Cisco calls the “Unified Computing System.” It’s not commodity stuff, building on “soon to be released” Nehalem processors, 10Gb interfaces and FCoE capability in a supposedly made-for-VMware design.

The response from the Dell VP made me reach for a handkerchief (somebody get him a crying towel!). Of course the Cisco solution is “very niche-focused,” but it is also very forward-thinking. However, given Cisco’s record on computing hardware in its embedded products, it’s hard to have a lot of confidence in their “first wave” of blades. Likewise, their QC on software needs to be “in the zone” if they are going to be successful in the “densely virtualized” market this box implies.

SOLORI’s take: Feature-for-feature, Cisco will be alone for a few months as others play catch-up, but catch-up to what market? Are there really lots of enterprise customers out there begging for 10Gb/FCoE-connected blade servers at price-be-damned rates? This approach is moving AWAY from commodity computing and, hence, is a “loser” based on the SOLORI enterprise model.

SOLORI’s 2nd take: This could become a juggernaut in the HPC setting, where “price-be-damned” budgets come from institutions and governments. It looks as if Cisco’s finally found a way to move its pricey 10G products into that space.

Read the entire article at… Read the rest of this entry »


Quick Take: IBM moves to buy Sun

March 19, 2009

The Wall Street Journal is reporting that IBM is in talks with Sun that could result in an all-out purchase – possibly as soon as this week. Sun, seen as a hardware company by the WSJ, could be a good fit for IBM, which has opened its go-to-market strategy from proprietary to commodity hardware. IBM, the middleware titan, is famous for turning otherwise commodity wares into cash flow.

Likewise, both IBM and Sun are open source supporters with proven records of cooperation within the community. Embedding Java and SunStorage into IBM’s arsenal could prove an effective strategy for Big Blue as systems migrate to the cloud and away from Microsoft’s control.

SOLORI’s take: IBM and Sun could be a very good thing with lots of valuable resources trickling to the open source community and SMB space.

See the entire WSJ article here.


Quick Take: Battle for the Presentation Layer

March 18, 2009

The battle lines are drawn in the war to determine who will control your desktop in the future – and it’s not about what operating system (OS) you’ll be running – it’s about who will pull the virtual strings behind the OS. Until recently, Windows users had RDP and ICA as the main “enterprise” desktop remote-access services, along with a protocol soup of new alternatives.

Today, there is an array of alternative access technologies flying the banner of Virtual Desktop Infrastructure (VDI) and a confusing mess of protocols, features and limitations. Most recently, this even includes the traditionally security-focused Symantec and its “Endpoint Virtualization” product. While this serves to bolster our prediction of a Microsoft/Citrix merger – based on the sheer number of vectors competing for the platform – it also presents a familiar case of “whose approach will win” for end users and adopters.

Brian Madden’s blog recently touched on this competition and where the major players – in his opinion – stand to lose and gain. It’s worth the read as are the related posts from his blog on the topic.

SOLORI’s take: this war’s been brewing for some time now, and it’s only going to get ugly before things settle down. So far, it’s all packaging and management with no new vision towards an “innovative way” of application deployment – just better ways…


In-the-Lab: VSA Shoot-out

March 17, 2009

We’re cooking with gas over the next two weeks, making performance contrasts using similar resource build-outs of:

  1. FreeNAS (0.7 nightly build)
  2. OpenFiler 2.3
  3. NexentaStor (1.1.5 dev)
  4. SunStorage (VSA)

All are configured as VSAs under ESXi with pass-thru access to locally attached storage. Why do this? ESX provides some supervisory analysis that makes quick lab comparisons easy. Also, VSA storage is a 2009 killer-app and absolutely necessary for targeted micro-deployments.

These are not in-depth storage studies, but an out-of-the-box analysis of what to expect and where to spend your time (and money). Stay tuned…


Quick Take: Licensing Benefits of Virtualization

March 17, 2009

For some, the real licensing benefit of virtualization remains hidden – a situation aided by some rather nebulous language in end-user licenses from Microsoft and others.

Steve Kaplan, over at DABCC, has a brief and informative article on how licensing affects the deployment costs of virtualized Microsoft products – sometimes offsetting the cost of the hypervisor in a VMware environment, for instance.

SOLORI’s 1st take: the virtual data center has new ways to increase costs with equal or better offsets to speed ROI – especially where new initiatives are concerned. When in doubt, talk to your software vendor and press for clear information about implementation licensing costs.

SOLORI’s 2nd take: Steve’s report relies on Gartner’s evaluation which is based on Microsoft policies that are outdated. For instance, Server 2003 R2 is NOT “the only edition that will allow the customer to run one instance in a physical operating system (OS) environment and up to four instances in virtual OS environments for one license fee.” This also applies to Server 2008… (see Microsoft links).

Check out Steve’s evaluation here. Also, see Microsoft’s updated policy here and their current Server 2003 policies here.


Quick Take: Cloud-ready Packaging

March 6, 2009

Robin Harris has a quick take on a cloud computing shortcut he calls “credit card and a cloud.” He names an enabling service that pre-packages your OSS apps in a “ready to deploy” format for numerous on-line and in-house virtualization platforms.

SOLORI’s take: As these services crop up, differentiation between cloud vendors will tighten. In the meantime, use them to reduce your deployment time and cost.

Check out the details and links at his storage blog.


SBS 2008 Panics, Needs IPv6

March 4, 2009

Remember how you were told to disable all unused applications and protocols when securing a compute environment? If you’ve been in networking for years – like I have – it’s almost a reflex action. This is also more recently codified in PCI/DSS Section 2.2.2, right? It also seems like a really basic, logical approach. Apparently Microsoft doesn’t think so. Apparently, there is a “somewhat artificial albeit deeply ingrained” dependency on IPv6 in Windows Server 2008.

2.2.2 Disable all unnecessary and insecure services and protocols (services and protocols not directly needed to perform the device’s specified function).

– PCI Security Standards Council

Considering the lackluster adoption rate of IPv6 in the Internet domain, it is hard to argue that IPv6 should be a requirement on the local network. Given that most system administrators have enough difficulty understanding IPv4 networks, a dependency on IPv6 seems both premature and an unnecessary complexity.

Corollary: Disabling IPv6 Kills SBS 2008

Simply disabling IPv6 at the network-card level carries no dire warning. Services continue to function properly with no warnings or klaxon calls. However, a reboot tells a different story: the absence of IPv6 KILLS a myriad of services. Read the rest of this entry »
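For reference, the two common ways to “disable” IPv6 behave differently: unchecking IPv6 on the adapter merely unbinds it from that NIC, while the registry method Microsoft documents (KB 929852) disables IPv6 components system-wide. A hedged sketch of the latter – shown only to illustrate the mechanism, not as a recommendation for SBS 2008, where core services depend on IPv6:

```reg
Windows Registry Editor Version 5.00

; Per Microsoft KB 929852: 0xFF disables all IPv6 components except the
; loopback interface. Do NOT apply this on SBS 2008 -- services such as
; Exchange expect IPv6 to be present and will fail after a reboot.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters]
"DisabledComponents"=dword:000000ff
```

The value only takes effect after a reboot – which matches the symptom above: everything looks fine until the restart.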


StorMagic offers Free VSA

March 3, 2009

StorMagic (UK) is offering a “free” license for their new $1,000 virtual storage appliance (VSA) targeted specifically at VMware ESX users. This VSA – like all VSAs to date – targets the directly attached storage (DAS) of your ESX server as fodder for shared storage: commandeering and redistributing the DAS as a network share for all ESX servers in your farm.

How is StorMagic’s VSA – they call it the StorMagic SvSAN – different from other VSA offerings?

  • First, it is being offered “free” for the 2TB management license if you “qualify” by getting a “promo code” from a reseller. Fortunately, getting a promo code is as easy as clicking the “help balloon” on the download form.
  • Second, it offers a commercially supported SAN platform – under ESX – that can be managed directly from vCenter. This allows direct management of the underlying RAID controller on the ESX hardware. Currently, all LSI MegaRAID SAS controllers are supported, as well as 3Ware’s 9500S, 9650SE and 9690 Series, plus support for Intel’s SRCSAS-RB/JV and SRCSATAWB controllers.
  • Third, the VSA supports all of the basic functions needed in an ESX/HA environment: high availability and mirroring, snapshots and iSCSI. HA features are available through an additional license (now being offered 2-for-1), along with 256 levels of snapshot – per VSA – that work with a VSS provider for Windows.

More importantly, StorMagic is a VMware Technology Alliance Partner, implying a depth of support that open-source “free” products cannot offer. SvSAN requires ESX 3.5+, one vCPU, a 2000MHz reservation, 1GB of memory, Gigabit Ethernet connection(s), 500MB of disk space and a supported RAID controller. Follow this link to try SvSAN.


SOLORI’s Laws of SMB Virtual Architecture

March 2, 2009

  1. Single points of failure must be eliminated.
  2. Start simple, add complexity gradually.
  3. Improve stability and reliability first.
  4. Improve capacity only after achieving stability and reliability.
  5. Start with 50% more storage than you “need.”
  6. Start with 4GB of RAM per CPU core.
  7. Start with at least 3 CPU cores.
  8. Avoid front-side bus architectures.
  9. Use as many disks as possible to achieve your storage target.
  10. Secure your management network.

Law 1: Single Points of Failure Must Be Eliminated

This could have many interpretations, but here’s mine: Noah was right, everything must come in pairs. At the most basic level, this means two switches, two “hosts” and two Gigabit Ethernet ports per trunk, per “host” – minimum.

Redundant Host Power Supplies

At the very low end, computer chassis do not come with redundant power supplies. Since undersizing, line transients and heat build-up are leading causes of power-supply failure, this risk can be mitigated with aftermarket power-supply replacements, which run under $250/server.

Redundant Switches

For switch redundancy, an alternative to switch stacking is 802.3ad link aggregation between two non-stacking switches. Today, most “web managed” VLAN-capable switches include 802.3ad, LACP or at least static trunking between switches. This allows multiple ports to be “bonded together” to form a single “logical link” between switches. A 24-port Gigabit Ethernet switch in this class can be found for $300-500. Read the rest of this entry »
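The same 802.3ad aggregation applies on the host side of the link. As a hedged sketch (RHEL/CentOS 5-era file layout; interface names and addresses are illustrative, not from any SOLORI build), teaming two Gigabit ports into one logical link looks like:

```ini
# Hypothetical sketch: Linux 802.3ad (LACP) bonding of two GigE ports.

# /etc/modprobe.conf -- load the bonding driver in 802.3ad mode,
# checking link state every 100ms
alias bond0 bonding
options bond0 mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the logical link
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- physical member
# (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

The switch ports on the other end must be configured for LACP (or a static trunk) for the aggregate to come up; with the ports split across two stacked or aggregation-capable switches, a single switch failure no longer severs the host.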