Advanced Storage Connectivity for Virtual Servers: Why and How

The lecture focused on FC storage only, not iSCSI. In fact, they recommend against using iSCSI to store any virts; essentially, it's too slow to effectively support a high number of them.

What kinds of virtualization are there?

Virtualization 1.0 was Microsoft Virtual Server, VMware Server, or ESX 2.x.
Just consolidating logical servers onto a single physical host. This uses local storage (onboard SCSI, MSA, PowerVault, etc).
Benefits:
  • Was good for development and test environments.
  • Was also good for power and cooling.
  • Many-to-one configurations.
  • Supports legacy virts.
Virtualization 2.0: Hyper-V, ESX 3.x
Virtualization of a corporate-wide server infrastructure. Requires networked storage (SAN or NAS), with FC recommended.
Benefits:
  • Service standardization and extension (high-availability, etc)
Virtualization 3.0
End-to-end virtualization: virtual storage, servers, fabrics, networks, etc.
Benefits:
  • Self-healing
Storage
Shared storage is essential; otherwise, you're limited to Virtualization 1.0.
FC is preferred because:
  • it’s higher performance
  • optimized for storage I/O
  • low CPU overhead (protocol offload, MSI-X interrupt management)
  • enables large numbers of VMs over a single link
iSCSI requires proportionally more CPU power as bandwidth increases.
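To put that in perspective, here's a back-of-envelope sketch (plain Python; the roughly 1 GHz of CPU per 1 Gb/s figure for software iSCSI is the old rule of thumb from this era, not a number from the session, and the offloaded-FC overhead is my own assumption):

    # Rough estimate of host CPU consumed by storage traffic as link speed grows.
    # Assumption: software iSCSI costs ~1 GHz of CPU per 1 Gb/s of throughput
    # (the "1 GHz per 1 Gb/s" rule of thumb); an offloading FC HBA leaves only
    # a small fixed cost per Gb/s on the host. Numbers are illustrative only.

    ISCSI_GHZ_PER_GBPS = 1.0    # software initiator, no TOE/offload (assumed)
    FC_GHZ_PER_GBPS = 0.05      # protocol handled on the HBA (assumed)

    def host_cpu_ghz(link_gbps, ghz_per_gbps):
        """CPU (in GHz) the host burns just moving storage traffic."""
        return link_gbps * ghz_per_gbps

    for gbps in (1, 4, 8):
        iscsi = host_cpu_ghz(gbps, ISCSI_GHZ_PER_GBPS)
        fc = host_cpu_ghz(gbps, FC_GHZ_PER_GBPS)
        print(f"{gbps} Gb/s link: ~{iscsi:.1f} GHz CPU for software iSCSI "
              f"vs ~{fc:.2f} GHz with an offloading FC HBA")

However rough the constants, the point stands: software iSCSI CPU cost scales linearly with bandwidth, while an offloading FC HBA keeps the host cost roughly flat.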
Hyper-V and Storage
The new hypervisor, Hyper-V, is much like Xen. Hyper-V is currently at the RC stage and is available for download (not sure from where, though).
There are limitations with virtualized FC connectivity.
The solution is Virtual HBA technology: a single endpoint (HBA) registers multiple fabric addresses, and each VM is assigned one or more virtual ports. In effect, it acts like a virtual Fibre Channel switch. It's based on N_Port ID Virtualization (NPIV), an ANSI T11 standard (t11.org).
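To make that concrete, here's a minimal sketch (plain Python, not any vendor or Hyper-V API; the WWPN values and VM names are invented) of how a single physical HBA port can present per-VM virtual ports to the fabric:

    # Toy model of NPIV: one physical N_Port logs into the fabric, then
    # registers additional virtual ports (each with its own WWPN), so every
    # VM gets a fabric identity that zoning and LUN masking can target.
    # The WWPNs below are fabricated for illustration.

    class PhysicalHBA:
        def __init__(self, wwpn, max_vports):
            self.wwpn = wwpn                # identity of the physical port
            self.max_vports = max_vports    # e.g. 16 for a midrange 4G HBA
            self.vports = {}                # vm_name -> virtual WWPN

        def create_vport(self, vm_name):
            """Assign a new virtual port (virtual WWPN) to a VM."""
            if len(self.vports) >= self.max_vports:
                raise RuntimeError("HBA out of NPIV virtual ports")
            vwwpn = f"20:{len(self.vports) + 1:02x}:00:11:22:33:44:55"  # made up
            self.vports[vm_name] = vwwpn
            return vwwpn

    hba = PhysicalHBA(wwpn="10:00:00:11:22:33:44:55", max_vports=16)
    for vm in ("sql01", "exch01", "web01"):
        print(vm, "->", hba.create_vport(vm))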
Cartoon in the presentation: “I have no idea how we virtualize something. I just enjoy telling people to do it.” (Cartoon: Ted Goff)
If you’re running apps on your VMs that require FC connectivity, you can run:
  • Midrange 4G HBA: 16 VMs
  • Enterprise 4G HBA: 100 VMs
  • Enterprise 8G HBA: 255 VMs
In Virtualization 1.0, you use one large shared LUN. This limits performance and doesn't support virtual HBAs (or zoning, LUN masking, etc).
For Virtualization 2.0, you can use a dedicated LUN for each VM. This requires LUN assignment per VM (or per VM cluster, as needed), so your storage engineers need to be involved.
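Here's a rough sketch (plain Python; the VM names, LUN sizes, and array port are all invented) of the kind of per-VM allocation plan the storage team ends up maintaining under the Virtualization 2.0 approach:

    # Sketch of a per-VM storage plan: each VM gets its own LUN, and the
    # masking/zoning record ties the VM's virtual WWPN (from NPIV) to that
    # LUN on a specific array port. All identifiers below are invented.

    vms = {
        # vm_name: (virtual WWPN, requested LUN size in GB)
        "sql01":  ("20:01:00:11:22:33:44:55", 500),
        "exch01": ("20:02:00:11:22:33:44:55", 750),
        "web01":  ("20:03:00:11:22:33:44:55", 100),
    }

    ARRAY_PORT = "array-ctrl-A:p0"   # hypothetical target port

    def build_plan(vms):
        plan = []
        for lun_id, (vm, (vwwpn, size_gb)) in enumerate(sorted(vms.items())):
            plan.append({
                "vm": vm,
                "lun_id": lun_id,
                "size_gb": size_gb,
                "mask_initiator": vwwpn,   # only this virtual port sees the LUN
                "zone": f"{vm}__{ARRAY_PORT}",
            })
        return plan

    for entry in build_plan(vms):
        print(entry)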
System Center Virtual Machine Manager (VMM) Storage Integration
That was slick: the session just turned into a fairly blatant pitch for Emulex. Granted, we got some good information about practices for allocating storage for virtual servers, but not much about best practices for actually configuring VM hosts to use FC storage and allocating it to the VMs themselves.
The next seminar will be Introducing MS Assessment and Planning Solution Accelerator for Windows Server 2008, Hyper-V and Virtual Server 2005 R2.

Posted from MS TechEd, 2008
