MS Virtualization Solutions

This should prove helpful if you’re trying to work out which products you’ll need to virtualize your systems, services, applications, and servers:


A comprehensive set of virtualization products, from the data center to the desktop; assets, both virtual and physical, are managed from a single platform.
Management is through System Center:
  • Profile Virtualization: Document redirection, offline files
  • Presentation Virtualization: Terminal Services (through Calista Technologies)
  • Desktop Virtualization: Virtual PC (through Kidaro)
  • Application Virtualization: Desktop Optimization Pack, Application Virtualization (formerly SoftGrid)
  • Server/Desktop Virtualization: Windows Server 2008, Hyper-V Server, Virtual Server 2005 R2, Windows Vista Enterprise Centralized Desktop (VECD)
That ought to help.
In the short term, it looks like we must push 2008 as much and as quickly as possible; all of this VM technology will require 2008 Server. Unless, of course, you just want to run some basic 32-bit standalone virtual servers — you can use Virtual Server 2005 R2 for that.

Posted from MS TechEd, 2008

Advanced Storage Connectivity for Virtual Servers: Why and How

The lecture focused on FC storage only, not iSCSI. In fact, they’re recommending against using iSCSI to store any virts — essentially, it’s too slow to effectively support a high number of virts.

What kinds of virtualization are there:

Virtualization 1.0 was MS Virtual Server, VMWare Server, or ESX 2.x.
Just virtualizing logical servers onto a single physical box. This uses local storage (on-board SCSI, MSA, PowerVault, etc).
Benefits:
  • Was good for development and test environments.
  • Was also good for power and cooling.
  • Many-to-one configurations.
  • Supports legacy virts.
Virtualization 2.0: Hyper-V, ESX 3.x
Virtualization of a corporate-wide server infrastructure. Requires networked storage (SAN or NAS); FC is recommended.
Benefits:
  • Service standardization and extension (high-availability, etc)
Virtualization 3.0
End to end virtualization. Virtual storage, servers, fabrics, networks, etc.
Benefits:
  • Self-healing
Storage
Shared storage is essential; otherwise, you’re limited to V1.0.
FC is preferred because:
  • it’s higher performance
  • optimized for storage I/O
  • low CPU overhead (protocol offload, MSI-X interrupt management)
  • enables large numbers of VMs over a single link
iSCSI requires proportionally more CPU power as bandwidth increases.
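A rough back-of-the-envelope check on that claim, using the old “~1 GHz of CPU per 1 Gb/s of TCP traffic” rule of thumb (my numbers, not the presenter’s):

```python
# Rough estimate of the host CPU consumed by software iSCSI at a given
# line rate, using the classic "~1 GHz per 1 Gb/s of TCP" rule of thumb.
# Purely illustrative; real overhead depends on the initiator, NIC
# offloads, and block size.

def iscsi_cpu_ghz(link_gbps: float, ghz_per_gbps: float = 1.0) -> float:
    """CPU (GHz) spent just moving iSCSI/TCP traffic at link_gbps."""
    return link_gbps * ghz_per_gbps

for gbps in (1, 4, 10):
    print(f"{gbps:>2} Gb/s software iSCSI -> ~{iscsi_cpu_ghz(gbps):.1f} GHz of CPU")

# An FC HBA offloads that work to the adapter, which is why its CPU cost
# stays roughly flat as you stack more VMs onto the link.
```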
Hyper-V and Storage
The new hypervisor, Hyper-V, is architecturally much like Xen. Hyper-V is currently in RC and available for download (not sure from where, though).
There are limitations with virtualized FC connectivity.
The solution is Virtual HBA technology: a single physical endpoint (HBA) registers multiple fabric addresses, and each VM is assigned one or more virtual ports. It’s effectively a virtual Fibre Channel switch. Based on N_Port ID Virtualization (NPIV), an ANSI T11 standard (t11.org).
Cartoon in the presentation: “I have no idea how we virtualize something. I just enjoy telling people to do it.” (Cartoon: Ted Goff)
If you’re running apps on your VMs that require FC connectivity, you can run:
  • Midrange 4G HBA: 16 VMs
  • Enterprise 4G HBA: 100 VMs
  • Enterprise 8G HBA: 255 VMs
In Virt 1.0: use one large shared LUN. This limits performance and does not support virtual HBAs (or zoning, LUN masking, etc).
For Virt 2.0, you can use dedicated LUNs for each VM. Requires LUN assignment per VM (or per VM cluster, as needed), so your Storage Engineers need to be involved.
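To make the virtual HBA idea concrete, here’s a toy model (mine, not from the session) of one physical HBA registering several NPIV virtual ports, with a dedicated LUN masked to each VM’s virtual WWPN:

```python
# Toy model of NPIV-style virtual HBAs: one physical N_Port registers
# several virtual port IDs (WWPNs), each owned by a VM, and the array
# masks a dedicated LUN to each virtual WWPN. All names are invented
# for illustration; this is not a real management API.

from dataclasses import dataclass, field

@dataclass
class VirtualPort:
    wwpn: str        # virtual World Wide Port Name the VM logs in with
    vm_name: str

@dataclass
class PhysicalHBA:
    wwpn: str
    vports: list = field(default_factory=list)

    def create_vport(self, vm_name: str) -> VirtualPort:
        # Each virtual port gets its own fabric address, so zoning and
        # LUN masking can be applied per VM instead of per host.
        vport = VirtualPort(f"{self.wwpn}:{len(self.vports) + 1:02x}", vm_name)
        self.vports.append(vport)
        return vport

hba = PhysicalHBA(wwpn="10:00:00:00:c9:aa:bb:cc")   # one physical adapter
lun_masking = {}                                    # virtual WWPN -> dedicated LUN
for i, vm in enumerate(["sql01", "web01", "devqa03"]):
    vport = hba.create_vport(vm)
    lun_masking[vport.wwpn] = f"LUN{i:02d}"

for wwpn, lun in lun_masking.items():
    print(f"{wwpn} -> {lun}")
```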
System Center Virtual Machine Manager (VMM) Storage Integration
That was slick; this session just turned into a rather pronounced seminar for Emulex. Granted, we got some good information about practices for allocating storage for virtual servers, but not a lot regarding best practices for actually configuring VM hosts to use FC for storage and for allocating the VMs themselves.
Next seminar will be Introducing MS Assessment and Planning Solution Accelerator for Windows Server 2008, Hyper-V and Virtual Server 2005 R2.

Posted from MS TechEd, 2008

Tech Ed, Day 2

Good tool for doing inventory of your hardware: http://microsoft.com/map
VM Migration
Quick Migration: save state, move VM, restore state. Certainly not an HA solution, but it’s tolerable; to clients, it just looks like someone briefly pulled the network cable.
Quick Migration takes about 8 seconds for a VM with 512MB of RAM on 1Gb iSCSI (a clean pipe, I imagine), and about 32 seconds for a 2GB RAM VM. FC is faster, of course.
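Doing the arithmetic on those figures (my math, not the presenter’s), both cases work out to roughly the same effective throughput, about half of gigabit wire speed once save/restore overhead is included, so the pause scales roughly linearly with guest RAM:

```python
# Sanity-check the quoted Quick Migration times: the pause is dominated
# by copying the VM's saved memory state across the storage link.
# RAM sizes and times are the session's figures; the throughput math is mine.

GIGABIT_MB_S = 125  # 1 Gb/s is roughly 125 MB/s of raw wire speed

for ram_mb, observed_s in ((512, 8), (2048, 32)):
    effective = ram_mb / observed_s
    print(f"{ram_mb:>4} MB RAM in {observed_s:>2}s -> ~{effective:.0f} MB/s "
          f"({effective / GIGABIT_MB_S:.0%} of gigabit wire speed)")

# Both come out near 64 MB/s, which is why a faster interconnect (FC)
# shortens the blackout window.
```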
There’s a wizard that can check your virtual host cluster to make sure it can sustain a node failure.
Required Features & Required Software
It appears we would need a suite of products to make VMs do what MS is promising. So, to get backup, hardware and workload provisioning, patching, monitoring, and disaster recovery, one would need MS System Center Data Protection Manager, Virtual Machine Manager, Operations Manager 2007, and Configuration Manager.
Virtual Machine Manager can review your hardware to identify virtualization candidates. It does not look at VM hosts or servers that have already been virtualized.

Reporting also looks at VM allocation and utilization as well as host utilization.
Monitoring
Provides discovery of hosts, VMs, and Virtual Machine Manager components; performance and health monitoring; application awareness; and resource calibration and optimization.
Performance and Resource Optimization (PRO)
Can grab specific alerts from Operations Manager and can provide recommendations for corrective or optimization actions.
They illustrated a workload on a physical host becoming degraded; Operations Manager sees the degraded state and moves the virtual machine to a new host to take advantage of available hardware resources.
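Conceptually, PRO is just a feedback loop: Operations Manager raises an alert, PRO maps it to a recommendation, and VMM carries out the migration. A minimal sketch of that loop (invented names and thresholds, not the actual System Center API):

```python
# Minimal sketch of the PRO loop described above: an alert about a degraded
# host yields a recommendation to migrate its busiest VM to the healthy
# host with the most headroom. Everything here is hypothetical.

def recommend_migration(alert, hosts):
    """Return (vm_name, target_host) or None if no action is warranted."""
    if alert["severity"] != "critical":
        return None
    source = hosts[alert["host"]]
    vm = max(source["vms"], key=lambda v: v["cpu_pct"])        # hungriest VM
    others = {n: h for n, h in hosts.items() if n != alert["host"]}
    target = min(others, key=lambda n: others[n]["cpu_pct"])   # most headroom
    return vm["name"], target

hosts = {
    "hostA": {"cpu_pct": 95, "vms": [{"name": "sql01", "cpu_pct": 70},
                                     {"name": "web01", "cpu_pct": 20}]},
    "hostB": {"cpu_pct": 30, "vms": []},
}
alert = {"host": "hostA", "severity": "critical"}
print(recommend_migration(alert, hosts))   # ('sql01', 'hostB')
```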
I still keep coming back to: storage. How can we be sure that storage will be stable, reliable, and efficient enough to host a VM farm? That will be the single point of failure.

Posted from MS TechEd, 2008

Tech Ed, End of Day 1

I did a couple of virtualization seminars today and am intrigued. I
was considering moving entirely away from MS and over to VMWare’s ESX
with Virtual Center (as well as VCB, HA, etc) but would like to spend
some time with the new MS virtualization platform.

In another post, I think I mentioned that MS figures the VMWare
solution to be about $60K for a full software roll out for five hosts
— hardware not included, of course — while the Microsoft
Virtualization software would run about $20K. Substantially less, but
I don’t know yet if it would be optimal.

No doubt VMWare will be lowering their prices to stay competitive with
Microsoft.

The MS virtualization concept consists, of course, of virtualizing
servers, which we already have started doing. They also have begun
promoting application virtualization, as well as presentation, desktop,
and profile virtualization.

For profiles, think: Roaming Profiles, but without all of the space
requirements. Rather, with the MS profile virtualization, you gather
the users’ settings, personalizations, etc, then replicate those
server-side whenever they’re connected to the network. If they should
lose their physical machine, you can give them a new one and all of
their settings from their last sync are automatically imported. If
you add application virtualization for those users with desktop
computers (or for apps that you want to stay on the LAN — be it
physically or through VPN) there will be nearly zero downtime for the
user.
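The mechanics boil down to mirroring the user’s settings to a server share whenever the machine can see the network. A bare-bones illustration of that idea (the paths and the connectivity check are made up; this is not how the MS feature is actually implemented):

```python
# Bare-bones illustration of profile virtualization's core idea: when the
# machine is on the corporate network, mirror the user's settings to a
# server-side copy so a replacement machine can pull them back down later.
# The share path and the reachability check are invented for this sketch.

import shutil
import socket
from pathlib import Path

SETTINGS_DIR = Path.home() / "AppData" / "Roaming"       # what gets replicated
SERVER_COPY = Path(r"\\fileserver\profiles$") / "jdoe"   # hypothetical share

def on_corp_network(host: str = "fileserver", port: int = 445) -> bool:
    """Crude check: can we reach the file server's SMB port?"""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

def sync_profile() -> None:
    if not on_corp_network():
        return  # offline: nothing to do, the local copy wins for now
    # Replace the server-side snapshot with the current local settings.
    shutil.copytree(SETTINGS_DIR, SERVER_COPY, dirs_exist_ok=True)

if __name__ == "__main__":
    sync_profile()
```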

I’m liking that and I’m not even on the corp admin team.

I still haven’t seen anything that would make patch management or
updates any easier than GFI. Actually, I haven’t found anything for that
at all, so we may be stuck with GFI. On the plus side, when it works,
it does very well — we’ve just been having some stability issues with
it not finishing scans of our domains.


Posted from MS TechEd, 2008

Virtualization

Application Virtualization

– Isolation of apps

– No interoperability issues between apps; eliminates app conflicts

– Will ultimately be able to run apps built for newer OS levels

Presentation virtualization

Profile Virtualization

– Makes profiles/interface customization portable

System Center manages both physical and virtual assets, across all types of virtualization

Looks like reliable shared storage is our biggest obstacle to rolling
virtualization into our dev/QA environments.

Presenter noted that many desktop apps are moving back to the
server… Apparently, the mainframe is coming back. Methinks this is
an excellent idea.

On app virtualization:

A mobile user can be provisioned through AD. Everything is replicated
back to a server, so if they lose their machine, you give them a new
one and they’re up and running immediately.

You can make apps local (LAN) only or unique to their (v)machine.

Can also be set up to allow a contract worker to use a localized,
hosted VM or just an app.

Demo!


TechEd, 2008
Via iPod Touch