Showing posts with label Hyper-V.

Friday, April 30, 2010

Hyper-V Best Practices Analyzer

Microsoft has released a Best Practices Analyzer for Hyper-V. It is available for Windows Server 2008 R2 only and can be found at the following link:
http://support.microsoft.com/kb/977238/en-us

Monday, March 8, 2010

Hotfix : Windows 2008 R2 + Hyper-V + Intel Nehalem = Blue Screen

If you have Windows Server 2008 R2 with Hyper-V running on a Nehalem CPU (Intel Xeon 5500 series or Core i series) and are getting blue screens with the 0x00000101 - CLOCK_WATCHDOG_TIMEOUT stop code, then you will definitely need to check the KB below:

http://support.microsoft.com/default.aspx/kb/975530

Before applying the hotfix to your platform, take into account this warning from Microsoft:

Apply this hotfix only to systems that are experiencing the problem described in this article. This hotfix might receive additional testing. Therefore, if you are not severely affected by this problem, we recommend that you wait for the next software update that contains this hotfix.

Monday, March 1, 2010

Windows 2008 R2 Migration Utilities for Hyper-V

The Windows Server 2008 R2 migration utilities have been updated with support for Hyper-V. It is now possible to migrate your Hyper-V setup, including VMs, virtual switches, VMQ and Chimney settings, to a Windows Server 2008 R2 host. Migration is supported from:

Windows Server 2008 x64 (Full installation only)
Windows Server 2008 R2 x64 (Core or Full installation)

to:

Windows Server 2008 R2 x64 (Core or Full installation)

According to the documentation, the scenarios that are not supported are:
  • The saved state of a virtual machine under one of the following conditions:
    • When moving from Hyper-V in Windows Server 2008 to Hyper-V in Windows Server 2008 R2.
    • When moving between physical computers that have different processor steppings or vendors—for example, migrating from a computer with an Intel processor to a computer with an AMD processor.
  • Virtual machine configuration under one of the following conditions:
    • When the number of virtual processors configured for the virtual machine is more than the number of logical processors on the destination server.
    • When the memory configured for a virtual machine is greater than the available memory on the destination server.
  • Consolidation of physical servers to virtual machines, or consolidation of multiple instances of Hyper-V to one instance.

Documentation:

Friday, February 26, 2010

Hyper-V Live Migration Network Configuration Guide from Microsoft

Microsoft has just released a network configuration guide for the Windows Server 2008 R2 Hyper-V Live Migration feature. It's a short but useful article, at least from a design perspective.
This guide describes how to configure your network to use the live migration feature of Hyper-V™. It provides a detailed list of the networking configuration requirements for optimal performance and reliability, as well as recommendations for scenarios that do not meet these requirements.
 http://technet.microsoft.com/en-us/library/ff428137(WS.10).aspx

Sunday, February 14, 2010

Microsoft recommends Increasing VMBus buffer size on Hyper-V for better network throughput

I read an article on the Windows Server Performance team blog. Basically, it recommends increasing the VMBus buffer size from 1 MB to 2 MB to get better network throughput and a lower chance of packet loss for VM guest NICs on Hyper-V.
"Your workloads and networking traffic may not need increased buffers; however, these days, 4Mb of RAM isn’t a tremendous amount of memory to invest as an insurance policy against packet loss. Now, if only I could increase a few buffers and alleviate congestion on my daily commute!"
http://blogs.technet.com/winserverperformance/archive/2010/02/02/increase-vmbus-buffer-sizes-to-increase-network-throughput-to-guest-vms.aspx

In order to make the change (from the source above):
On the guest OS, in the Network Adapter Properties dialog, select the Details tab. Select Driver Key in the Property pull-down menu, as shown in figure 1:

 
Record the GUID\index found in the Value box, as shown in figure 1 above. Open regedit and navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{GUID}\{index}, as shown in figure 2:
 
Right-click the index number and create two new DWORD values named ReceiveBufferSize and SendBufferSize (see figure 3). These values specify the memory allocated to the buffers in 1 KB units, so 0x400 equates to 1,024 KB of buffer space (the default is 640 buffers). In this example we've doubled the buffer size to 0x800, or 2,048 KB of memory, as shown in figure 3:
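
For reference, here is a minimal PowerShell sketch of the same registry change, to be run inside the guest OS. The driver key shown is only an example; substitute the GUID\index value you recorded from the adapter's Details tab:

# Example only: replace this value with the Driver Key recorded from the adapter.
$driverKey = '{4d36e972-e325-11ce-bfc1-08002be10318}\0007'
$regPath = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\$driverKey"

# 0x800 = 2,048 KB of buffer space (double the 0x400 example above)
New-ItemProperty -Path $regPath -Name ReceiveBufferSize -PropertyType DWord -Value 0x800 -Force
New-ItemProperty -Path $regPath -Name SendBufferSize -PropertyType DWord -Value 0x800 -Force

# A restart of the guest (or disabling/re-enabling the adapter) is typically
# needed before the new buffer sizes take effect.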
 

Tuesday, February 9, 2010

Vulnerability in Windows Server 2008 Hyper-V Could Allow Denial of Service - 977894

A new security bulletin from Microsoft was published today. This DoS vulnerability affects the x64 editions of Windows Server 2008 and Windows Server 2008 R2, including Core installations.
http://www.microsoft.com/technet/security/Bulletin/MS10-010.mspx
This security update resolves a privately reported vulnerability in Windows Server 2008 Hyper-V and Windows Server 2008 R2 Hyper-V. The vulnerability could allow denial of service if a malformed sequence of machine instructions is run by an authenticated user in one of the guest virtual machines hosted by the Hyper-V server. An attacker must have valid logon credentials and be able to log on locally into a guest virtual machine to exploit this vulnerability. The vulnerability could not be exploited remotely or by anonymous users.

Wednesday, February 3, 2010

Hyper-V Memory Overcommitment in new Service Pack for Windows 2008 R2

One of the features in VMware Infrastructure that has been missing from Hyper-V is over-provisioning of memory resources, also known as memory overcommitment. A leaked screenshot on Softpedia shows that a dynamic memory management feature is about to be included in the next Windows Server 2008 R2 build.

http://news.softpedia.com/news/The-Windows-8-Start-Post-RTM-Windows-7-Build-6-1-7700-0-100122-1900-133746.shtml

Windows 2008: Modifying Network Bindings from CLI

Microsoft has just released a tool called nvspbind. For our mass deployments I had been using a PowerShell script I wrote to change network bindings on specific interfaces (disable IPv6, File and Printer Sharing, etc.). That really takes a lot of effort (fetching the registry hive, modifying it, making queries to the INetCfg classes and so on).

With this tool it is now possible to do this from the CLI. It can also change the NIC binding order for specific protocols.

http://code.msdn.microsoft.com/nvspbind

Parameters are as below:

C:\>nvspbind /?

Hyper-V Network VSP Bind Application 6.1.7690.0.

Copyright (c) Microsoft Corporation. All rights reserved.

Usage: nvspbind [option] [NIC|*] [protocol|*]

Options:

   /n   display NIC information only
   /u   unbind switch protocol from specified nic(s)
   /b   bind switch protocol to specified nic(s)
   /d   disable binding of specified protocol from specified nic(s)
   /e   enable binding of specified protocol to specified nic(s)
   /r   repair bindings on specified nic(s)
   /o   show NIC order for specified protocol
   /+   move specified NIC up in binding order for specified protocol
   /-   move specified NIC down in binding order for specified protocol

Most options are documented in the readme which downloads with the install.

The NIC connection order options (o, + and -) show the NIC connection order, move NICs up and move NICs down.
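
As a hypothetical example based on the usage line above, the command below should disable the IPv6 binding (component ID ms_tcpip6) on every NIC; run /n first to list the NICs and the exact component names on your system:

nvspbind /n
nvspbind /d * ms_tcpip6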

Monday, December 28, 2009

"Server manager has detected that the processor on this computer is not compatible with Hyper-V" Error While Enabling Hyper-V Role

If you receive the below error during the installation of Hyper-V role :

"Server manager has detected that the processor on this computer is not compatible with Hyper-V.  To install this roll, the processor must have a supported version of hardware-assisted virtualization and that feature must 
be turned on in the BIOS."


First make sure you have Hardware-Assisted Virtualization and DEP turned on in the BIOS (a quick way to verify this from within Windows is shown at the end of this post). If you still see the error message after enabling those settings, make sure you didn't enable the role using:

start /w ocsetup Microsoft-Hyper-V

If so, you need to uninstall it first (start /w ocsetup Microsoft-Hyper-V /uninstall) and then install it using the GUI (Server Manager -> Add Roles).
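
As a side note (not from the original post), the Sysinternals Coreinfo utility can be downloaded and run to verify from within Windows that the processor actually exposes hardware-assisted virtualization and the NX bit used by DEP:

coreinfo -v     (virtualization-related features such as VMX/SVM and SLAT)
coreinfo        (full feature list, including NX, which DEP relies on)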

Friday, October 30, 2009

SCVMM Unknown Error 0x80338104

If you get the below error message on SCVMM while connecting to your hosts :

VMM does not have appropriate permissions to access the WSMan resources on the vmtest server.
 (Unknown error (0x80338104))
Make sure the service account for VMM is a member of the local Administrators group on each host you manage, and that this membership is not blocked by Group Policy enforcement.
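
For example (the account name below is hypothetical), the following command run on each host adds the VMM service account to the local Administrators group:

net localgroup Administrators CONTOSO\svc-vmm /add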

Saturday, October 17, 2009

SCVMM Error 403 : %ComputerName; is not a valid network computer name

If you get the error below while adding your Hyper-V hosts to your VMM inventory:

Error 403 : %ComputerName; is not a valid network computer name

please make sure the DNS entry for that host resolves correctly. In particular, check the local hosts file for any incorrect static entries (that was the problem in my case).
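
A quick way to check (the host name below is hypothetical) is to compare DNS resolution with the contents of the local hosts file on the VMM server:

nslookup hypervhost01.contoso.local
notepad %SystemRoot%\System32\drivers\etc\hosts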

Friday, August 28, 2009

The battle of hypervisor footprints

Microsoft's response to the statement on VMware's official site saying that Hyper-V has a bigger footprint than ESXi:

http://blogs.technet.com/virtualization/archive/2009/08/12/hypervisor-footprint-debate-part-1-microsoft-hyper-v-server-2008-vmware-esxi-3-5.aspx

http://blogs.technet.com/virtualization/archive/2009/08/14/hypervisor-footprint-debate-part-2-windows-server-2008-hyper-v-vmware-esx-3-5.aspx

http://blogs.technet.com/virtualization/archive/2009/08/17/hypervisor-footprint-debate-part-3-windows-server-2008-hyper-v-vmware-esxi-3-5.aspx

  • Hyper-V Server 2008 vs ESXi 3.5 | June 2008 - June 2009
    • Hyper-V: 82 MB footprint increase with 26 patches
    • ESXi: 2.7 GB footprint increase with 13 patches
  • Windows Server 2008 Hyper-V vs ESX 3.5 | January 2008 - June 2009
    • Hyper-V: 408 MB footprint increase with 32 patches
    • ESX: 3 GB footprint increase with 85 patches
  • Windows Server 2008 Hyper-V vs ESXi 3.5 | January 2008 - June 2009
    • Hyper-V: 408 MB footprint increase with 32 patches
    • ESXi: 2.7 GB footprint increase with 13 patches

And now VMware's official reply:

I'm leaving the final decision to you :)

What's new in SCVMM 2008 R2

Support for new features of Windows Server 2008 R2

  • Live Migration: Seen through the VMM console, this enables administrators to move a virtual machine between clustered hosts in a way that is completely transparent to the users connected to the virtual machine. This allows administrators greater flexibility in responding to planned downtime and provides higher machine availability. The basic requirements for Live Migration are that all hosts must be part of a Windows Server 2008 R2 failover cluster and host processors must be from the same manufacturer. Additionally all hosts in the cluster must have access to shared storage. No changes are required to existing virtual machines, network, or storage devices in moving from Quick Migration to Live Migration other than upgrading to  Windows Server 2008 R2 and VMM 2008 R2.
  • Hot addition/removal of storage: Allows the addition and removal of storage to the virtualized infrastructure without interruption. Additionally, "live" management of virtual hard disks (VHDs) or iSCSI pass-through disks allows administrators to take advantage of additional backup scenarios and readily use mission-critical and storage-intensive applications.
  • New optimized networking technologies: VMM 2008 R2 supports two new networking technologies – Virtual Machine Queue (VMQ) and TCP Chimney – providing increased network performance while creating less of a CPU burden. NICs that support VMQ, create a unique virtual network queue for each virtual machine on a host that can pass network packets directly from the hypervisor to the virtual machine. This increases throughput as it bypasses much of the processing normally required by the virtualization stack. With TCP Chimney, TCP/IP traffic can be offloaded to a physical NIC on the host computer reducing CPU load and improving network performance.

Enhanced storage and cluster support

  • Clustered Shared Volumes (CSV): Provides a single, consistent storage space that allows hosts in a cluster to concurrently access virtual machine files on a single shared logical unit number (LUN). CSV eliminates the previous one virtual machine per LUN restriction and coordinates the use of storage with much greater efficiency and higher performance. CSV enables the Live Migration of virtual machines without impacting other virtual machines sharing the same LUN. Enabling CSV on failover clusters is straightforward; many storage configuration complexities prior to CSV have now been eliminated.
  • SAN migration into and out of clustered hosts: This allows virtual machines to migrate into and out of clusters using a SAN transfer, which saves the time required for copying the virtual machine file over the network.
  • Expanded Support for iSCSI SANs: Previously, only one LUN could be bound to a single iSCSI target whereas now – with support now built into VMM 2008 R2 – multiple LUNS can be mapped to a single iSCSI target. This provides broader industry support for iSCSI SANs allowing customers more flexibility in choosing storage providers and iSCSI SAN options.
  • Storage Migration: Quick Storage Migration enables migration of a VM’s storage both within the same host and across hosts while the VM is running with a minimum of downtime, typically less than 2 minutes. VMM 2008 R2 also supports VMware storage vMotion which allows the storage of a VMware VM to be transferred while the VM remains on the same host with no downtime.
  • Rapid Provisioning:  Allows administrators to take advantage of SAN provider technologies to clone a LUN containing a VHD and present it to the host while still utilizing the VMM template so the OS customization and IC installation can be applied.
  • Support for third party CFS: For users requiring a true clustered file system, VMM 2008 R2 supports third party file systems by detecting CFS disks and allows for deploying multiple VMs per LUN.
  • Support for Veritas Volume Manager: VMM 2008 R2 recognizes Veritas Volume Manager disks as a cluster disk resource.

Streamlined process for managing host upgrades:

  • Maintenance Mode: Allows administrators to apply updates or perform maintenance on a host server by safely evacuating all virtual machines to other hosts on a cluster. Maintenance mode can be configured to use Live Migration to move the virtual machines or can put the workloads into a saved state to be safely reactivated when maintenance or upgrades are complete. Maintenance mode is enabled for all supported hypervisor platforms on Windows Server 2008 R2.

Other VMM 2008 R2 enhancements

  • Support of disjoint domains: Reduces the complexity of reconciling host servers with differing domain names in Active Directory and DNS. In these situations, VMM 2008 R2 automatically creates a custom service principal name (SPN) configured in both AD and DNS allowing for successful authentication.
  • Use of defined port groups with VMware Virtual Center: On installation, VMM 2008 R2 will present available port groups for VMM’s use with VMware vCenter thus allowing administrators to maintain control over which port groups are used.
  • Queuing of Live Migrations: This feature enables users to perform multiple Live Migrations without needing to keep track of other Live Migrations happening within the cluster. VMM detects when a Live Migration will fail because another Live Migration is already in progress, and queues the request for later.
  • Host compatibility checks: VM migration requires host hardware to be compatible; this feature provides a deep compatibility check using the Hyper-V and VMware compatibility check APIs. Administrators can check whether the source host is compatible with the destination host before performing a migration, rather than finding out afterwards that the VM cannot start on the new host. A related feature turns off certain CPU features to make a VM compatible with the hosts in the cluster.

Thursday, August 20, 2009

Required Local OS Firewall Rules for SCVMM and Hyper-V Host Communication

Communication Details for Hyper-V & SCVMM

In order to manage Hyper-V hosts using SCVMM, the ports/protocols below should be open on the firewall.

VMM Server

80 (HTTP, WS-MAN)
443 (HTTPS, BITS)
8100 (WCF Connections to PowerShell or Admin Console)

SQL Server

1433 (Remote SQL instance connection)
1434 (SQL browser service) - only needed for initial setup

Host / Library

80 (HTTP, WS-MAN)
443 (HTTPS, BITS)
3389 (RDP)
2179 (VMConnect on Hyper-V hosts for single-class console view)
5900 (VMRC on Virtual Server hosts)

The list of all ports and protocols can be found in the official MS document :

http://technet.microsoft.com/en-us/library/cc764268.aspx

Most of the firewall rules above have already been created by the SCVMM installer and by the role setup wizards for IIS and Hyper-V.

Additionally, during deployment of the SCVMM agent to the Hyper-V host, the SMB-In rule (TCP 445) should be enabled on the Hyper-V host, because the agent installer file is copied to the ADMIN$ share of the Hyper-V host.
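
As a suggestion (not from the original document), the built-in "File and Printer Sharing" rule group covers the SMB-In (TCP 445) rule and can be enabled on the Hyper-V host in the same way as the rule groups shown in the next section:

netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes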

Necessary Configuration For Remote Management

General Rule Groups You Must Enable in Windows Firewall to Allow Remote Management by an MMC Snap-in


In order to manage Hyper-V hosts remotely, enable the rule groups below:

netsh advfirewall firewall set rule group="Windows Firewall Remote Management" new enable=yes

netsh advfirewall firewall set rule group="Remote Administration" new enable=yes

For Device Manager, apart from the rule groups above, you also need to enable the Group Policy setting:

Allow remote access to the PnP interface

For Disk Management:

Make sure the Virtual Disk Service (VDS) is running and set to start automatically. Also enable the rule group below:

netsh advfirewall firewall set rule group="Remote Volume Management" new enable=yes
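
On Server Core, the VDS service itself can also be configured from the command line, for example:

sc config vds start= auto
sc start vds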

Also, in order to make the HP System Management Homepage available, enable TCP port 2381 in the Hyper-V host's inbound rules.
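
A sketch of such an inbound rule (the rule name is arbitrary):

netsh advfirewall firewall add rule name="HP System Management Homepage" dir=in action=allow protocol=TCP localport=2381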

Summary of Local Firewall Rules 

The images below show all rules enabled on the SCVMM and Hyper-V hosts to make remote management possible. The default outbound rule for all profiles is "Allow", which is why only the INBOUND rules are shown here.

SCVMM Input :


Hyper-V Input :


Monday, August 17, 2009

Hyper-V Tagging & Teaming with HP NCU

Yes, finally the first real post of my blog :) This article summarizes NIC teaming & tagging support on Hyper-V. The scenarios have been tested on HP blade systems with the HP Network Configuration Utility (NCU). Windows Server 2008 Datacenter Core Edition has been used for the parent partition.

In order to check VLAN tagging with teaming, two scenarios have been tested:

1. NIC teaming and tagging with HP NCU (works)
2. NIC teaming with NCU and tagging at the Hyper-V level (does not work)

As stated above, only the first scenario works. This scenario creates a lot of adapter overhead at the OS level. For instance, let's assume you have 2 physical interfaces which are teamed and you create 4 VLANs on top. After making the necessary configurations you have:

2 interfaces for the actual pNICs
1 interface for the teamed NIC
4 interfaces for the VLANs
4 interfaces for the virtual switches

That is 11 interfaces in total, which creates some management overhead, but this is currently the only scenario supported by Hyper-V.

Also, with this setup the parent partition always has Layer 2 access to all VLANs, because the virtual network adapter at the parent partition level is connected to the virtual switch by default. In order to create an external network without the parent partition attached, you can use the PowerShell scripts mentioned on the pages below.

http://blogs.msdn.com/virtual_pc_guy/archive/2009/02/19/script-creating-an-external-only-virtual-network-with-hyper-v.aspx

http://blogs.msdn.com/robertvi/archive/2008/08/27/howto-create-a-virtual-swich-for-external-without-creating-a-virtual-nic-on-the-root.aspx

Also, after creating a virtual network you can disable this virtual interface. On Server Core:

netsh interface show interface
netsh interface set interface name="Name of Interface" disabled

HYPER-V NETWORKING

In order to understand the networking logic in Hyper-V, it's strongly recommended to check the document below:

http://www.microsoft.com/DOWNLOADS/details.aspx?FamilyID=3fac6d40-d6b5-4658-bc54-62b925ed7eea&displaylang=en

[Diagram from the Microsoft document: Hyper-V virtual networking architecture]

As shown in the diagram above, when you bind a virtual network to a physical interface, a virtual network adapter is created at the OS level. This virtual adapter carries all the network bindings, such as TCP/IP. After this operation, the existing network adapter for the pNIC is bound only to the Hyper-V Virtual Switch protocol.

In order to make OS-level applications work over the newly created virtual adapter, make sure the appropriate tagging has also been configured at the host level.

IMPORTANT NOTE: Make sure you don't create a virtual switch on the pNIC that is used for communication between SCVMM and the Hyper-V host. Leave at least one NIC or teamed interface for this communication.

NIC Teaming and Tagging with HP NCU

Hyper-V has NO teaming capability at the hypervisor level, unlike VMware ESX/ESXi, as mentioned in KB968703 (http://support.microsoft.com/kb/968703):

Since Network Adapter Teaming is only provided by Hardware Vendors, Microsoft does not provide any support for this technology thru Microsoft Product Support Services. As a result, Microsoft may ask that you temporarily disable or remove Network Adapter Teaming software when troubleshooting issues where the teaming software is suspect.

If the problem is resolved by the removal of Network Adapter Teaming software, then further assistance must be obtained thru the Hardware Vendor.

This support has to be provided at the hardware level. For HP, we used HP NCU for teaming.

IMPORTANT NOTE: HP NCU has to be installed AFTER enabling the Hyper-V role.

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01663264/c01663264.pdf

http://support.microsoft.com/kb/950792

OS Level Settings For Teaming+Tagging

In order to test Hyper-V with teaming + tagging:

1. Windows Server 2008 Datacenter Core installed.
2. Hyper-V role activated with the necessary KB updates:

http://support.microsoft.com/?kbid=950050
http://support.microsoft.com/?kbid=956589
http://support.microsoft.com/?kbid=956774

3. NFT-based (Network Fault Tolerance) teaming has been configured using the HP teaming utility.

4. NCU installed together with the Broadcom and Intel drivers.


5. VLANs 1, 1101, 1102, 1103 and 1104 have been set up on the teamed interface.


Hyper-V Level Settings For Teaming+Tagging

1. Create a virtual network on the Hyper-V host for each VLAN and bind each network to the corresponding tagged logical interface created by NCU.


NOTE: The "Access host through VLAN" option enables the parent partition to talk on that VLAN.

2. On the host, two VMs have been created for testing. Each VM has been connected to a different virtual switch, as below:


3. After setting up tagging at both the host and VM level, ping between different VLANs is possible. (The physical switch has been configured for inter-VLAN routing.)

NIC Teaming with NCU and Tagging at Hyper-V Level

1. Windows Server 2008 Datacenter Core installed.
2. Hyper-V role activated with the necessary KB updates:

http://support.microsoft.com/?kbid=950050
http://support.microsoft.com/?kbid=956589
http://support.microsoft.com/?kbid=956774

3. NCU installed together with the Broadcom and Intel drivers.
4. Only teaming has been configured with NCU.
5. A virtual switch has been created at the Hyper-V level and the necessary tagging applied to the host virtual adapter.
6. Guest virtual machines have also been configured with tagged vNICs.
7. Network connectivity between the VMs does NOT work.