
Hyper-V : Network Design, Configuration and Prioritization : Guidance

July 8, 2011

There are a lot of posts about Hyper-V networking, but I have found that many people still don’t get it.

1. Network Design. How many NICs do we need in a production environment for high availability?

  • 1 for Management. Microsoft recommends a dedicated network adapter for Hyper-V server management.
  • 2 (teamed) for Virtual machines. Virtual network configurations of the external type require a minimum of one network adapter.
  • 2 (MPIO) for iSCSI. Microsoft recommends that IP storage communication have a dedicated network, so one adapter is required and two or more are necessary to support multipathing.
  • 1 for Failover cluster. A Windows failover cluster requires a private network.
  • 1 for Live migration. This new Hyper-V R2 feature supports the migration of running virtual machines between Hyper-V servers. Microsoft recommends a dedicated physical network adapter for live migration traffic, separate from the network for private communication between the cluster nodes, from the virtual machine network, and from the storage network.
  • 1 for CSV. Microsoft recommends a dedicated network to support the communications traffic created by this new Hyper-V R2 feature. In the network adapter properties, Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks must be enabled to support SMB.

But what about production environments where the blades have only four physical NICs?

Option 1. If your vendor supports NPAR technology (Broadcom, QLogic), you can create up to four “virtual logical NICs” per physical NIC, each with its own VLAN and QoS settings. Although this solution is not supported by Microsoft, it is the best solution in terms of performance, and it is supported by the vendors. It will give you 100% HA, as you can have up to 16 logical NICs.

Option 2. Supported by Microsoft. Allocate two NICs for iSCSI using MPIO, and then:

Host configuration: 2 network adapters with 10 Gbps (a supported configuration):

  • Virtual machine access: virtual network adapter 1
  • Management: virtual network adapter 1, with bandwidth capped at 1%
  • Cluster and Cluster Shared Volumes: network adapter 2
  • Live migration: network adapter 2, with bandwidth capped at 50%

Note that the QoS configuration is applied per port, and Windows only allows you to specify caps, not reserves (on a 10 Gbps adapter, the 1% management cap works out to roughly 100 Mbps, and the 50% live migration cap to 5 Gbps). This solution, although supported by Microsoft, does not give you 100% HA.

2. Network Configuration. What needs to be enabled/disabled? (A PowerShell sketch for scripting the recurring per-NIC settings follows the table.)

Management Network (Parent Partition): 1 network card
  • Make sure this card is listed first in the Adapters and Bindings connection order.
  • In Failover Cluster Manager make sure that the NIC is configured to allow cluster network communication on this network. This will act as a secondary connection for the Heartbeat.
Storage (iSCSI): 2 network cards, not teamed
  • Enable MPIO.
  • Disable NetBIOS on these interfaces
  • Do not configure a Gateway
  • Do not configure a DNS server
  • Make sure that each NIC is NOT set to register its connection in DNS
  • Remove File and Printer sharing
  • Do not remove Client for Microsoft Networks if you are using NetApp SnapDrive with RPC authentication
  • In Failover Cluster Manager, select “Do not allow cluster network communication on this network”
VM Network (Parent Partition): 2 network cards (1 for dynamic IPs, 1 for reserved IPs)
  • Disable NetBIOS on these interfaces
  • Do not configure a Gateway
  • Do not configure a DNS server
  • Make sure that each NIC is NOT set to register its connection in DNS
  • Remove File and Printer Sharing and Client for Microsoft Networks
  • In Failover Cluster Manager, select “Do not allow cluster network communication on this network”
Cluster Heartbeat: 1 network card
  • Disable NetBIOS on this interface
  • Do not configure a Gateway
  • Do not configure a DNS server
  • Make sure that this NIC is NOT set to register its connection in DNS
  • Make sure that Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks are enabled to support Server Message Block (SMB), which is required for CSV.
  • In Failover Cluster Manager make sure that the NIC is configured to allow cluster network communication on this network.
  • In Failover Cluster Manager, clear the “Allow clients to connect through this network” check box. This setting has nothing to do with the host/parent partition; it controls over which NICs the cluster resources can be accessed.
Cluster Shared Volumes (CSV): 1 network card
  • Disable NetBIOS on this interface
  • Make sure that this NIC is NOT set to register its connection in DNS
  • Make sure that Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks are enabled to support Server Message Block (SMB), which is required for CSV.
  • In Failover Cluster Manager, clear the “Allow clients to connect through this network” check box. This setting has nothing to do with the host/parent partition; it controls over which NICs the cluster resources can be accessed. This is more relevant for other workloads, e.g. a file cluster. It has no impact on the communication with the host partition or on the VMs themselves.
  • By default the cluster will automatically choose the NIC to be used for CSV communication. We will change this later.
  • This traffic is not routable and has to be on the same subnet for all nodes.
Live Migration: 1 network card
  • Disable NetBIOS on this interface
  • Make sure that this NIC is NOT set to register its connection in DNS.
  • In Failover Cluster Manager, clear the “Allow clients to connect through this network” check box. This setting has nothing to do with the host/parent partition; it controls over which NICs the cluster resources can be accessed. This is more relevant for other workloads, e.g. a file cluster. It has no impact on the communication with the host partition or on the VMs themselves.
  • By default the cluster will automatically choose the NIC to be used for live migration. You can select multiple networks for live migration and give them a preference order.
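
Many of the recurring per-NIC settings above (disable NetBIOS, do not register the connection in DNS) can be scripted instead of clicked through. Here is a minimal PowerShell sketch using WMI, which works on W2008 R2; the connection-name filter "iSCSI" is only an assumption, so adjust it to your own NIC naming:

# Select the adapters whose connection name contains "iSCSI" (hypothetical naming)
$adapters = Get-WmiObject Win32_NetworkAdapter | Where-Object { $_.NetConnectionID -like "*iSCSI*" }

foreach ($adapter in $adapters) {
    foreach ($config in $adapter.GetRelated("Win32_NetworkAdapterConfiguration")) {
        $config.SetTcpipNetbios(2) | Out-Null                         # 2 = disable NetBIOS over TCP/IP
        $config.SetDynamicDNSRegistration($false, $false) | Out-Null  # do not register this connection in DNS
    }
}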


3. Network Prioritization. How does the cluster decide which network carries which traffic?

By default, all internal cluster networks have a metric value starting at 1000 and incrementing by 100. The first internal network the cluster sees when it comes online gets a metric of 1000, the second 1100, and so on.

When you create CSVs, the failover cluster automatically chooses the network that appears best for CSV communication. The lowest metric value designates the network for cluster and CSV traffic. The second lowest value designates the network for live migration. Additional networks with a metric below 10000 will be used as backup networks if the “Cluster & CSV Traffic” or “Live Migration Traffic” networks fail. The lowest network with a value of at least 10000 will be used for public traffic. Consider giving the highest possible values to the networks which you do not want any cluster or public traffic to go through, such as the iSCSI networks, so that they are never used, or only used when no other networks are available.

To view the networks, their metric values, and if they were automatically or manually configured, run the clustering PowerShell cmdlet:
PS > Get-ClusterNetwork | ft Name, Metric, AutoMetric

To change the value of a network metric, run:
PS > ( Get-ClusterNetwork “Live Migration” ).Metric = 800

If you want the cluster to start automatically assigning the metric again for the network named “Live Migration”:
PS > ( Get-ClusterNetwork “Live Migration” ).AutoMetric = $true
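
Putting the pieces together, here is a minimal sketch that pins each traffic type by metric. The network names "CSV", "Live Migration" and "iSCSI" are assumptions; substitute the names your cluster actually reports:

Import-Module FailoverClusters                          # clustering cmdlets
( Get-ClusterNetwork "CSV" ).Metric = 900               # lowest value: Cluster & CSV traffic
( Get-ClusterNetwork "Live Migration" ).Metric = 1000   # second lowest: live migration
( Get-ClusterNetwork "iSCSI" ).Metric = 20000           # very high: never used for cluster or public traffic
Get-ClusterNetwork | ft Name, Metric, AutoMetric        # verify the result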

How do you override the network prioritization behavior?

Option 1. Change the network’s properties. If you select “Do not allow cluster network communication on this network”, then it will not be possible to send any “Cluster & CSV Traffic” or “Live Migration Traffic” through this network, even if it has the lowest metric value. The cluster will honor this override and find the network with the next lowest value to send this type of traffic:

  1. In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
  2. Under Networks, right-click the network you want to change, and then select Properties.
  3. Change the radio buttons or checkboxes.

Option 2 (exclusively for “Live Migration Traffic”):

To configure a cluster network for live migration:

  1. In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
  2. Expand Services and applications.
  3. In the console tree (on the left), select the clustered virtual machine for which you want to configure the network for live migration.
  4. Right-click the virtual machine resource displayed in the center pane (not on the left), and then click Properties.
  5. Click the Network for live migration tab, and select one or more cluster networks to use for live migration. Use the buttons on the right to move the cluster networks up or down, so that a private cluster network is the most preferred. The default preference order is as follows: networks that have no default gateway are listed first; networks used by cluster shared volumes and cluster traffic are listed last. Live migration will be attempted in the order of the networks in the list. If the connection to the destination node over the first network is not successful, the next network in the list is used, until either the list is exhausted or a connection to the destination node succeeds.

Note: You don’t need to perform this action on a per-VM basis. When you configure a network for live migration for a specific virtual machine, the setting is global and therefore applies to all virtual machines.
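
Because the setting is global, you can also inspect it from PowerShell. A hedged sketch: on W2008 R2 the preference order is stored on the "Virtual Machine" cluster resource type, and the parameter name MigrationNetworkOrder below is from memory, so verify it against your own cluster before relying on it:

# List the live migration network order (a list of cluster network GUIDs, if I recall correctly)
Import-Module FailoverClusters
Get-ClusterResourceType "Virtual Machine" | Get-ClusterParameter MigrationNetworkOrder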

Some other interesting articles:

http://technet.microsoft.com/en-us/library/dd446679(WS.10).aspx

http://www.hyper-v.nu/archives/hvredevoort/2011/03/windows-server-2008-r2-sp1-and-hp-network-teaming-testing-results/

http://blogs.technet.com/b/vishwa/archive/2011/02/01/tuning-scvmm-for-vdi-deployments.aspx

http://blogs.msdn.com/b/clustering/archive/2011/06/17/10176338.aspx



I am Speaking at Tech.Ed Australia 2011

June 29, 2011

I am absolutely thrilled to announce that I will be presenting the following two sessions at Tech.Ed Australia 2011:

SCVMM 2012: Deployment, Planning, Upgrade

This session provides a scenario-rich, detailed walk-through of VMM 2012 deployment, planning, and upgrade scenarios. Come and learn how best to plan your next VMM rollout.

SCVMM 2012 Fabric Lifecycle: Networking and Storage

This session provides a scenario-rich, detailed walk-through of the new and more robust networking and storage features in VMM 2012. In this session you will learn how to discover, configure, and provision networking and storage fabric for use with the private cloud.

Come along! They will be excellent sessions.

Tech.Ed Australia 2011 is on the Gold Coast from 30 August to 2 September, and registrations are now open. Find out more at http://australia.msteched.com/

Hyper-V Backup software : Altaro

June 29, 2011

In January I was contacted by David Vella, CEO of Altaro, to provide some feedback about new Hyper-V backup software.

Altaro Hyper-V Backup works on Windows 2008 R2 (all editions, including Core installations) and should be installed on the Hyper-V host, not within the guest.

Yesterday I received a beta copy to test, and I will post my feedback here later. In the meantime, my colleague MVP Hans Vredevoort has posted a good review on his blog, with help from Femi Adegoke.

For Hans Vredevoort’s review:

http://www.hyper-v.nu/archives/hvredevoort/2011/05/altaro-hyper-v-backup-review/

Interested? You can download the installer at http://www.altaro.com/hyper-v-backup/. The download is only 14 MB.


Validate SCSI Device Vital Product Data (VPD) test fails after you install W2008 R2 SP1

June 22, 2011

If you encounter the error “Failed to get SCSI page 83h VPD descriptors for cluster disk <number> from <node name> status 2” after applying SP1 to your W2008 R2 cluster, Microsoft has released a fix for it.

The List Potential Cluster Disks storage validation test may display a warning message that resembles the following: “Disk with identifier <value> has a Persistent Reservation on it. The disk might be part of some other cluster. Removing the disk from validation set”

The hotfix resolves an issue in which the storage test incorrectly runs on disks that are online and not in the Available Storage group.

More details. The issue occurs when:

  • You configure a failover cluster that has three or more nodes running Windows Server 2008 R2 Service Pack 1 (SP1).
  • You have cluster disks that are configured in groups other than the Available Storage group, or that are used for Cluster Shared Volumes (CSV).
  • These disks are online when you run the Validate SCSI Device Vital Product Data (VPD) test or the List Potential Cluster Disks storage validation test.

More info : http://support.microsoft.com/kb/2531907
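
If you need to re-run validation on a live cluster before you can apply the hotfix, one workaround sketch is to skip the storage tests entirely. The node names below are placeholders, and skipping storage validation is only reasonable if storage has already been validated before:

Import-Module FailoverClusters
# Run cluster validation, but ignore the tests in the Storage category
Test-Cluster -Node node1, node2, node3 -Ignore "Storage"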


Plan your organization’s migration to a private cloud with the Hyper-V Cloud Fast Track Assessment!

June 17, 2011

Use the MAP Toolkit to plan your organization’s migration to a private cloud with the Hyper-V Cloud Fast Track Assessment!

New MAP features let you:

  • Build portfolios of web applications and databases to migrate to Windows Azure and SQL Azure.
  • Assess your environment’s readiness for Office 365 or Internet Explorer 9.
  • Identify and migrate databases from competing platforms like Oracle and MySQL to Microsoft SQL Server.
  • Consolidate your servers onto Hyper-V Cloud Fast Track infrastructures.

The beta of the MAP Toolkit v6.0 is now available. To get involved in the beta program, visit:

https://connect.microsoft.com/


SCVMM 2008 Ports and Protocols

June 7, 2011

SCVMM 2008, SCVMM 2008 R2 and SCVMM 2008 R2 SP1 default ports:

  • VMM server to VMM agent on Windows Server–based host (control): WS-Management, port 80. Change at VMM setup or in the registry.
  • VMM server to VMM agent on Windows Server–based host (file transfers): HTTPS (using BITS), port 443 (maximum value: 32768). Change in the registry.
  • VMM server to remote Microsoft SQL Server database: TDS, port 1433. Change in the registry.
  • VMM server to P2V source agent: DCOM, port 135. Change in the registry.
  • VMM Administrator Console to VMM server: WCF, port 8100. Change at VMM setup or in the registry.
  • VMM Self-Service Portal Web server to VMM server: WCF, port 8100. Change at VMM setup.
  • VMM Self-Service Portal to VMM self-service Web server: HTTPS, port 443. Change at VMM setup.
  • VMM library server to hosts: BITS, port 443 (maximum value: 32768). Change at VMM setup or in the registry.
  • VMM host-to-host file transfer: BITS, port 443* (maximum value: 32768). Change in the registry.
  • VMRC connection to Virtual Server host: VMRC, port 5900. Change in the VMM Administrator Console or in the registry.
  • VMConnect (RDP) to Hyper-V hosts: RDP, port 2179. Change in the VMM Administrator Console or in the registry.
  • Remote Desktop to virtual machines: RDP, port 3389. Change in the registry.
  • VMware Web Services communication: HTTPS, port 443. Change in the VMM Administrator Console or in the registry.
  • SFTP file transfer from VMware ESX Server 3.0 and VMware ESX Server 3.5 hosts: SFTP, port 22. Change in the registry.
  • SFTP file transfer from VMM server to VMware ESX Server 3i hosts: HTTPS, port 443. Change in the registry.

* VMM 2008 R2 uses port 30443 for host-to-host transfers (http://support.microsoft.com/kb/971816).

More info: http://technet.microsoft.com/en-us/library/cc764268.aspx
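
When a firewall sits between the VMM components, a quick way to confirm that one of these ports is reachable is a plain TCP connect from PowerShell. The server name vmm01.contoso.com and port 8100 below are placeholders; substitute your own VMM server and the port you want to test:

# Try a TCP connection to the VMM server on the Administrator Console port
$client = New-Object System.Net.Sockets.TcpClient
try {
    $client.Connect("vmm01.contoso.com", 8100)   # hypothetical server name, default WCF port
    "Connected: " + $client.Connected
}
finally {
    $client.Close()
}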


SCVMM 2012 Management ports and protocols. Detailed

June 7, 2011

Here is the list of ports and protocols for the new SCVMM 2012.

From the VMM management server:

  • To the P2V source agent (control channel): DCOM, port 135.
  • To a load balancer: HTTP/HTTPS, ports 80/443. Change in the load balancer configuration provider.
  • To the WSUS server (data channel): HTTP/HTTPS, ports 80/8530 (non-SSL) or 443/8531 (with SSL). These ports are the IIS port bindings with WSUS and cannot be changed from VMM.
  • To the WSUS server (control channel): HTTP/HTTPS, ports 80/8530 (non-SSL) or 443/8531 (with SSL). These ports are the IIS port bindings with WSUS and cannot be changed from VMM.
  • To the VMM agent on a Windows Server–based host (data channel for file transfers): HTTPS (using BITS), port 443 (maximum value: 32768).
  • To a Citrix XenServer host (customization data channel): iSCSI, port 3260. Change on the XenServer in the transfer VM.
  • To a XenServer host (control channel): HTTPS, port 5989. Change on the XenServer host in /opt/cimserver/cimserver_planned.conf.
  • To a remote Microsoft SQL Server database: TDS, port 1433.
  • To the VMM agent on a Windows Server–based host (control channel): WS-Management, port 5985. Change at VMM setup.
  • To the VMM agent on a Windows Server–based host (control channel, SSL): WS-Management, port 5986.
  • To the in-guest agent (VMM to virtual machine control channel): WS-Management, port 5985.
  • To the Storage Management Service: WMI, local call.
  • To the Cluster PowerShell interface: PowerShell, n/a.
  • To the P2V source agent (data channel): BITS, user-defined port. Change via the P2V cmdlet option.
  • To VMware ESX Server 3i hosts: HTTPS, port 443.

Other connections:

  • VMM library server to hosts (file transfer): BITS, port 443 (maximum value: 32768). Change at VMM setup.
  • VMM host-to-host file transfer: BITS, port 443 (maximum value: 32768).
  • VMM Self-Service Portal to the Self-Service Portal web server: HTTPS, port 443. Change at VMM setup.
  • VMM Self-Service Portal web server to the VMM management server: WCF, port 8100. Change at VMM setup.
  • Console connections (RDP) to virtual machines through Hyper-V hosts (VMConnect): RDP, port 2179. Change in the VMM console.
  • Remote Desktop to virtual machines: RDP, port 3389. Change on the virtual machine.
  • VMM console to the VMM management server: WCF, port 8100. Change at VMM setup.
  • VMM console to the VMM management server (HTTPS): WCF, port 8101. Change at VMM setup.
  • VMM console to the VMM management server (NET.TCP): WCF, port 8102. Change at VMM setup.
  • VMM console to the VMM management server (HTTP): WCF, port 8103. Change at VMM setup.
  • Windows PE agent to the VMM management server (control channel): WCF, port 8101. Change at VMM setup.
  • Windows PE agent to the VMM management server (time sync): WCF, port 8103. Change at VMM setup.
  • WDS provider to the VMM management server: WCF, port 8102. Change at VMM setup.
  • Storage Management Service to the SMI-S provider: CIM-XML, provider-specific port.

Others:

  • OOB connection (SMASH over WS-Man): HTTPS, port 443. Change on the BMC.
  • OOB connection (IPMI): IPMI, port 623. Change on the BMC.
  • BITS port for VMM transfers (data channel): BITS, port 443. Change at VMM setup.
  • VMware ESX Server 3.0 and VMware ESX Server 3.5 hosts: SFTP, port 22.
  • VMware Web Services communication: HTTPS, port 443. Change in the VMM console.

Note: When you install the VMM management server, you can assign some of the ports that it will use for communications and file transfers between the VMM components.
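
To double-check which of these ports the management server actually listens on after setup, a quick netstat filter works; 8100 below is just the default console port, so adjust it as needed:

# Run on the VMM management server from an elevated prompt
netstat -ano | findstr ":8100"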