Hyper-V: Network Design, Configuration and Prioritization: Guidance
There are a lot of posts about Hyper-V networking, but I have found that many people still don't get it.
1. Network Design. How many NICs do we need in a production environment for high availability?
- 1 for Management. Microsoft recommends a dedicated network adapter for Hyper-V server management.
- 2 (Teamed) for Virtual machines. Virtual network configurations of the external type require a minimum of one network adapter.
- 2 (MPIO) for iSCSI. Microsoft recommends that IP storage communication have a dedicated network, so one adapter is required and two or more are necessary to support multipathing.
- 1 for Failover cluster. Windows® failover clustering requires a private network.
- 1 for Live migration. This new Hyper-V R2 feature supports the migration of running virtual machines between Hyper-V servers. Microsoft recommends configuring a dedicated physical network adapter for live migration traffic. This network should be separate from the network for private communication between the cluster nodes, from the network for the virtual machines, and from the network for storage.
- 1 for CSV. Microsoft recommends a dedicated network to support the communications traffic created by this new Hyper-V R2 feature. In the network adapter properties, Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks must be enabled to support SMB.
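With that many roles in play, it helps to name each physical adapter after its role so the design above is visible in every tool. A minimal sketch, assuming the NetAdapter module (Windows Server 2012 and later) and placeholder adapter names:

```powershell
# Sketch: rename physical adapters to match their roles.
# The "Ethernet N" names below are assumptions for illustration;
# check your actual names first with: Get-NetAdapter
Rename-NetAdapter -Name "Ethernet"   -NewName "Management"
Rename-NetAdapter -Name "Ethernet 2" -NewName "VM-Team-1"
Rename-NetAdapter -Name "Ethernet 3" -NewName "VM-Team-2"
Rename-NetAdapter -Name "Ethernet 4" -NewName "iSCSI-1"
Rename-NetAdapter -Name "Ethernet 5" -NewName "iSCSI-2"
Rename-NetAdapter -Name "Ethernet 6" -NewName "Cluster"
Rename-NetAdapter -Name "Ethernet 7" -NewName "LiveMigration"
Rename-NetAdapter -Name "Ethernet 8" -NewName "CSV"
```

On Windows Server 2008 R2 the NetAdapter module is not available, so the renaming has to be done in the Network Connections GUI instead.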
But what about production environments where the blades have only 4 physical NICs?
Option 1. If your vendor supports NPAR technology (Broadcom, QLogic), you can create up to 4 "virtual logical NICs" per physical NIC (with VLAN/QoS). Although this solution is not supported by Microsoft, it is the best solution in terms of performance and it is supported by the vendors. It also gives you 100% HA, as you can have up to 16 logical NICs.
Option 2. Supported by Microsoft. Allocate 2 (two) NICs for iSCSI using MPIO, and then:
| Host configuration | Virtual machine access | Management | Cluster and Cluster Shared Volumes | Live migration | Comments |
| --- | --- | --- | --- | --- | --- |
| 2 network adapters with 10 Gbps | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 1% | Network adapter 2 | Network adapter 2 with bandwidth capped at 50% | Supported |
Note that the QoS configuration is applied per port, and Windows only allows you to specify caps, not reserves. This solution, although supported by Microsoft, does not give you 100% HA.
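On hosts that have the Hyper-V PowerShell module (Windows Server 2012 and later; on 2008 R2 the cap is set in the virtual network adapter GUI instead), a per-port cap like the 1% in the table above can be sketched as follows. The VM name is a placeholder:

```powershell
# Sketch, assuming the Hyper-V PowerShell module (Server 2012+).
# "VM01" is a placeholder VM name.
# -MaximumBandwidth is an absolute cap in bits per second; there is
# no way to express a reserve here, matching the note above.
# 1% of a 10 Gbps port = 100 Mbps:
Set-VMNetworkAdapter -VMName "VM01" -MaximumBandwidth 100000000
```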
2. Network Configuration. What needs to be enabled/disabled?
| Usage | Number of Network Cards | Comments |
| --- | --- | --- |
| Management Network (Parent Partition) | 1 network card | |
| Storage iSCSI | 2 network cards, not teamed | |
| VM Network (Parent Partition) | 2 network cards: 1 for dynamic IPs, 1 for reserved IPs | |
| Cluster Heartbeat | 1 network card | |
| Cluster Shared Volume (CSV) | 1 network card | |
| Live Migration | 1 network card | |
3. Network Prioritization. How does the cluster decide which network carries which traffic?
By default, all internal cluster networks have a metric value starting at 1000 and incrementing by 100. The first internal network which the cluster sees when it first comes online has a metric of 1000, the second has a metric of 1100, and so on.
When you create CSVs, the failover cluster automatically chooses the network that appears to be the best for CSV communication. The lowest metric value designates the network for cluster and CSV traffic, and the second lowest value designates the network for live migration. Additional networks with a metric below 10000 will be used as backup networks if the "Cluster & CSV Traffic" or "Live Migration Traffic" networks fail. The lowest network with a value of at least 10000 will be used for "Public Traffic". Consider giving the highest possible values to the networks which you do not want any cluster or public traffic to go through, such as "iSCSI Traffic", so that they are never used, or only used when no other networks at all are available.
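Following that advice, a quick sketch using the same FailoverClusters cmdlets shown below; "iSCSI" is an assumed cluster network name, so substitute whatever `Get-ClusterNetwork` reports in your environment:

```powershell
# Sketch: push the iSCSI network's metric high enough that the cluster
# never picks it for cluster, CSV, or live migration traffic.
# "iSCSI" is a placeholder network name for this example.
(Get-ClusterNetwork "iSCSI").Metric = 39999

# Verify the resulting ordering:
Get-ClusterNetwork | ft Name, Metric, AutoMetric
```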
To view the networks, their metric values, and if they were automatically or manually configured, run the clustering PowerShell cmdlet:
PS > Get-ClusterNetwork | ft Name, Metric, AutoMetric
To change the value of a network metric, run:
PS > (Get-ClusterNetwork "Live Migration").Metric = 800
If you want the cluster to start automatically assigning the Metric setting again for the network named “Live Migration”:
PS > (Get-ClusterNetwork "Live Migration").AutoMetric = $true
How do you override the network prioritization behavior?
Option 1. Change the network's properties. If you select "Do not allow cluster network communication on this network", then it will not be possible to send any "Cluster & CSV Traffic" or "Live Migration Traffic" through this network, even if it has the lowest metric value. The cluster will honor this override and find the network with the next lowest value to send this type of traffic:
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- Under Networks, right-click the network you want to configure, and then select Properties.
- Change the radio buttons or checkboxes as needed.
Option 2 (exclusively for "Live Migration Traffic"). To configure a cluster network for live migration:
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- Expand Services and applications.
- In the console tree (on the left), select the clustered virtual machine for which you want to configure the network for live migration.
- Right-click the virtual machine resource displayed in the center pane (not on the left), and then click Properties.
- Click the Network for live migration tab, and select one or more cluster networks to use for live migration. Use the buttons on the right to move the cluster networks up or down so that a private cluster network is the most preferred. The default preference order is as follows: networks that have no default gateway should be listed first; networks that are used by Cluster Shared Volumes and cluster traffic should be listed last. Live migration will be attempted in the order of the networks specified in the list. If the connection to the destination node over the first network is not successful, the next network in the list is used, until either the list is exhausted or a connection to the destination node succeeds.
Note: You don't need to perform this action on a per-VM basis. When you configure a network for live migration for a specific virtual machine, the setting is global and therefore applies to all virtual machines.
Some other interesting articles:
http://technet.microsoft.com/en-us/library/dd446679(WS.10).aspx
http://blogs.technet.com/b/vishwa/archive/2011/02/01/tuning-scvmm-for-vdi-deployments.aspx
http://blogs.msdn.com/b/clustering/archive/2011/06/17/10176338.aspx
Thank you very much for this precise guidance. In addition to the external links, this proves to be the central blog entry for Hyper-V cluster network configuration.
Very helpful.
Hi Alessandro,
very well done article. I am writing a similar post about the networks in a Hyper-V cluster and your post was very helpful. I have one suggestion: in your PowerShell examples a "(" is missing at the front of the line.
Regards, Carsten
(MVP Virtual Machine)
Hi Carsten
Thanks. I will fix the script.
Hello,
This is an interesting article, but it leaves me confused about one point. I originally set up our cluster to have a dedicated path, but various MS sources say that is now deprecated.
http://social.technet.microsoft.com/Forums/en-AU/winserverClustering/thread/2c7b7cd4-a281-442a-9d9f-287e300b8b98
Thoughts?
Hi Chris
As a best practice, I recommend having a separate network for the cluster, but not a crossover cable, which is deprecated. The cluster will then use this path, and if this path is not available it will use a second one.
Hi Alessandro,
Great article. Is there any way to add networks to the “Failover Cluster Manager”? I have only ever seen 3 networks listed. I currently have my Live Migration traffic going over the Cluster network. Would like to add a “teamed” network dedicated to LM to try and reduce migration time.
Hi Cameron
You can manually change which network is used by modifying the properties of the VM in Failover Cluster Manager, on the Network for live migration tab.
Please have a look in this article : http://blogs.msdn.com/b/clustering/archive/2009/02/19/9433146.aspx