Archive
Hyper-V : Network Design, Configuration and Prioritization : Guidance
There are a lot of posts about Hyper-V and networking, but I find that many people still don’t get it.
1. Network Design. How many NICs do we need in a production environment for high availability?
- 1 for Management. Microsoft recommends a dedicated network adapter for Hyper-V server management.
- 2 (teamed) for virtual machines. Virtual network configurations of the external type require a minimum of one network adapter (a sketch of the teamed virtual machine switch follows this list).
- 2 (MPIO) for iSCSI. Microsoft recommends that IP storage communication have a dedicated network, so one adapter is required and two or more are necessary to support multipathing.
- 1 for Failover clustering. A Windows® failover cluster requires a private network.
- 1 for Live migration. This new Hyper-V R2 feature supports the migration of running virtual machines between Hyper-V servers. Microsoft recommends configuring a dedicated physical network adapter for live migration traffic. This network should be separate from the network for private communication between the cluster nodes, from the network for the virtual machines, and from the network for storage.
- 1 for CSV. Microsoft recommends a dedicated network to support the communications traffic created by this new Hyper-V R2 feature. In the network adapter properties, Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks must be enabled to support SMB.
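For the two teamed virtual machine NICs, here is a minimal sketch of the switch configuration. It assumes Windows Server 2012 or later, where in-box NIC teaming and the Hyper-V PowerShell module are available (on Windows Server 2008 R2, teaming is provided by the NIC vendor’s software); the adapter, team, and switch names are hypothetical:
PS > # Sketch only: hypothetical adapter/team/switch names; requires Windows Server 2012+ in-box teaming
PS > New-NetLbfoTeam -Name "VM-Team" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent
PS > # Bind the external virtual switch for guest traffic to the team; keep the management OS off it
PS > New-VMSwitch -Name "VM External" -NetAdapterName "VM-Team" -AllowManagementOS $false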
But what about production environments where the blades have only four physical NICs?
Option 1. If your vendor supports NPAR technology (Broadcom, QLogic), you can create up to 4 “virtual logical NICs” per physical NIC (with VLAN/QoS). Although this solution is not supported by Microsoft, it is the best option in terms of performance and it is supported by the vendors. It gives you full HA, since four physical NICs yield up to 16 logical NICs.
Option 2. Supported by Microsoft. Allocate two NICs for iSCSI using MPIO, and then use the remaining two as follows:
Host configuration | Virtual machine access | Management | Cluster and Cluster Shared Volumes | Live migration | Comments |
2 network adapters with 10 Gbps | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 1% | Network adapter 2 | Network adapter 2 with bandwidth capped at 50% | Supported |
Note that the QoS configuration is applied per port, and Windows only allows you to specify caps – not reserves. This solution, although supported by Microsoft, does not give you 100% HA.
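As an illustration only, here is how such caps could be scripted on a converged setup. This assumes Windows Server 2012 or later, where Set-VMNetworkAdapter exposes bandwidth settings for host virtual NICs (on Hyper-V R2 the cap is set per port in the network settings); the vNIC names and the 10 Gbps link are hypothetical:
PS > # Hypothetical host vNIC names; values are bits per second on an assumed 10 Gbps link
PS > Set-VMNetworkAdapter -ManagementOS -Name "Management" -MaximumBandwidth 100000000      # ~1%
PS > Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MaximumBandwidth 5000000000  # ~50%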
2. Network Configuration. What needs to be enabled/disabled?
Usage | Number of Network Cards |
Management Network (Parent Partition) | 1 network card |
Storage (iSCSI) | 2 network cards – not teamed |
VM Network (Parent Partition) | 2 network cards: 1 for dynamic IPs, 1 for reserved IPs |
Cluster Heartbeat | 1 network card |
Cluster Shared Volume (CSV) | 1 network card |
Live Migration | 1 network card |
3. Network Prioritization. How is traffic prioritized across the cluster networks?
By default, all internal cluster networks have a metric value starting at 1000 and incrementing by 100. The first internal network which the cluster sees when it first comes online has a metric of 1000, the second has a metric of 1100, and so on.
When you create CSVs, the failover cluster automatically chooses the network that appears to be the best for CSV communication. The lowest metric value designates the network for cluster and CSV traffic. The second-lowest value designates the network for live migration. Additional networks with a metric below 10000 are used as backup networks if the “Cluster & CSV Traffic” or “Live Migration Traffic” networks fail. The lowest network with a value of at least 10000 is used for “Public Traffic”. Consider giving the highest possible values to the networks which you do not want any cluster or public traffic to go through, such as the “iSCSI Traffic” network, so that they are never used, or only used when no other networks are available.
To view the networks, their metric values, and whether they were automatically or manually configured, run the failover clustering PowerShell cmdlet:
PS > Get-ClusterNetwork | ft Name, Metric, AutoMetric
To change the value of a network metric, run:
PS > ( Get-ClusterNetwork "Live Migration" ).Metric = 800
If you want the cluster to start automatically assigning the Metric setting again for the network named “Live Migration”:
PS > ( Get-ClusterNetwork "Live Migration" ).AutoMetric = $true
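Putting this together, a minimal sketch (the network names are hypothetical) that forces the intended order and then verifies it:
PS > # Lowest metric carries Cluster/CSV traffic, next lowest carries live migration, iSCSI is kept out of the way
PS > ( Get-ClusterNetwork "CSV" ).Metric = 900
PS > ( Get-ClusterNetwork "Live Migration" ).Metric = 1000
PS > ( Get-ClusterNetwork "iSCSI" ).Metric = 20000
PS > Get-ClusterNetwork | Sort-Object Metric | ft Name, Metric, AutoMetric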
How do you override the network prioritization behavior?
Option 1. Change the network’s properties. If you select “Do not allow cluster network communication on this network”, it will not be possible to send any “Cluster & CSV Traffic” or “Live Migration Traffic” through this network, even if the network has the lowest metric value. The cluster will honor this override and find the network with the next lowest value for this type of traffic:
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- Expand Networks, right-click the network you want to modify, and then select Properties.
- Change the radio buttons or checkboxes.
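The same override can be scripted through the cluster network’s Role property, which maps to those radio buttons (0 = do not allow cluster network communication, 1 = cluster communication only, 3 = cluster and client). The network name below is hypothetical:
PS > ( Get-ClusterNetwork "iSCSI" ).Role = 0   # no cluster or CSV traffic will ever use this network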
Option 2 (applies only to “Live Migration Traffic”):
To configure a cluster network for live migration:
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- Expand Services and applications.
- In the console tree (on the left), select the clustered virtual machine for which you want to configure the network for live migration.
- Right-click the virtual machine resource displayed in the center pane (not on the left), and then click Properties.
- Click the Network for live migration tab, and select one or more cluster networks to use for live migration. Use the buttons on the right to move the cluster networks up or down so that a private cluster network is the most preferred. The default preference order is as follows: networks that have no default gateway are listed first; networks that are used by Cluster Shared Volumes and cluster traffic are listed last. Live migration will be attempted in the order of the networks specified in the list; if the connection to the destination node over the first network is not successful, the next network in the list is used, until either the list is exhausted or a connection to the destination node succeeds.
Note : You don’t need to perform this action on a per-VM basis. When you configure a network for live migration for a specific virtual machine, the setting is global and therefore applies to all virtual machines.
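If you prefer to check this global setting from PowerShell, the preference list appears to be stored as a private property of the Virtual Machine cluster resource type (commonly reported as MigrationNetworkOrder, a list of cluster network IDs); treat this as a read-only sanity check rather than a documented interface:
PS > # List the private properties of the Virtual Machine resource type, including the migration network order
PS > Get-ClusterResourceType "Virtual Machine" | Get-ClusterParameter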
Some other interesting articles:
http://technet.microsoft.com/en-us/library/dd446679(WS.10).aspx
http://blogs.technet.com/b/vishwa/archive/2011/02/01/tuning-scvmm-for-vdi-deployments.aspx
http://blogs.msdn.com/b/clustering/archive/2011/06/17/10176338.aspx
24 Hours in the Cloud : Live on June 1st
The GITCA “24 Hours in the Cloud” round-the-world virtual event focusing on cloud computing is scheduled for June 1st. The speakers will be available via Twitter to answer questions. Please visit http://sp.GITCA.org/sites/24Hours to find out more.
This is a very important community project and GITCA, supported by Microsoft, is acting as the enabler. So this is the community helping the community, which is the way it should be. We have a great selection of presentations from experienced speakers from around the world. Please go to http://sp.gitca.org/sites/24hours/ugpages/FinalSpeakers.aspx to view the list of speakers and http://sp.gitca.org/sites/24hours/ugpages/FinalSessions.aspx to view the list of sessions.
The first session, a keynote by Doug Terry of Microsoft Research, will start at 9am Pacific Daylight Time [UTC-7]. Please note the start time was incorrectly shown as UTC-8 in previous messages. The event can be accessed via http://vepexp.microsoft.com/24hitc, which will go live on June 1st.
Hyper-V : Best Practices and Supported scenarios regarding Exchange Server 2010
The following are the supported scenarios for Exchange 2010 SP1:
- The Unified Messaging server role is supported in a virtualized environment.
- Combining Exchange 2010 high availability solutions (database availability groups, or DAGs) with hypervisor-based clustering, high availability, or migration solutions that move or automatically fail over mailbox servers that are members of a DAG between clustered root servers.
Hyper-V Guest Configuration
Keep in mind that because there are no routines within Exchange Server that test for a virtualized platform, Exchange Server behaves no differently programmatically on a virtualized platform than it does on a physical platform.
Determining Exchange Server Role Virtual Machine Locations
When determining Exchange Server Role virtual machine locations, consider the following general best practices:
- Deploy the same Exchange roles across multiple physical server roots (to allow for load balancing and high availability).
- Never deploy Mailbox servers that are members of the same Database Availability Group (DAG) on the same root (see the anti-affinity sketch after this list).
- Never deploy all the Client Access Servers on the same root.
- Never deploy all the Hub Transport servers on the same root.
- Determine the workload requirements for each server and balance the workload across the Hyper-V guest virtual machines.
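For the DAG placement rule above, failover clustering’s anti-affinity can help keep those guests apart: cluster groups that share an AntiAffinityClassNames value are preferentially hosted on different nodes. A hedged sketch (the VM group names and class name are hypothetical, and anti-affinity is a preference, not a hard guarantee):
PS > # Tag the two DAG member VMs with the same anti-affinity class so the cluster prefers separate hosts
PS > $class = New-Object System.Collections.Specialized.StringCollection
PS > $class.Add("DAG1")
PS > ( Get-ClusterGroup "EX-MBX1" ).AntiAffinityClassNames = $class
PS > ( Get-ClusterGroup "EX-MBX2" ).AntiAffinityClassNames = $class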
Guest Storage
Each Exchange guest virtual machine must be allocated sufficient storage space on the root machine for the fixed disk that contains the guest’s operating system, any temporary memory storage files in use, and related virtual machine files that are hosted on the root machine. Consider the following best practices when configuring Hyper-V guests:
- Fixed VHDs are recommended for the guest operating system disk (a sketch follows this list).
- Allow a minimum of 15 GB of disk for the operating system, plus additional space for the paging file, management software, and crash recovery (dump) files; then add the Exchange server role space requirements.
- Storage used by Exchange should be hosted on disk spindles that are separate from the storage that hosts the guest virtual machine’s operating system.
- For Hub Transport servers, correctly provision the necessary disk space for the message queue database and logging operations.
- For Mailbox servers, correctly provision the necessary disk space for databases, transaction logs, the content index, and other logging operations.
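As a small illustration of the fixed-disk recommendation above, this sketch assumes the Hyper-V PowerShell module (Windows Server 2012 or later); the path and size are hypothetical and should follow the sizing guidance in this list:
PS > # Create a fixed-size VHD for the guest operating system disk
PS > New-VHD -Path "D:\VMs\EX01\EX01-OS.vhd" -SizeBytes 40GB -Fixed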
Guest Memory : Dynamic Memory should be disabled
Memory must be sized for guest virtual machines using the same methods as physical computer deployments. Exchange—like many server applications that have optimizations for performance that involve caching of data in memory—is susceptible to poor system performance and an unacceptable client experience if it doesn’t have full control over the memory allocated to the physical computer or virtual machine on which it is running.
Many of the performance gains in recent versions of Exchange, especially those related to reduction in input/output (I/O) are based on highly efficient usage of large amounts of memory. When that memory is no longer available, the expected performance of the system can’t be achieved. For this reason, memory oversubscription or dynamic adjustment of virtual machine memory must be disabled for production Exchange servers.
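A minimal sketch of enforcing this on a guest, assuming the Hyper-V PowerShell module (Windows Server 2012 or later, where Dynamic Memory settings are exposed through Set-VMMemory); the VM name and memory size are hypothetical:
PS > # Give the Exchange guest a fixed memory allocation and turn Dynamic Memory off
PS > Set-VMMemory -VMName "EX01" -DynamicMemoryEnabled $false -StartupBytes 16GB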
Deployment Recommendations
When designing an Exchange Server 2010 virtualized environment, the core Exchange design principles apply. The environment must be designed for the correct performance, reliability, and capacity requirements. Design considerations such as examining usage profiles, message profiles, and so on must still be taken into account.
See this article (Mailbox Storage Design Process) as a starting point when considering a high availability solution that uses DAGs.
Because virtualization provides the flexibility to make changes to the design of the environment later, some organizations might be tempted to spend less time on their design at the outset. As a best practice, spend adequate time designing the environment to avoid pitfalls later.
Group the Exchange Server roles in a way that balances workloads across the root servers. Mixing different roles on the same Hyper-V root server can balance the workloads and prevent one physical resource from being unduly stressed, whereas stacking the same role on a single host concentrates that role’s load.
The updated support guidance applies to any hardware virtualization vendor participating in the Windows Server Virtualization Validation Program (SVVP).
Best Practices for Virtualizing Exchange Server 2010 with Windows Server 2008 R2 Hyper-V whitepaper. This whitepaper is designed to provide technical guidance on Exchange server roles, capacity planning, sizing and performance, as well as high availability best practices.
Complete system requirements for Exchange Server 2010 running under hardware virtualization software can be found in Exchange 2010 System Requirements. Also, the support policy for Microsoft software running in non-Microsoft hardware virtualization software can be found here.
Microsoft Exchange Server 2010 with Service Pack 1 : Solution Accelerator
Exchange Server 2010 supports a variety of infrastructure topologies that enable IT departments to deploy the messaging architecture that best suits their business needs. This guide will help organizations make informed decisions about the design of fault tolerance and scalability so that their overall requirements are met.
The guide covers these key steps in the Exchange Server 2010 infrastructure design process:
- Defining the project scope by identifying your individual business and IT requirements for a messaging infrastructure.
- Mapping features and functionality based on the defined scope to develop the appropriate Exchange Server 2010 design.
- Designing the infrastructure and role requirements for the proposed Exchange Server 2010 architecture.
- Determining the sizing, fault tolerance, and physical placement of Exchange Server 2010 roles.
The IPD Guide for Microsoft Exchange Server 2010 with Service Pack 1 can help you reduce planning time and costs, and ensure a successful rollout of Exchange Server 2010, helping your organization to more quickly benefit from this flexible and reliable platform.
Download the beta guide here.
Private Cloud Solutions : Hyper-V Cloud Deployment Guides
A private cloud is the implementation of cloud services on resources that are dedicated to your organization, whether they exist on-premises or off-premises. It offers the benefits of public cloud computing, including self-service, scalability, and elasticity, with additional control and customization.
Build your own private cloud and you will have a dynamic, virtualized infrastructure with advantages including:
- Pools of compute resources
- Automated management
- High-availability
- Scale-out capabilities
- Multi-tenancy
- Self-service provisioning
To learn more about how to build your own private cloud with Windows Server 2008 R2 Hyper-V, System Center, and the Virtual Machine Manager Self-Service Portal 2.0, see the Hyper-V Cloud Deployment Guides.
Microsoft System Center Service Manager 2010 : Solution Accelerator
The Infrastructure Planning and Design (IPD) Guide for Microsoft System Center Service Manager 2010 takes the IT architect through an easy-to-follow process for successfully designing the servers and components for a System Center Service Manager implementation, resulting in a design that is sized, configured, and appropriately placed to deliver the stated business benefits, while also considering the performance, capacity, and fault tolerance of the system.
The guide covers these key steps in the System Center Service Manager infrastructure design process:
- Defining the project scope by identifying the necessary Service Manager features, the requirements of the process management packs, and the targeted population of the organization.
- Mapping the selected features and scope to determine the required server roles.
- Designing the fault tolerance, configuration, and placement of the management servers, portals, and supporting SQL Server databases.
The IPD Guide for Microsoft System Center Service Manager 2010 can help you reduce planning time and costs, and ensure a successful rollout of System Center Service Manager—helping your organization to more quickly benefit from this platform for automating and adapting IT Service Management best practices such as those found in Microsoft Operations Framework (MOF) and the IT Infrastructure Library (ITIL).
Join the IPD Beta for Microsoft System Center Service Manager 2010.
Windows Server 2008 R2 Security Baseline : Solution Accelerators
Elevate the security of Windows Server 2008 R2.
The Windows Server 2008 R2 Security Baseline, in combination with the Security Compliance Manager tool, is designed to help your organization plan, deploy, and monitor the security of Windows Server 2008 R2. This release also includes a Windows Server 2008 R2 settings pack, enabling you to define baselines that include settings outside the scope of the security baselines from Microsoft. To get the public release of this security baseline, download the Security Compliance Manager.
New security baselines for SQL Server
New security baselines for SQL Server 2008 and SQL Server 2008 R2 are now available for beta download.
The latest security baselines in this beta review program are designed to help you plan, deploy, and monitor the security of Microsoft SQL Server 2008 and SQL Server 2008 R2.
The baselines are formatted for import using the Security Compliance Manager, which provides guidance and tools to help you balance your organization’s needs for security and functionality.