Posts Tagged ‘Planning’

Infrastructure Planning and Design Guide for VMM 2012

July 25, 2012

The new IPD Guide for System Center 2012 – Virtual Machine Manager is now available for download.

Infrastructure Planning and Design streamlines the planning process by:

  • Defining the technical decision flow through the planning process.
  • Listing the decisions to be made and the commonly available options and considerations.
  • Relating the decisions and options to the business in terms of cost, complexity, and other characteristics.
  • Framing decisions in terms of additional questions to the business to ensure comprehensive alignment with the appropriate business landscape.

Download the guide now: http://go.microsoft.com/fwlink/?LinkId=245473

SCVMM 2012: Port Communications for Firewall Configuration

August 27, 2011

When you install SCVMM 2012, you can assign some of the ports that it will use for communications and file transfers between the VMM components.

Note: Not all of the ports can be changed through VMM.

The default settings for the ports are listed in the following table:

| Connection type | Protocol | Default port | Where to change port setting |
|---|---|---|---|
| SFTP file transfer from VMware ESX Server 3.0 and VMware ESX Server 3.5 hosts | SFTP | 22 | |
| VMM management server to P2V source agent (control channel) | DCOM | 135 | |
| VMM management server to Load Balancer | HTTP/HTTPS | 80/443 | Load balancer configuration provider |
| VMM management server to WSUS server (data channel) | HTTP/HTTPS | 80/8530 (non-SSL), 443/8531 (with SSL) | These are the IIS ports bound to WSUS; they cannot be changed from VMM |
| VMM management server to WSUS server (control channel) | HTTP/HTTPS | 80/8530 (non-SSL), 443/8531 (with SSL) | These are the IIS ports bound to WSUS; they cannot be changed from VMM |
| BITS port for VMM transfers (data channel) | BITS | 443 | During VMM setup |
| VMM library server to hosts file transfer | BITS | 443 (maximum value: 32768) | During VMM setup |
| VMM host-to-host file transfer | BITS | 443 (maximum value: 32768) | |
| VMM Self-Service Portal to VMM Self-Service Portal web server | HTTPS | 443 | During VMM setup |
| VMware Web Services communication | HTTPS | 443 | VMM console |
| SFTP file transfer from VMM management server to VMware ESX Server 3i hosts | HTTPS | 443 | |
| OOB Connection – SMASH over WS-Man | HTTPS | 443 | On BMC |
| VMM management server to in-guest agent (VMM to virtual machine data channel) | HTTPS (using BITS) | 443 | |
| VMM management server to VMM agent on Windows Server–based host (data channel for file transfers) | HTTPS (using BITS) | 443 (maximum value: 32768) | |
| OOB Connection IPMI | IPMI | 623 | On BMC |
| VMM management server to remote Microsoft SQL Server database | TDS | 1433 | |
| Console connections (RDP) to virtual machines through Hyper-V hosts (VMConnect) | RDP | 2179 | VMM console |
| VMM management server to Citrix XenServer host (customization data channel) | iSCSI | 3260 | On XenServer in transfer VM |
| Remote Desktop to virtual machines | RDP | 3389 | On the virtual machine |
| VMM management server to VMM agent on Windows Server–based host (control channel) | WS-Management | 5985 | During VMM setup |
| VMM management server to in-guest agent (VMM to virtual machine control channel) | WS-Management | 5985 | |
| VMM management server to VMM agent on Windows Server–based host (control channel – SSL) | WS-Management | 5986 | |
| VMM management server to XenServer host (control channel) | HTTPS | 5989 | On XenServer host in: /opt/cimserver/cimserver_planned.conf |
| VMM console to VMM management server | WCF | 8100 | During VMM setup |
| VMM Self-Service Portal web server to VMM management server | WCF | 8100 | During VMM setup |
| VMM console to VMM management server (HTTPS) | WCF | 8101 | During VMM setup |
| Windows PE agent to VMM management server (control channel) | WCF | 8101 | During VMM setup |
| VMM console to VMM management server (NET.TCP) | WCF | 8102 | During VMM setup |
| WDS provider to VMM management server | WCF | 8102 | During VMM setup |
| VMM console to VMM management server (HTTP) | WCF | 8103 | During VMM setup |
| Windows PE agent to VMM management server (time sync) | WCF | 8103 | During VMM setup |
| VMM management server to Storage Management Service | WMI | Local call | |
| VMM management server to Cluster PowerShell interface | PowerShell | n/a | |
| Storage Management Service to SMI-S Provider | CIM-XML | Provider-specific port | |
| VMM management server to P2V source agent (data channel) | BITS | User-defined | P2V cmdlet option |
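
A quick way to verify that one of these ports is reachable from another machine is a plain TCP connect test. The sketch below uses the .NET TcpClient class directly, so it runs even on PowerShell 2.0 hosts without extra modules; the server name and port are placeholders, not recommendations:

# Minimal port-reachability check (sketch; .NET TcpClient, no modules needed).
function Test-Port {
    param([string]$ComputerName, [int]$Port)
    $client = New-Object System.Net.Sockets.TcpClient
    try     { $client.Connect($ComputerName, $Port); $true }
    catch   { $false }
    finally { $client.Close() }
}

# Example: is the default VMM console port open on the management server?
Test-Port -ComputerName "vmm01.contoso.com" -Port 8100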

Hyper-V: Detailed step-by-step installation of a RedHat 6.1 VM in expert mode with the new Linux Integration Services 3.1

August 18, 2011

Microsoft released a new version of the Linux Integration Services, fully tested against RHEL 6.0, RHEL 6.1, and CentOS 6.0:

http://www.microsoft.com/download/en/details.aspx?id=26837

To Create a RedHat 6 VM

1. Open Hyper-V Manager: Click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. Create a new virtual machine where you will install Linux: In the Actions menu, click New, and then click Virtual Machine.

Note: if you do not add a legacy network adapter at this point, the virtual machine will not have network support until you install the Linux Integration Services (a scripted sketch follows these steps).

3. Specify the Linux installation media: Right-click the virtual machine that you created, and then click Settings. In IDE Controller, specify one of the following:
a. An image file in ISO format that contains the files required for installation
b. A physical CD/DVD drive that contains the installation media
4. Turn on the virtual machine: Right-click the virtual machine that you created, and then click Connect.
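
As a side note, if you script your hosts with the Hyper-V PowerShell module (shipped with Windows Server 2012 and later, so an assumption on top of this 2008 R2-era walkthrough), the VM, its legacy adapter, and the installation ISO can be set up in a few lines; all names and paths here are hypothetical:

# Sketch only: requires the Hyper-V module (Windows Server 2012+).
# VM name, switch name, and paths are hypothetical examples.
New-VM -Name "rhel61" -MemoryStartupBytes 1GB -NewVHDPath "C:\VMs\rhel61.vhd" -NewVHDSizeBytes 40GB
Add-VMNetworkAdapter -VMName "rhel61" -SwitchName "External" -IsLegacy $true
Set-VMDvdDrive -VMName "rhel61" -Path "C:\ISO\rhel-server-6.1-x86_64-dvd.iso"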
To Install RedHat Linux 6.1

1. After a short delay, the Welcome to Red Hat Linux 6.1! screen appears. Press <Tab>.

2. At the prompt, add the text: append expert and then press <Enter>.

3. In the next screen, press <OK> to check the installation media, or <Skip> to skip the media test.

4. Click Next to continue.

5. The Choose a Language screen appears. This screen asks you to select the language to be used during the installation process. Use the up- or down-arrow key to select a language (the system highlights your choice). Click Next.

6. The Keyboard Type screen appears, asking you to select a keyboard type. Use the up- or down-arrow key to select a keyboard type (the system highlights your choice). Click Next.

7. At the “Devices” screen select Basic Storage Devices to install Red Hat Enterprise Linux on the following storage devices: hard drives or solid-state drives connected directly to the local system

8. As you selected Basic Storage Devices, anaconda automatically detects the local storage attached to the system and does not require further input. Click Next.

9. Enter the hostname for your server and select OK.

10. If you added the legacy network adapter when you created the VM, click Configure Network. In the "Network Configuration" window, specify an IP address and gateway. Otherwise, skip this task; you can set up the network later, after installing the Linux Integration Services.

Use the IPv4 Settings tab to configure the IPv4 parameters for the previously selected network connection. Select Start automatically to start the connection automatically when the system boots.

11. Click Next.

12. At the “Time Zone Selection” window, highlight the correct time zone. Click Next

13. For Root Password, type and confirm the password. Click Next.

14. If no readable partition tables are found on existing hard disks, the installation program asks to initialize the hard disk. This operation makes any existing data on the hard disk unreadable. If your system has a brand new hard disk with no operating system installed, or you have removed all partitions on the hard disk, click Re-initialize drive

15. Select the type of installation you would like, and then click Next.

Note: If you chose one of the automatic partitioning options (first 4 options) and selected Review, you can either accept the current partition settings (click Next), or modify the setup manually in the partitioning screen. To review and make any necessary changes to the partitions created by automatic partitioning, select the Review option. After selecting Review and clicking Next to move forward, the partitions created for you by anaconda appear. You can make modifications to these partitions if they do not meet your needs.

If you chose to create a custom layout, you must tell the installation program where to install Red Hat Enterprise Linux. This is done by defining mount points for one or more disk partitions in which Red Hat Enterprise Linux is installed. You may also need to create and/or delete partitions at this time

Unless you have a reason for doing otherwise, I recommend that you create the following partitions for x86, AMD64, and Intel 64 systems:

swap partition

/boot partition

/ partition

Advice on Partitions:

  • A swap partition (at least 256 MB). Swap partitions are used to support virtual memory; in other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing. In years past, the recommended amount of swap space increased linearly with the amount of RAM in the system. But because the amount of memory in modern systems has increased into the hundreds of gigabytes, it is now recognized that the amount of swap space a system needs is a function of the memory workload running on that system. However, given that swap space is usually designated at install time, and that it can be difficult to determine beforehand the memory workload of a system, use the recommended amounts in the following table (a small helper sketch follows this list):
| Amount of RAM in the system | Recommended amount of swap space |
|---|---|
| 4 GB of RAM or less | a minimum of 2 GB of swap space |
| 4 GB to 16 GB of RAM | a minimum of 4 GB of swap space |
| 16 GB to 64 GB of RAM | a minimum of 8 GB of swap space |
| 64 GB to 256 GB of RAM | a minimum of 16 GB of swap space |
  • The /var directory holds content for a number of applications. It also is used to store downloaded update packages on a temporary basis. Ensure that the partition containing the /var directory has enough space to download pending updates and hold your other content.
  • The /usr directory holds the majority of software content on a Red Hat Enterprise Linux system. For an installation of the default set of software, allocate at least 4 GB of space.
    If you are a software developer or plan to use your Red Hat Enterprise Linux system to learn software development skills, you may want to at least double this allocation.
  • Consider leaving a portion of the space in an LVM volume group unallocated. This unallocated space gives you flexibility if your space requirements change but you do not wish to remove data from other partitions to reallocate storage.
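
The swap table above is easy to encode if you provision VMs programmatically; here is a tiny sketch (thresholds exactly as listed, in GB):

# Sketch: recommended swap size (GB) from the table above.
function Get-RecommendedSwapGB {
    param([int]$RamGB)
    if     ($RamGB -le 4)  { 2 }
    elseif ($RamGB -le 16) { 4 }
    elseif ($RamGB -le 64) { 8 }
    else                   { 16 }  # 64 GB to 256 GB of RAM
}
Get-RecommendedSwapGB -RamGB 12   # returns 4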

16. After you finish creating the partitions, click Next. The installer prompts you to confirm the partitioning options that you selected. Click Write changes to disk to allow the installer to partition your hard drive and install Red Hat Enterprise Linux.

17. Allow the installation process to complete. The Package Installation Defaults screen appears and details the default package set for the Red Hat Enterprise Linux installation.

If you select Basic Server, this option provides a basic installation of Red Hat Enterprise Linux for use on a server.

18. Select Customize now to specify the software packages for your final system in more detail. This option causes the installation process to display an additional customization screen when you select Next.

Note: The packages that you select are not permanent. After you boot your system, use the Add/Remove Software tool to either install new software or remove installed packages. To run this tool, from the main menu, select System -> Administration -> Add/Remove Software.

19. Click Next to continue the installation. The installer checks your selection, and automatically adds any extra packages required to use the software you selected. The installation process will start. At this point there is nothing left for you to do until all the packages have been installed.

20. Installation Complete: Red Hat Enterprise Linux installation is now complete. Select Reboot to restart your virtual machine.


Now it’s time for the first-boot configuration.

21. First Boot lets you configure your environment the first time the system starts. Click Forward to proceed.

22. Accept the License and click Forward to proceed.

23. Setting up software updates. Select whether to register the system immediately with Red Hat Network. To register the system, select Yes, I'd like to register now, and click Forward.
Note: the system can be registered with the Red Hat Entitlement Service later, using the Red Hat Subscription Manager tools.

24. Create a user account for regular (non-administrative) use. Enter a user name and your full name, and then enter your chosen password. Type your password once more in the Confirm Password box to ensure that it is correct.
Note: If you do not create at least one user account in this step, you will not be able to log in to the Red Hat Enterprise Linux graphical environment.

25. Click Forward to proceed

26. Date and Time. Use this screen to adjust the date and time of the system clock.

27. Click Forward to proceed

28. Kdump. Use this screen to select whether or not to use the Kdump kernel crash dumping mechanism on this system. Note that if you select this option, you will need to reserve memory for Kdump, and that this memory will not be available for any other purpose.

29. Click Finish to proceed.
The installation and configuration of RedHat Linux 6.1 is now complete. Now let's configure the Linux Integration Services.

To install Linux Integration Services Version 3.1

Important Note:  There is an issue where the SCVMM 2008 Service can crash with VMs running Linux Integration Components v3.1 for Hyper-V.
Resolution:
Disabling the KVP daemon on the Linux virtual machine will prevent the SCVMM service crash. The command to make this change must be run as root.

# /sbin/chkconfig --level 35 hv_kvp_daemon off

This will prevent the KVP service from auto-starting while retaining all other functionality of hv_utils. hv_utils provides integrated shutdown, key-value pair data exchange, and heartbeat features. More info: http://blogs.technet.com/b/scvmm/archive/2011/07/28/new-kb-the-scvmm-2008-virtual-machine-manager-service-crashes-with-vms-running-linux-integration-components-v3-1-for-hyper-v.aspx

1. Log on to the virtual machine.
2. In Hyper-V Manager, configure LinuxIC v30.ISO (located in the directory where you extracted the downloaded files) as a physical CD/DVD drive on the virtual machine.


3. Open a Terminal Console (command line).

4. As the root user, mount the CD in the virtual machine by issuing the following command at a shell prompt:

# mount /dev/cdrom /media

5. As the root user, run the following commands to install the synthetic drivers. A reboot is required after installation.

For 64-bit versions:
# yum install /media/x86_64/kmod-microsoft-hyper-v-rhel6-60.1.x86_64.rpm
# yum install /media/x86_64/microsoft-hyper-v-rhel6-60.1.x86_64.rpm
# reboot

or, if you prefer to use rpm:

# rpm -ivh /media/x86_64/kmod-microsoft-hyper-v-rhel6-60.1.x86_64.rpm
# rpm -ivh /media/x86_64/microsoft-hyper-v-rhel6-60.1.x86_64.rpm
# reboot

For 32-bit versions:
# yum install /media/x86/kmod-microsoft-hyper-v-rhel6-60.1.i686.rpm
# yum install /media/x86/microsoft-hyper-v-rhel6-60.1.i686.rpm
# reboot

or

# rpm -ivh /media/x86/kmod-microsoft-hyper-v-rhel6-60.1.i686.rpm
# rpm -ivh /media/x86/microsoft-hyper-v-rhel6-60.1.i686.rpm
# reboot


DONE! You should now have RedHat 6.1 running as a VM on Hyper-V.

Note:

After the Linux Integration Services are installed on the virtual machine, Key Value Pair exchange functionality is activated. This allows the virtual machine to provide the following information to the virtualization server:

  •  Fully Qualified Domain Name of the virtual machine
  •  Version of the Linux Integration Services that are installed
  •  IP Addresses (both IPv4 and IPv6) for all Ethernet adapters in the virtual machine
  •  OS Build information, including the distribution and kernel version
  •  Processor architecture (x86 or x86-64)

The data can be viewed using the Hyper-V WMI provider, and accessed via Windows PowerShell. Instructions for viewing Key Value Pair exchange data are available at these websites:
http://social.technet.microsoft.com/wiki/contents/articles/hyper-v-script-to-check-icversion.aspx
http://blogs.msdn.com/b/virtual_pc_guy/archive/2008/11/18/hyper-v-script-looking-at-kvpguestintrinsicexchangeitems.aspx
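
As a minimal sketch of the WMI route (assuming the root\virtualization namespace used by Hyper-V in Windows Server 2008 R2, and a hypothetical VM name), the KVP items can be dumped from the host like this:

# Sketch: dump the KVP guest data for one VM; run on the Hyper-V host.
$vm = Get-WmiObject -Namespace root\virtualization `
      -Query "SELECT * FROM Msvm_ComputerSystem WHERE ElementName='rhel61'"
$kvp = Get-WmiObject -Namespace root\virtualization `
       -Query "ASSOCIATORS OF {$vm} WHERE AssocClass=Msvm_SystemDevice ResultClass=Msvm_KvpExchangeComponent"
foreach ($item in $kvp.GuestIntrinsicExchangeItems) {
    $xml  = [xml]$item                    # each item is a small XML document
    $name = ($xml.INSTANCE.PROPERTY | Where-Object { $_.NAME -eq 'Name' }).VALUE
    $data = ($xml.INSTANCE.PROPERTY | Where-Object { $_.NAME -eq 'Data' }).VALUE
    "{0} = {1}" -f $name, $data
}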

Hyper-V: Network Design, Configuration and Prioritization: Guidance

July 8, 2011

There are a lot of posts about Hyper-V networking, but I have found that a lot of people still don't get it.

1. Network Design. How many NICs do we need in a production environment for high availability?

  • 1 for Management. Microsoft recommends a dedicated network adapter for Hyper-V server management.
  • 2 (teamed) for virtual machines. Virtual network configurations of the external type require a minimum of one network adapter.
  • 2 (MPIO) for iSCSI. Microsoft recommends that IP storage communication have a dedicated network, so one adapter is required and two or more are necessary to support multipathing.
  • 1 for Failover cluster. Windows® failover cluster requires a private network.
  • 1 for Live migration. This new Hyper-V R2 feature supports the migration of running virtual machines between Hyper-V servers. Microsoft recommends configuring a dedicated physical network adapter for live migration traffic. This network should be separate from the network for private communication between the cluster nodes, from the network for the virtual machines, and from the network for storage.
  • 1 for CSV. Microsoft recommends a dedicated network to support the communications traffic created by this new Hyper-V R2 feature. In the network adapter properties, Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks must be enabled to support SMB.

But what about production environments where the blades have only 4 physical NICs?

Option 1. If your vendor supports NPAR technology (Broadcom, QLogic), you will be able to create up to 4 "virtual logical NICs" per physical NIC (VLAN/QoS). Although this solution is not supported by MS, it is the best solution in terms of performance, and it is supported by the vendors. This solution will provide you 100% HA, as you can have up to 16 logical NICs.

Option 2. Supported by MS. Allocate 2 (two) NICs for iSCSI using MPIO, and then:

| Host configuration | Virtual machine access | Management | Cluster and Cluster Shared Volumes | Live migration | Comments |
|---|---|---|---|---|---|
| 2 network adapters with 10 Gbps | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 1% | Network adapter 2 | Network adapter 2 with bandwidth capped at 50% | Supported |

Note that the QoS configuration is per port, and Windows only allows you to specify caps, not reserves. This solution, although supported by MS, does not give you 100% HA.
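
For what it's worth, on later Hyper-V versions (the Hyper-V PowerShell module that ships with Windows Server 2012 and up, so beyond the R2-era tooling described here), a per-port cap like the 50% figure above can be expressed as an absolute limit on the host's virtual adapter; the adapter name is hypothetical:

# Sketch: cap a host (ManagementOS) virtual adapter at ~5 Gbps, i.e. 50% of
# a 10 Gbps NIC. -MaximumBandwidth is in bits per second. Hyper-V module 2012+.
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MaximumBandwidth 5000000000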

2. Network Configuration. What needs to be enabled/disabled?

Management Network (Parent Partition): 1 network card

  • Make sure this card is listed first in the Adapter and Bindings connection order.
  • In Failover Cluster Manager, make sure that the NIC is configured to allow cluster network communication on this network. This will act as a secondary connection for the heartbeat.

Storage (iSCSI): 2 network cards, not teamed

  • Enable MPIO.
  • Disable NetBIOS on these interfaces.
  • Do not configure a gateway.
  • Do not configure a DNS server.
  • Make sure that each NIC is NOT set to register its connection in DNS.
  • Remove File and Printer Sharing.
  • Do not remove Client for Microsoft Networks if using NetApp SnapDrive with RPC authentication.
  • In Failover Cluster Manager, select Do not allow cluster network communication on this network.

VM Network (Parent Partition): 2 network cards (1 for dynamic IPs, 1 for reserved IPs)

  • Disable NetBIOS on these interfaces.
  • Do not configure a gateway.
  • Do not configure a DNS server.
  • Make sure that each NIC is NOT set to register its connection in DNS.
  • Remove File and Printer Sharing and Client for Microsoft Networks.
  • In Failover Cluster Manager, select Do not allow cluster network communication on this network.

Cluster Heartbeat: 1 network card

  • Disable NetBIOS on this interface.
  • Do not configure a gateway.
  • Do not configure a DNS server.
  • Make sure that this NIC is NOT set to register its connection in DNS.
  • Make sure that Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks are enabled to support Server Message Block (SMB), which is required for CSV.
  • In Failover Cluster Manager, make sure that the NIC is configured to allow cluster network communication on this network.
  • In Failover Cluster Manager, remove the tick for Allow clients to connect through this network. This setting has nothing to do with the host/parent partition; it controls which NICs the cluster resources can be accessed through.

Cluster Shared Volumes (CSV): 1 network card

  • Disable NetBIOS on this interface.
  • Make sure that this NIC is NOT set to register its connection in DNS.
  • Make sure that Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks are enabled to support Server Message Block (SMB), which is required for CSV.
  • In Failover Cluster Manager, remove the tick for Allow clients to connect through this network. This setting has nothing to do with the host/parent partition; it controls which NICs the cluster resources can be accessed through. It is more relevant for other workloads, e.g. a file cluster, and has no impact on communication with the host partition or the VMs themselves.
  • By default the cluster automatically chooses the NIC to be used for CSV communication. We will change this later.
  • This traffic is not routable and has to be on the same subnet for all nodes.

Live Migration: 1 network card

  • Disable NetBIOS on this interface.
  • Make sure that this NIC is NOT set to register its connection in DNS.
  • In Failover Cluster Manager, remove the tick for Allow clients to connect through this network (same reasoning as for the CSV network above).
  • By default the cluster automatically chooses the NIC to be used for live migration. You can select multiple networks for LM and give them a preference.


3. Network Prioritization. How does the cluster prioritize its networks?

By default, all internal cluster networks have a metric value starting at 1000 and incrementing by 100. The first internal network which the cluster sees when it first comes online has a metric of 1000, the second has a metric of 1100, and so on.

When you create CSVs, the failover cluster automatically chooses the network that appears to be the best for CSV communication. The lowest metric value designates the network for cluster and CSV traffic. The second lowest value designates the network for live migration. Additional networks with a metric below 10000 will be used as backup networks if the "Cluster & CSV Traffic" or "Live Migration Traffic" networks fail. The lowest network with a value of at least 10000 will be used for "Public Traffic". Consider giving the highest possible values to the networks which you do not want any cluster or public traffic to go through, such as "iSCSI Traffic", so that they are never used, or only used when no other networks at all are available.

To view the networks, their metric values, and if they were automatically or manually configured, run the clustering PowerShell cmdlet:
PS > Get-ClusterNetwork | ft Name, Metric, AutoMetric

To change the value of a network metric, run:
PS > ( Get-ClusterNetwork "Live Migration" ).Metric = 800

If you want the cluster to start automatically assigning the metric setting again for the network named "Live Migration":
PS > ( Get-ClusterNetwork "Live Migration" ).AutoMetric = $true
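
Putting the guidance above together, here is a sketch that pins the metrics explicitly (the network names are examples from this article's design; adjust to your own):

PS > ( Get-ClusterNetwork "CSV" ).Metric = 900
PS > ( Get-ClusterNetwork "Live Migration" ).Metric = 1000
PS > ( Get-ClusterNetwork "iSCSI" ).Metric = 20000
PS > Get-ClusterNetwork | ft Name, Metric, AutoMetric

This gives CSV the lowest metric (cluster and CSV traffic), live migration the second lowest, and pushes iSCSI above 10000 so the cluster never routes cluster or public traffic over it.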

How do you override the network prioritization behavior?

Option 1. Change the network's properties. If you select "Do not allow cluster network communication on this network", then it will not be possible to send any "Cluster & CSV Traffic" or "Live Migration Traffic" through this network, even if the network has the lowest metric values. The cluster will honor this override and find the network with the next lowest value to send this type of traffic:

  1. In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
  2. Under Networks, right-click the network that you want to configure, and then select Properties.
  3. Change the radio buttons or checkboxes as needed.

Option 2 (exclusively for "Live Migration Traffic"):

To configure a cluster network for live migration:

  1. In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
  2. Expand Services and applications.
  3. In the console tree (on the left), select the clustered virtual machine for which you want to configure the network for live migration.
  4. Right-click the virtual machine resource displayed in the center pane (not on the left), and then click Properties.
  5. Click the Network for live migration tab, and select one or more cluster networks to use for live migration. Use the buttons on the right to move the cluster networks up or down to ensure that a private cluster network is the most preferred. The default preference order is as follows: networks that have no default gateway should be located first; networks that are used by cluster shared volumes and cluster traffic should be located last. Live migration will be attempted in the order of the networks specified in the list. If the connection to the destination node using the first network is not successful, the next network in the list is used, until the complete list is exhausted or there is a successful connection to the destination node using one of the networks.

Note: You don't need to perform this action on a per-VM basis. When you configure a network for live migration for a specific virtual machine, the setting is global and therefore applies to all virtual machines.

Some other interesting articles:

http://technet.microsoft.com/en-us/library/dd446679(WS.10).aspx

http://www.hyper-v.nu/archives/hvredevoort/2011/03/windows-server-2008-r2-sp1-and-hp-network-teaming-testing-results/

http://blogs.technet.com/b/vishwa/archive/2011/02/01/tuning-scvmm-for-vdi-deployments.aspx

http://blogs.msdn.com/b/clustering/archive/2011/06/17/10176338.aspx

http://technet.microsoft.com/en-us/library/dd446679.aspx

Hyper-V Backup software : Altaro

June 29, 2011

In January I was contacted by David Vella, CEO of Altaro, to provide some feedback about new Hyper-V backup software.

Altaro Hyper-V Backup works on Windows 2008 R2 (all editions, including Core installations) and should be installed on the Hyper-V host, not within the guest.

Yesterday I received a beta copy to test, and I will post my feedback here later. In the meantime, my colleague MVP Hans Vredevoort posted a good review on his blog, with Femi Adegoke's help:

http://www.hyper-v.nu/archives/hvredevoort/2011/05/altaro-hyper-v-backup-review/

Interested? You can download the installer here: http://www.altaro.com/hyper-v-backup/. The installer is only 14 MB.


Plan your organization’s migration to a private cloud with the Hyper-V Cloud Fast Track Assessment!

June 17, 2011

Use the MAP Toolkit to plan your organization’s migration to a private cloud with the Hyper-V Cloud Fast Track Assessment!

New MAP features allow you to:

  • Build portfolios of web applications and databases to migrate to Windows Azure and SQL Azure.
  • Assess your environment's readiness for Office 365 or Internet Explorer 9.
  • Identify and migrate databases from competing platforms like Oracle and MySQL to Microsoft SQL Server.
  • Consolidate your servers onto Hyper-V Cloud Fast Track infrastructures.

The beta of the MAP Toolkit v6.0 is now available. To get involved in the beta program, visit:

https://connect.microsoft.com/


Windows 7 as Guest OS for VDI: Max Virtual Processors Supported

June 14, 2011

Looking to implement a VDI scenario with Windows 7 as the guest at a 12:1 (VP:LP) ratio? With the launch of SP1 for W2008R2, Microsoft increased the maximum number of running virtual processors (VP) per logical processor (LP) from 8:1 to 12:1 when running Windows 7 as the guest operating system for VDI deployments.

Formula: (Number of processors) * (Number of cores per processor) * (Number of threads per core) * 12
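
As a quick sanity check, the formula (with the 512-VP-per-server cap discussed below the table) can be expressed as a small helper; the function name is just for illustration:

# Sketch: max supported running VPs per server under the 12:1 ratio.
function Get-MaxSupportedVPs {
    param([int]$Processors, [int]$Cores, [int]$Threads, [int]$Ratio = 12)
    [Math]::Min($Processors * $Cores * $Threads * $Ratio, 512)   # Hyper-V R2 cap
}
Get-MaxSupportedVPs -Processors 4 -Cores 6 -Threads 2   # returns 512; raw math gives 576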

Virtual Processor to Logical Processor² Ratio & Totals

| Physical processors | Cores per processor | Threads per core | Max virtual processors supported |
|---|---|---|---|
| 2 | 2 | 2 | 96 |
| 2 | 4 | 2 | 192 |
| 2 | 6 | 2 | 288 |
| 2 | 8 | 2 | 384 |
| 4 | 2 | 2 | 192 |
| 4 | 4 | 2 | 384 |
| 4 | 6 | 2 | 512 (576)¹ |
| 4 | 8 | 2 | 512 (768)¹ |

¹ Remember that Hyper-V R2 supports a maximum of 512 running virtual processors per server, so while the math exceeds 512, these configurations hit the 512 limit.

² A logical processor can be a core or a thread, depending on the physical processor.

  • If a core provides a single thread (a 1:1 relationship), then a logical processor = core.
  • If a core provides two threads per core (a 2:1 relationship), then each thread is a logical processor.

More info:
http://technet.microsoft.com/en-us/library/ee405267%28WS.10%29.aspx
http://blogs.technet.com/b/virtualization/archive/2011/04/25/hyper-v-vm-density-vp-lp-ratio-cores-and-threads.aspx

SCVMM 2012 Management Ports and Protocols, Detailed

June 7, 2011

Here is the list of ports/protocols for the new SCVMM 2012.

| From | To | Protocol | Default port | Where to change port setting |
|---|---|---|---|---|
| VMM management server | P2V source agent (control channel) | DCOM | 135 | |
| VMM management server | Load Balancer | HTTP/HTTPS | 80/443 | Load balancer configuration provider |
| VMM management server | WSUS server (data channel) | HTTP/HTTPS | 80/8530 (non-SSL), 443/8531 (with SSL) | These are the IIS ports bound to WSUS; they cannot be changed from VMM |
| VMM management server | WSUS server (control channel) | HTTP/HTTPS | 80/8530 (non-SSL), 443/8531 (with SSL) | These are the IIS ports bound to WSUS; they cannot be changed from VMM |
| VMM management server | VMM agent on Windows Server–based host (data channel for file transfers) | HTTPS (using BITS) | 443 (maximum value: 32768) | |
| VMM management server | Citrix XenServer host (customization data channel) | iSCSI | 3260 | On XenServer in transfer VM |
| VMM management server | XenServer host (control channel) | HTTPS | 5989 | On XenServer host in: /opt/cimserver/cimserver_planned.conf |
| VMM management server | Remote Microsoft SQL Server database | TDS | 1433 | |
| VMM management server | VMM agent on Windows Server–based host (control channel) | WS-Management | 5985 | VMM setup |
| VMM management server | VMM agent on Windows Server–based host (control channel – SSL) | WS-Management | 5986 | |
| VMM management server | In-guest agent (VMM to virtual machine control channel) | WS-Management | 5985 | |
| VMM management server | Storage Management Service | WMI | Local call | |
| VMM management server | Cluster PowerShell interface | PowerShell | n/a | |
| VMM management server | P2V source agent (data channel) | BITS | User-defined | P2V cmdlet option |
| VMM management server | VMware ESX Server 3i hosts | HTTPS | 443 | |
| VMM library server | Hosts (file transfer) | BITS | 443 (maximum value: 32768) | VMM setup |
| VMM host | VMM host (file transfer) | BITS | 443 (maximum value: 32768) | |
| VMM Self-Service Portal | VMM Self-Service Portal web server | HTTPS | 443 | VMM setup |
| VMM Self-Service Portal web server | VMM management server | WCF | 8100 | VMM setup |
| Console connections (RDP) | Virtual machines through Hyper-V hosts (VMConnect) | RDP | 2179 | VMM console |
| Remote Desktop | Virtual machines | RDP | 3389 | On the virtual machine |
| VMM console | VMM management server | WCF | 8100 | VMM setup |
| VMM console | VMM management server (HTTPS) | WCF | 8101 | VMM setup |
| VMM console | VMM management server (NET.TCP) | WCF | 8102 | VMM setup |
| VMM console | VMM management server (HTTP) | WCF | 8103 | VMM setup |
| Windows PE agent | VMM management server (control channel) | WCF | 8101 | VMM setup |
| Windows PE agent | VMM management server (time sync) | WCF | 8103 | VMM setup |
| WDS provider | VMM management server | WCF | 8102 | VMM setup |
| Storage Management Service | SMI-S Provider | CIM-XML | Provider-specific port | |

Others

| Connection type | Protocol | Default port | Where to change port setting |
|---|---|---|---|
| OOB Connection – SMASH over WS-Man | HTTPS | 443 | On BMC |
| OOB Connection IPMI | IPMI | 623 | On BMC |
| BITS port for VMM transfers (data channel) | BITS | 443 | VMM setup |
| SFTP file transfer from VMware ESX Server 3.0 and VMware ESX Server 3.5 hosts | SFTP | 22 | |
| VMware Web Services communication | HTTPS | 443 | VMM console |

Note: When you install the VMM management server, you can assign some of the ports that it will use for communications and file transfers between the VMM components.
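
If the Windows Firewall is enabled on a managed host, the relevant inbound ports above must also be opened there. Here is a hedged sketch using netsh, which works on Windows Server 2008 R2 where the newer firewall cmdlets are not available; the rule names and this particular selection of ports are illustrative only:

# Sketch: open a few default inbound VMM ports on a host (illustrative).
$ports = @{ "VMM WS-Man control channel" = 5985
            "VMM BITS file transfers"    = 443
            "VMM WCF console"            = 8100 }
foreach ($name in $ports.Keys) {
    netsh advfirewall firewall add rule name="$name" dir=in action=allow protocol=TCP localport=$($ports[$name])
}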

Hyper-V: Supported Server Guest Operating Systems (Updated May 2011)

May 16, 2011


The following table lists the server operating systems that are supported for use as a guest operating system on a virtual machine.

| Server guest operating system | Editions | Virtual processors |
|---|---|---|
| Windows Server 2008 R2 with Service Pack 1 | Standard, Enterprise, Datacenter, and Web editions | 1, 2, or 4 |
| Windows Server 2008 R2 | Standard, Enterprise, Datacenter, and Windows Web Server 2008 R2 | 1, 2, or 4 |
| Windows Server 2008 | Standard, Standard without Hyper-V, Enterprise, Enterprise without Hyper-V, Datacenter, Datacenter without Hyper-V, Windows Web Server 2008, and HPC Edition | 1, 2, or 4 |
| Windows Server 2003 R2 with Service Pack 2 | Standard, Enterprise, Datacenter, and Web | 1 or 2 |
| Windows Home Server 2011 | Standard | 1 |
| Windows Storage Server 2008 R2 | Essentials | 1 |
| Windows Small Business Server 2011 | Essentials | 1 or 2 |
| Windows Small Business Server 2011 | Standard | 1, 2, or 4 |
| Windows Server 2003 R2 x64 Edition with Service Pack 2 | Standard, Enterprise, and Datacenter | 1 or 2 |
| Windows Server 2003 with Service Pack 2 | Standard, Enterprise, Datacenter, and Web | 1 or 2 |
| Windows Server 2003 x64 Edition with Service Pack 2 | Standard, Enterprise, and Datacenter | 1 or 2 |
| CentOS 5.2 through 5.6 (NEW) | x86 edition and x64 edition | 1, 2, or 4 |
| Red Hat Enterprise Linux 5.6 | x86 edition and x64 edition | 1, 2, or 4 |
| Red Hat Enterprise Linux 5.5 | x86 edition and x64 edition | 1, 2, or 4 |
| Red Hat Enterprise Linux 5.4 | x86 edition and x64 edition | 1, 2, or 4 |
| Red Hat Enterprise Linux 5.3 | x86 edition and x64 edition | 1, 2, or 4 |
| Red Hat Enterprise Linux 5.2 | x86 edition and x64 edition | 1, 2, or 4 |
| SUSE Linux Enterprise Server 11 with Service Pack 1 | x86 edition and x64 edition | 1, 2, or 4 |
| SUSE Linux Enterprise Server 10 with Service Pack 4 | x86 edition and x64 edition | 1, 2, or 4 |

Note: Support for Windows 2000 Server and Windows XP with Service Pack 2 (x86) ended on July 13, 2010.

Source : http://technet.microsoft.com/en-us/library/cc794868(WS.10).aspx

Hyper-V: Virtual Hard Disks. Benefits of Fixed Disks

March 31, 2011


When creating a virtual machine, you can select either virtual hard disks or physical disks that are directly attached to the virtual machine.

My personal advice, and what I have seen from Microsoft folks, is to always use FIXED DISKS in production environments, even with the release of Windows Server 2008 R2, one of whose enhancements was improved performance for dynamic VHD files.

The explanation and benefits are simple:

1. Almost the same performance as pass-through disks.

2. Portability: you can move/copy the VHD.

3. Backup: you can back up at the VHD level and, better, using DPM you can restore at the item level (how cool is that!).

4. You can have snapshots.

5. Fixed-size VHD performance has been on par with physical disks since Windows Server 2008/Hyper-V.

If you use pass-through disks you lose all of the benefits of VHD files, such as portability, snapshotting, and thin provisioning. Considering these trade-offs, pass-through disks should really only be considered if you require a disk that is greater than 2 TB in size, or if your application is I/O bound and you really could benefit from another 0.1 ms shaved off your average response time.
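
As a side note, on hosts with the Hyper-V PowerShell module (Windows Server 2012 and later, so an assumption beyond the R2-era tooling this post targets), a fixed disk can be created directly; the path and size are hypothetical:

# Sketch: create a fixed VHD. -Fixed pre-allocates the full file up front,
# which is also why creating a large fixed VHD takes a while (see the cons below).
New-VHD -Path "C:\VMs\data01.vhd" -SizeBytes 100GB -Fixed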

 Disks Summary table:

Pass-through disk

Pros:
  • Fastest performance.
  • Simplest storage path, because the file system on the host is not involved.
  • Better alignment under SAN.
  • Lower CPU utilization.
  • Supports very large disks.

Cons:
  • VM snapshots cannot be taken.
  • The disk is used exclusively and directly by a single virtual machine.
  • Pass-through disks cannot be backed up by the Hyper-V VSS writer or any backup program that uses the Hyper-V VSS writer.

Fixed-size VHD

Pros:
  • Highest performance of all VHD types.
  • Simplest VHD file format, giving the best I/O alignment.
  • More robust than dynamic or differencing VHDs due to the lack of block allocation tables (i.e. a redirection layer).
  • A file-based storage container has more management advantages than a pass-through disk.
  • Expanding is available to increase the capacity of the VHD.
  • No risk of the underlying volume running out of space during VM operations.

Cons:
  • Up-front space allocation may increase the storage cost when a large number of fixed VHDs are deployed.
  • Creating a large fixed VHD is time-consuming.
  • Shrinking the virtual capacity (i.e. reducing the virtual size) is not possible.

Dynamically expanding or differencing VHD

Pros:
  • Good performance.
  • Quicker to create than a fixed-size VHD.
  • Grows dynamically to save disk space and provide efficient storage usage.
  • The smaller VHD file size makes it more nimble in terms of transporting across the network.
  • Blocks of full zeros will not get allocated, which saves space under certain circumstances.
  • A compact operation is available to reduce the actual physical file size.

Cons:
  • Interleaving of metadata and data blocks may cause I/O alignment issues.
  • Write performance may suffer while the VHD expands.
  • Dynamically expanding and differencing VHDs cannot exceed 2040 GB.
  • The VM may get paused, or the VHD yanked out, if disk space runs out due to the dynamic growth.
  • Shrinking the virtual capacity is not supported.
  • Expanding is not available for differencing VHDs due to the inherent size limitation of the parent disk.
  • Defragmentation is not recommended due to the inherent redirection layer.