Posts Tagged ‘Links to Virtualisation’

Hyper-V : Best Practices and Supported scenarios regarding Exchange Server 2010

May 19, 2011 Leave a comment

 The following scenarios are supported for Exchange 2010 SP1:

  • The Unified Messaging server role is supported in a virtualized environment.
  • Combining Exchange 2010 high availability solutions (database availability groups, or DAGs) with hypervisor-based clustering, high availability, or migration solutions that move or automatically fail over mailbox servers that are members of a DAG between clustered root servers is supported.

Hyper-V Guest Configuration

Keep in mind that because there are no routines within Exchange Server that test for a virtualized platform, Exchange Server behaves no differently programmatically on a virtualized platform than it does on a physical platform.

Determining Exchange Server Role Virtual Machine Locations

When determining Exchange Server Role virtual machine locations, consider the following general best practices:

  • Deploy the same Exchange roles across multiple physical server roots (to allow for load balancing and high availability).
  • Never deploy Mailbox servers that are members of the same Database Availability Groups (DAGs) on the same root.
  • Never deploy all the Client Access Servers on the same root.
  • Never deploy all the Hub Transport servers on the same root.
  • Determine the workload requirements for each server and balance the workload across the Hyper-V guest virtual machines.

Guest Storage

Each Exchange guest virtual machine must be allocated sufficient storage space on the root machine for the fixed disk that contains the guest’s operating system, any temporary memory storage files in use, and related virtual machine files that are hosted on the root machine. Consider the following best practices when configuring Hyper-V guests:

  • Fixed VHDs are recommended for the virtual operating system.
  • Allow a minimum of a 15-GB disk for the operating system, plus additional space for the paging file, management software, and crash recovery (dump) files; then add the Exchange server role space requirements.
  • Storage used by Exchange should be hosted on disk spindles separate from the storage that hosts the guest virtual machine’s operating system.
  • For Hub Transport servers, correctly provision the disk space needed for the message queue database and logging operations.
  • For Mailbox servers, correctly provision the disk space needed for databases, transaction logs, the content index, and other logging operations.
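
The OS-disk guidance above can be sketched as a small sizing calculator. This is a minimal sketch under my own assumptions: the paging file plus dump space is sized at roughly 1x guest RAM, and 5 GB is an illustrative placeholder for management software; neither figure comes from the article.

```python
# Rough OS-disk sizing helper for an Exchange guest VM.
# Assumptions (not from the article): paging file + crash dump ~= 1x guest
# RAM; "mgmt_gb" is an illustrative placeholder for management software.

MIN_OS_GB = 15  # the article's minimum for the operating system disk


def guest_os_disk_gb(guest_ram_gb, role_space_gb, mgmt_gb=5):
    """Return a rough fixed-VHD size (GB) for the guest OS volume."""
    paging_and_dump_gb = guest_ram_gb  # assumed ~1x RAM
    return MIN_OS_GB + paging_and_dump_gb + mgmt_gb + role_space_gb


# e.g. a Mailbox role guest with 16 GB RAM and 40 GB of role files
print(guest_os_disk_gb(16, 40))  # 15 + 16 + 5 + 40 = 76
```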

Guest Memory: Dynamic Memory should be disabled

Memory must be sized for guest virtual machines using the same methods as physical computer deployments. Exchange—like many server applications that have optimizations for performance that involve caching of data in memory—is susceptible to poor system performance and an unacceptable client experience if it doesn’t have full control over the memory allocated to the physical computer or virtual machine on which it is running.
Many of the performance gains in recent versions of Exchange, especially those related to reduction in input/output (I/O) are based on highly efficient usage of large amounts of memory. When that memory is no longer available, the expected performance of the system can’t be achieved. For this reason, memory oversubscription or dynamic adjustment of virtual machine memory must be disabled for production Exchange servers.
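
The no-oversubscription rule can be expressed as a simple pre-deployment check: the sum of statically assigned guest memory plus a reserve for the root must fit in physical RAM. The function name, interface, and the 4-GB root reserve are my own illustrative assumptions, not Microsoft guidance.

```python
# Sketch of a pre-deployment check against memory oversubscription on a
# Hyper-V root. The 4 GB root reserve is an assumed, illustrative figure.

def validate_memory(root_ram_gb, guest_ram_gbs, root_reserve_gb=4):
    """True if all statically sized guests fit in physical RAM
    while leaving the root its reserve."""
    return sum(guest_ram_gbs) + root_reserve_gb <= root_ram_gb


print(validate_memory(64, [16, 16, 16]))  # True: 48 + 4 <= 64
print(validate_memory(64, [32, 32]))      # False: 64 + 4 > 64
```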

Deployment Recommendations

When designing an Exchange Server 2010 virtualized environment, the core Exchange design principles apply. The environment must be designed for the correct performance, reliability, and capacity requirements. Design considerations such as examining usage profiles, message profiles, and so on must still be taken into account.

See this article (Mailbox Storage Design Process) as a starting point when considering a high availability solution that uses DAGs.

Because virtualization provides the flexibility to make changes to the design of the environment later, some organizations might be tempted to spend less time on their design at the outset. As a best practice, spend adequate time designing the environment to avoid pitfalls later.

Group the Exchange Server roles in a way that balances workloads across the root servers. Mixing different roles on the same Hyper-V root server balances the workload and prevents a single physical resource from being unduly stressed, which can happen when identical roles are concentrated on the same host.
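
The placement rules described above (spread role VMs across roots, never co-locate members of the same DAG) can be sketched as a toy placement algorithm. This is my own illustrative code, not an Exchange or Hyper-V feature; all names are invented, and it assumes there are always enough roots to satisfy the DAG constraint.

```python
# Toy round-robin placement of Exchange role VMs across Hyper-V roots,
# with an anti-affinity rule: no two members of one DAG on the same root.

def place_vms(vms, roots):
    """vms: list of (vm_name, dag_name_or_None); roots: list of root names.
    Returns {root: [vm names]}. Raises ValueError if no root can satisfy
    the DAG constraint."""
    placement = {r: [] for r in roots}
    dag_on_root = {r: set() for r in roots}
    for name, dag in vms:
        # candidate roots: those not already hosting a member of this DAG
        candidates = [r for r in roots if dag is None or dag not in dag_on_root[r]]
        if not candidates:
            raise ValueError(f"no root available for {name} in {dag}")
        # pick the least-loaded candidate (ties broken by list order)
        root = min(candidates, key=lambda r: len(placement[r]))
        placement[root].append(name)
        if dag is not None:
            dag_on_root[root].add(dag)
    return placement


print(place_vms([("MBX1", "DAG1"), ("MBX2", "DAG1"), ("CAS1", None)],
                ["Root1", "Root2"]))
# {'Root1': ['MBX1', 'CAS1'], 'Root2': ['MBX2']}
```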

The updated support guidance applies to any hardware virtualization vendor participating in the Windows Server Virtualization Validation Program (SVVP).

See the “Best Practices for Virtualizing Exchange Server 2010 with Windows Server 2008 R2 Hyper-V” whitepaper. It provides technical guidance on Exchange server roles, capacity planning, sizing and performance, and high availability best practices.

Complete system requirements for Exchange Server 2010 running under hardware virtualization software can be found in Exchange 2010 System Requirements. Also, the support policy for Microsoft software running in non-Microsoft hardware virtualization software can be found here.

CentOS now has official support as a guest VM in Hyper-V

May 18, 2011 2 comments

Effective immediately, Microsoft supports running CentOS as a guest on Windows Server 2008 R2 Hyper-V.

CentOS is a popular Linux distribution for hosters, and this was the number one interoperability requirement we heard from that community.

This development will enable Microsoft hosting partners to consolidate their mixed Windows and Linux infrastructure on Windows Server Hyper-V, reducing cost and complexity while betting on an enterprise-class virtualization platform.

How will support work?
Call Microsoft CSS. Support will cover installation issues as well as configuration issues.

What version of the Linux Integration Services supports CentOS?

 The existing Hyper-V Linux Integration Services for Linux Version 2.1 supports CentOS. The following features are included in the Hyper-V Linux Integration Services 2.1 release:

  • Symmetric Multi-Processing (SMP) Support: Supported Linux distributions can use up to 4 virtual processors (VPs) per virtual machine.
  • Driver support for synthetic devices: Linux Integration Services supports the synthetic network controller and the synthetic storage controller that were developed specifically for Hyper-V.
  • Fastpath Boot Support for Hyper-V: Boot devices take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.
  • Timesync: The clock inside the virtual machine remains synchronized with the clock on the host.
  • Integrated Shutdown: Virtual machines running Linux can be gracefully shut down from either Hyper-V Manager or System Center Virtual Machine Manager.
  • Heartbeat: Allows the host to detect whether the guest is running and responsive.
  • Pluggable Time Source: A pluggable clock source module is included to provide a more accurate time source to the guest.

The Linux Integration Services are available via the Microsoft Download Center here:

 From Wikipedia:

CentOS is a community-supported, mainly free software operating system based on Red Hat Enterprise Linux (RHEL). It exists to provide a free enterprise class computing platform and strives to maintain 100% binary compatibility with its upstream distribution. CentOS stands for Community ENTerprise Operating System.

Red Hat Enterprise Linux is available only through a paid subscription service that provides access to software updates and varying levels of technical support. The product is largely composed of software packages distributed under either an open source or a free software license and the source code for these packages is made public by Red Hat.

CentOS developers use Red Hat’s source code to create a final product very similar to Red Hat Enterprise Linux. Red Hat’s branding and logos are changed because Red Hat does not allow them to be redistributed.

CentOS is available free of charge. Technical support is primarily provided by the community via official mailing lists, web forums, and chat rooms. The project is not affiliated with Red Hat and thus receives no financial or logistical support from the company; instead, the CentOS Project relies on donations from users and organizational sponsors.

Hyper-V : Supported Server Guest Operating Systems. Updated May 2011

May 16, 2011 Leave a comment


 The following table lists the server guest operating systems that are supported for use on a virtual machine as a guest operating system.

Server guest operating system | Editions | Virtual processors
Windows Server 2008 R2 with Service Pack 1 | Standard, Enterprise, Datacenter, and Web editions | 1, 2, or 4
Windows Server 2008 R2 | Standard, Enterprise, Datacenter, and Windows Web Server 2008 R2 | 1, 2, or 4
Windows Server 2008 | Standard, Standard without Hyper-V, Enterprise, Enterprise without Hyper-V, Datacenter, Datacenter without Hyper-V, Windows Web Server 2008, and HPC Edition | 1, 2, or 4
Windows Server 2003 R2 with Service Pack 2 | Standard, Enterprise, Datacenter, and Web | 1 or 2
Windows Home Server 2011 | Standard | 1
Windows Storage Server 2008 R2 | Essentials | 1
Windows Small Business Server 2011 | Essentials | 1 or 2
Windows Small Business Server 2011 | Standard | 1, 2, or 4
Windows Server 2003 R2 x64 Edition with Service Pack 2 | Standard, Enterprise, and Datacenter | 1 or 2
Windows Server 2003 with Service Pack 2 | Standard, Enterprise, Datacenter, and Web | 1 or 2
Windows Server 2003 x64 Edition with Service Pack 2 | Standard, Enterprise, and Datacenter | 1 or 2
CentOS 5.2 through 5.6 (NEW) | x86 edition and x64 edition | 1, 2, or 4
Red Hat Enterprise Linux 5.6 | x86 edition and x64 edition | 1, 2, or 4
Red Hat Enterprise Linux 5.5 | x86 edition and x64 edition | 1, 2, or 4
Red Hat Enterprise Linux 5.4 | x86 edition and x64 edition | 1, 2, or 4
Red Hat Enterprise Linux 5.3 | x86 edition and x64 edition | 1, 2, or 4
Red Hat Enterprise Linux 5.2 | x86 edition and x64 edition | 1, 2, or 4
SUSE Linux Enterprise Server 11 with Service Pack 1 | x86 edition and x64 edition | 1, 2, or 4
SUSE Linux Enterprise Server 10 with Service Pack 4 | x86 edition and x64 edition | 1, 2, or 4
 Note: Support for Windows 2000 Server and Windows XP with Service Pack 2 (x86) ended on July 13, 2010.

Source :

MS Virtualization for VMware Pros : Jump Start

April 28, 2011 Leave a comment

Exclusive Jump Start virtual training event – “Microsoft Virtualization for VMware Professionals”  FREE – on TechNet Edge

Where do I go for this great training?

The HD-quality video recordings of this course are on TechNet Edge. If you’re interested in one specific topic, I’ve included links to each module as well.

  • Entire course on TechNet Edge: “Microsoft Virtualization for VMware Professionals” Jump Start
    ◦ Virtualization Jump Start (01): Virtualization Overview
    ◦ Virtualization Jump Start (02): Differentiating Microsoft & VMware
    ◦ Virtualization Jump Start (03a): Hyper-V Deployment Options & Architecture | Part 1
    ◦ Virtualization Jump Start (03b): Hyper-V Deployment Options & Architecture | Part 2
    ◦ Virtualization Jump Start (04): High-Availability & Clustering
    ◦ Virtualization Jump Start (05): System Center Suite Overview with focus on DPM
    ◦ Virtualization Jump Start (06): Automation with Opalis, Service Manager & PowerShell
    ◦ Virtualization Jump Start (07): System Center Virtual Machine Manager 2012
    ◦ Virtualization Jump Start (08): Private Cloud Solutions, Architecture & VMM Self-Service Portal 2.0
    ◦ Virtualization Jump Start (09): Virtual Desktop Infrastructure (VDI) Architecture | Part 1
    ◦ Virtualization Jump Start (10): Virtual Desktop Infrastructure (VDI) Architecture | Part 2
    ◦ Virtualization Jump Start (11): v-Alliance Solution Overview
    ◦ Virtualization Jump Start (12): Application Delivery for VDI
  • Links to course materials on Born to Learn

Hyper-V R2 and the right number of physical NICs

April 27, 2011 1 comment

When it comes to network configuration, be sure to provide the right number of physical network adapters on Hyper-V servers. Failure to configure enough network connections can make it appear as though you have a storage problem, particularly when using iSCSI.

Recommendation for network configuration (number of dedicated physical NICs):

  • 1 for management. Microsoft recommends a dedicated network adapter for Hyper-V server management.
  • At least 1 for virtual machines. Virtual network configurations of the external type require a minimum of one network adapter.
  • 2 for iSCSI. Microsoft recommends that IP storage communication have a dedicated network, so one adapter is required and two or more are necessary to support multipathing.
  • At least 1 for failover clustering. A Windows® failover cluster requires a private network.
  • 1 for live migration. This new Hyper-V R2 feature supports the migration of running virtual machines between Hyper-V servers. Microsoft recommends configuring a dedicated physical network adapter for live migration traffic. This network should be separate from the network for private communication between the cluster nodes, from the network for the virtual machines, and from the network for storage.
  • 1 for Cluster Shared Volumes. Microsoft recommends a dedicated network to support the communications traffic created by this new Hyper-V R2 feature. In the network adapter properties, Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks must be enabled to support SMB.
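
The recommendations above add up to a surprising number of ports, so a quick tally can help when specifying hardware. The feature names in this sketch are my own; the per-feature counts follow the list above (treating the “at least 1” items as their minimum of one).

```python
# Tally of dedicated physical NICs recommended for a clustered Hyper-V R2
# host using iSCSI. Feature names are illustrative; counts follow the
# article's list, taking the minimums for "at least 1" items.

NIC_RECOMMENDATION = {
    "management": 1,
    "virtual_machines": 1,   # minimum for an external virtual network
    "iscsi_multipath": 2,    # two or more to support multipathing
    "failover_cluster": 1,   # private cluster network
    "live_migration": 1,
    "csv": 1,                # Cluster Shared Volumes traffic
}


def required_nics(features):
    """Minimum dedicated NICs for the requested feature set."""
    return sum(NIC_RECOMMENDATION[f] for f in features)


print(required_nics(NIC_RECOMMENDATION))  # full clustered iSCSI host: 7
```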

Some interesting notes when comparing FC with iSCSI:

  • iSCSI and FC delivered comparable throughput performance irrespective of the load on the system.
  • iSCSI used approximately 3-5 percentage points more Hyper-V R2 CPU resources than FC to achieve comparable performance.

For information about the network traffic that can occur on a network used for Cluster Shared Volumes, see “Understanding redirected I/O mode in CSV communication” in Requirements for Using Cluster Shared Volumes in a Failover Cluster in Windows Server 2008 R2.

For more information on the network used for CSV communication, see Managing the network used for Cluster Shared Volumes.

It is not recommended that you use the same network adapter for virtual machine access and management.
If you are limited by the number of network adapters, configure a virtual local area network (VLAN) to isolate traffic. VLAN recommendations include 802.1Q and 802.1p.

Hyper-V : Virtual Hard Disks. Benefits of Fixed disks

March 31, 2011 5 comments


When creating a Virtual Machine, you can select to use either virtual hard disks or physical disks that are directly attached to a virtual machine.

My personal advice, and what I have seen from Microsoft folks, is to always use FIXED DISKS in production environments, even with the release of Windows Server 2008 R2, one of whose enhancements was improved performance of dynamic VHD files.

The explanation and benefits are simple:

 1. Almost the same performance as pass-through disks.

 2. Portability: you can move or copy the VHD.

 3. Backup: you can back up at the VHD level and, better still, using DPM you can restore at the item level (how cool is that!).

 4. You can take snapshots.

 5. Fixed sized VHD performance has been on par with physical disks since Windows Server 2008/Hyper-V.

 If you use pass-through disks you lose all of the benefits of VHD files, such as portability, snapshotting, and thin provisioning. Considering these trade-offs, pass-through disks should really only be considered if you require a disk greater than 2 TB in size or if your application is I/O bound and could genuinely benefit from another 0.1 ms shaved off its average response time.
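
That decision rule can be captured in a few lines. The 2 TB threshold follows the text above; the function name and interface are illustrative, not an official rule.

```python
# Decision sketch for Hyper-V storage: prefer a fixed VHD unless the
# volume exceeds ~2 TB or the workload is genuinely I/O bound, per the
# trade-offs discussed above. Illustrative code, not official guidance.

TWO_TB_GB = 2048  # the article's ~2 TB threshold, expressed in GB


def choose_disk(size_gb, io_bound=False):
    """Return the suggested storage container for a guest volume."""
    if size_gb > TWO_TB_GB or io_bound:
        return "pass-through"
    return "fixed VHD"


print(choose_disk(500))                  # fixed VHD
print(choose_disk(3000))                 # pass-through (too big for a VHD)
print(choose_disk(500, io_bound=True))   # pass-through
```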

 Disks summary table:

Pass-through Disk

Pros:
  • Fastest performance.
  • Simplest storage path because the file system on the host is not involved.
  • Better alignment under SAN.
  • Lower CPU utilization.
  • Supports very large disks.

Cons:
  • VM snapshots cannot be taken.
  • The disk is used exclusively and directly by a single virtual machine.
  • Pass-through disks cannot be backed up by the Hyper-V VSS writer or any backup program that uses the Hyper-V VSS writer.

Fixed sized VHD

Pros:
  • Highest performance of all VHD types.
  • Simplest VHD file format, giving the best I/O alignment.
  • More robust than dynamic or differencing VHDs due to the lack of a block allocation table (i.e., redirection layer).
  • As a file-based storage container, it has more management advantages than a pass-through disk.
  • Expanding is available to increase the capacity of the VHD.
  • No risk of the underlying volume running out of space during VM operations.

Cons:
  • Up-front space allocation may increase storage cost when a large number of fixed VHDs are deployed.
  • Creating a large fixed VHD is time-consuming.
  • Shrinking the virtual capacity (i.e., reducing the virtual size) is not possible.

Dynamically expanding or Differencing VHD

Pros:
  • Good performance.
  • Quicker to create than a fixed sized VHD.
  • Grows dynamically to save disk space and provide efficient storage usage.
  • A smaller VHD file size makes it more nimble to transport across the network.
  • Blocks of full zeros are not allocated, which saves space under certain circumstances.
  • A compact operation is available to reduce the actual physical file size.

Cons:
  • Interleaving of metadata and data blocks may cause I/O alignment issues.
  • Write performance may suffer while the VHD is expanding.
  • Dynamically expanding and differencing VHDs cannot exceed 2040 GB.
  • The VM may be paused, or the VHD yanked out, if disk space runs out due to dynamic growth.
  • Shrinking the virtual capacity is not supported.
  • Expanding is not available for differencing VHDs due to the inherent size limitation of the parent disk.
  • Defragmentation is not recommended due to the inherent redirection layer.

SCVMM 2012: Private Cloud Management. Got it!?

March 23, 2011 Leave a comment

It is a great pleasure to see how far Microsoft SCVMM has come with SCVMM 2012.
Believe me, it is a whole new product.
So, if you are serious about private cloud management, this is the product to look into.

• Fabric Management
  ◦ Hyper-V and Cluster Lifecycle Management – deploy Hyper-V to bare metal servers, create Hyper-V clusters, orchestrate patching of a Hyper-V cluster
  ◦ Third Party Virtualization Platforms – add and manage Citrix XenServer and VMware ESX hosts and clusters
  ◦ Network Management – manage IP address pools, MAC address pools, and load balancers
  ◦ Storage Management – classify storage, manage storage pools and LUNs

• Resource Optimization
  ◦ Dynamic Optimization – proactively balance the load of VMs across a cluster
  ◦ Power Optimization – schedule power savings to use the right number of hosts to run your workloads, and power the rest off until they are needed
  ◦ PRO – integrate with System Center Operations Manager to respond to application-level performance monitors

• Cloud Management
  ◦ Abstract server, network, and storage resources into private clouds
  ◦ Delegate access to private clouds with control of capacity, capabilities, and user quotas
  ◦ Enable self-service usage for application administrators to author, deploy, manage, and decommission applications in the private cloud

• Service Lifecycle Management
  ◦ Define service templates to create sets of connected virtual machines, OS images, and application packages
  ◦ Compose operating system images and applications during service deployment
  ◦ Scale out the number of virtual machines in a service
  ◦ Monitor service performance and health, integrated with System Center Operations Manager
  ◦ Decouple OS image and application updates through image-based servicing
  ◦ Leverage powerful application virtualization technologies such as Server App-V

Note: The SCVMM 2012 Beta is NOT supported in production environments.
Download SCVMM 2012 Beta Now