Archive

Posts Tagged ‘Private Cloud’

Plan your organization’s migration to a private cloud with the Hyper-V Cloud Fast Track Assessment!

June 17, 2011

Use the MAP Toolkit to plan your organization’s migration to a private cloud with the Hyper-V Cloud Fast Track Assessment!

New MAP features allow you to:

  • Build portfolios of web applications and databases to migrate to Windows Azure and SQL Azure.
  • Assess your environment’s readiness for Office 365 or Internet Explorer 9.
  • Identify and migrate databases from competing platforms like Oracle and MySQL to Microsoft SQL Server.
  • Consolidate your servers onto Hyper-V Cloud Fast Track infrastructures.

The beta of the MAP Toolkit v6.0 is now available. To get involved in the beta program, visit:

https://connect.microsoft.com/

 

 


Windows 7 as Guest OS for VDI: Max Virtual Processors Supported

June 14, 2011

Looking to implement a VDI scenario with Windows 7 as the guest at a 12:1 (VP:LP) ratio? With the launch of SP1 for Windows Server 2008 R2, Microsoft increased the maximum number of running virtual processors (VP) per logical processor (LP) from 8:1 to 12:1 when running Windows 7 as the guest operating system for VDI deployments.

Formula: (Number of processors) * (Number of cores) * (Number of threads per core) * 12

Virtual Processor to Logical Processor [2] Ratio & Totals

Physical Processors | Cores per processor | Threads per core | Max Virtual Processors Supported
2 | 2 | 2 | 96
2 | 4 | 2 | 192
2 | 6 | 2 | 288
2 | 8 | 2 | 384
4 | 2 | 2 | 192
4 | 4 | 2 | 384
4 | 6 | 2 | 512 (576) [1]
4 | 8 | 2 | 512 (768) [1]

[1] Remember that Hyper-V R2 supports a maximum of 512 running virtual processors per server, so although the math exceeds 512, these configurations are capped at 512 running virtual processors per server.

[2] A logical processor can be a core or a thread, depending on the physical processor:

  • If a core provides a single thread (a 1:1 relationship), then a logical processor = core.
  • If a core provides two threads per core (a 2:1 relationship), then each thread is a logical
    processor.
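
As a quick sanity check of the formula and table above, here is a minimal Python sketch that multiplies logical processors by the 12:1 ratio and applies the 512 running-VP-per-server cap. The function name and the sample hardware configurations are illustrative only, not part of any Microsoft tooling.

```python
# Minimal sketch: apply the formula above and the 512 running-VP-per-server cap.

VP_PER_LP = 12     # 12:1 VP:LP ratio for Windows 7 VDI guests on Hyper-V R2 SP1
SERVER_CAP = 512   # Hyper-V R2 maximum of running virtual processors per server

def max_virtual_processors(processors: int, cores_per_processor: int, threads_per_core: int) -> int:
    """(processors * cores * threads) * 12, capped at 512 running VPs per server."""
    logical_processors = processors * cores_per_processor * threads_per_core
    return min(logical_processors * VP_PER_LP, SERVER_CAP)

if __name__ == "__main__":
    # Reproduces the table above; e.g. 4 sockets x 8 cores x 2 threads -> 768, capped to 512.
    for cfg in [(2, 2, 2), (2, 4, 2), (2, 6, 2), (2, 8, 2),
                (4, 2, 2), (4, 4, 2), (4, 6, 2), (4, 8, 2)]:
        print(cfg, "->", max_virtual_processors(*cfg))
```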

More info:
http://technet.microsoft.com/en-us/library/ee405267%28WS.10%29.aspx
http://blogs.technet.com/b/virtualization/archive/2011/04/25/hyper-v-vm-density-vp-lp-ratio-cores-and-threads.aspx

SCVMM 2012 Management ports and protocols, detailed

June 7, 2011

Here is the list of ports and protocols for the new SCVMM 2012.

From | To | Protocol | Default port | Where to change port setting
VMM management server | P2V source agent (control channel) | DCOM | 135 | -
VMM management server | Load balancer | HTTP/HTTPS | 80/443 | Load balancer configuration provider
VMM management server | WSUS server (data channel) | HTTP/HTTPS | 80/8530 (non-SSL), 443/8531 (with SSL) | IIS port binding with WSUS; cannot be changed from VMM
VMM management server | WSUS server (control channel) | HTTP/HTTPS | 80/8530 (non-SSL), 443/8531 (with SSL) | IIS port binding with WSUS; cannot be changed from VMM
VMM management server | VMM agent on Windows Server–based host (data channel for file transfers) | HTTPS (using BITS) | 443 (maximum value: 32768) | -
VMM management server | Citrix XenServer host (customization data channel) | iSCSI | 3260 | On XenServer in transfer VM
VMM management server | XenServer host (control channel) | HTTPS | 5989 | On XenServer host in /opt/cimserver/cimserver_planned.conf
VMM management server | Remote Microsoft SQL Server database | TDS | 1433 | -
VMM management server | VMM agent on Windows Server–based host (control channel) | WS-Management | 5985 | VMM setup
VMM management server | VMM agent on Windows Server–based host (control channel, SSL) | WS-Management | 5986 | -
VMM management server | In-guest agent (VMM to virtual machine control channel) | WS-Management | 5985 | -
VMM management server | Storage Management Service | WMI | Local call | -
VMM management server | Cluster PowerShell interface | PowerShell | n/a | -
VMM management server | P2V source agent (data channel) | BITS | User-defined | P2V cmdlet option
VMM library server | Hosts (file transfer) | BITS | 443 (maximum value: 32768) | VMM setup
VMM host | VMM host (host-to-host file transfer) | BITS | 443 (maximum value: 32768) | -
VMM Self-Service Portal | VMM Self-Service Portal web server | HTTPS | 443 | VMM setup
VMM Self-Service Portal web server | VMM management server | WCF | 8100 | VMM setup
Console connections (RDP) | Virtual machines through Hyper-V hosts (VMConnect) | RDP | 2179 | VMM console
Remote Desktop | Virtual machines | RDP | 3389 | On the virtual machine
VMM console | VMM management server | WCF | 8100 | VMM setup
VMM console | VMM management server (HTTPS) | WCF | 8101 | VMM setup
VMM console | VMM management server (NET.TCP) | WCF | 8102 | VMM setup
VMM console | VMM management server (HTTP) | WCF | 8103 | VMM setup
Windows PE agent | VMM management server (control channel) | WCF | 8101 | VMM setup
Windows PE agent | VMM management server (time sync) | WCF | 8103 | VMM setup
WDS provider | VMM management server | WCF | 8102 | VMM setup
Storage Management Service | SMI-S Provider | CIM-XML | Provider-specific port | -
VMM management server | VMware ESX Server 3i hosts | HTTPS | 443 | -

Others

Connection Type | Protocol | Default port | Where to change port setting
OOB connection – SMASH over WS-Man | HTTPS | 443 | On BMC
OOB connection – IPMI | IPMI | 623 | On BMC
BITS port for VMM transfers (data channel) | BITS | 443 | VMM setup
VMware ESX Server 3.0 and VMware ESX Server 3.5 hosts | SFTP | 22 | -
VMware Web Services communication | HTTPS | 443 | VMM console

Note: When you install the VMM management server, you can assign some of the ports that it will use for communications and file transfers between the VMM components.
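
If you need to confirm that a firewall between two VMM components is not blocking one of these ports, a simple TCP connect test is often enough. The sketch below is a generic check, not part of VMM; the host name and port list are placeholders you would replace with your own.

```python
# Minimal sketch: TCP connect test to verify that a VMM port (e.g. WCF 8100 on the
# management server) is reachable from a given machine. Host name is a placeholder.
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 'vmm-server.contoso.local' is a hypothetical host name; replace with your own.
    for port in (8100, 8101, 8102, 8103, 443):
        state = "open" if port_is_open("vmm-server.contoso.local", port) else "blocked/closed"
        print(f"{port}: {state}")
```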

Hyper-V : Best Practices and Supported scenarios regarding Exchange Server 2010

May 19, 2011

The following are the supported scenarios for Exchange 2010 SP1:

  • The Unified Messaging server role is supported in a virtualized environment.
  • Combining Exchange 2010 high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions that will move or automatically fail over mailbox servers that are members of a DAG between clustered root servers.

Hyper-V Guest Configuration

Keep in mind that because there are no routines within Exchange Server that test for a virtualized platform, Exchange Server behaves no differently programmatically on a virtualized platform than it does on a physical platform.

Determining Exchange Server Role Virtual Machine Locations

When determining Exchange Server Role virtual machine locations, consider the following general best practices:

  • Deploy the same Exchange roles across multiple physical server roots (to allow for load balancing and high availability).
  • Never deploy Mailbox servers that are members of the same Database Availability Groups (DAGs) on the same root.
  • Never deploy all the Client Access Servers on the same root.
  • Never deploy all the Hub Transport servers on the same root.
  • Determine the workload requirements for each server and balance the workload across the Hyper-V guest virtual machines.

Guest Storage

Each Exchange guest virtual machine must be allocated sufficient storage space on the root machine for the fixed disk that contains the guest’s operating system, any temporary memory storage files in use, and related virtual machine files that are hosted on the root machine. Consider the following best practices when configuring Hyper-V guests:

  • Fixed VHDs are recommended for the virtual operating system.
  • Allow a minimum 15-GB disk for the operating system, plus additional space for the paging file, management software, and crash recovery (dump) files; then add the Exchange server role space requirements (a rough sizing sketch follows this list).
  • Storage used by Exchange should be hosted on disk spindles that are separate from the storage that hosts the guest virtual machine’s operating system.
  • For Hub Transport servers, correctly provision the disk space needed for the message queue database and logging operations.
  • For Mailbox servers, correctly provision the disk space needed for databases, transaction logs, the content index, and other logging operations.
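
To make the second bullet concrete, here is a minimal Python sketch of that sizing arithmetic. The rule of thumb used here (page file roughly equal to guest RAM, plus fixed allowances for management software and dump files) is an assumption for illustration, not official Exchange guidance; the function and example values are hypothetical.

```python
# Minimal sketch (assumed rule of thumb, not official guidance): estimate the OS VHD
# size for an Exchange guest as 15 GB base + page file (~ guest RAM) + management
# software + crash dump space, before adding role-specific storage.
def os_vhd_size_gb(guest_ram_gb: int, management_gb: int = 5, dump_gb: int = 10) -> int:
    """Return a rough fixed-VHD size (GB) for the guest operating system volume."""
    base_os_gb = 15                      # minimum recommended for the operating system
    return base_os_gb + guest_ram_gb + management_gb + dump_gb

if __name__ == "__main__":
    # Example: a 16 GB RAM Mailbox server guest -> 15 + 16 + 5 + 10 = 46 GB OS VHD,
    # before adding space for databases, transaction logs, and the content index.
    print(os_vhd_size_gb(16))
```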

Guest Memory: Dynamic Memory should be disabled
Memory must be sized for guest virtual machines using the same methods as physical computer deployments. Exchange—like many server applications that have optimizations for performance that involve caching of data in memory—is susceptible to poor system performance and an unacceptable client experience if it doesn’t have full control over the memory allocated to the physical computer or virtual machine on which it is running.
Many of the performance gains in recent versions of Exchange, especially those related to reduction in input/output (I/O) are based on highly efficient usage of large amounts of memory. When that memory is no longer available, the expected performance of the system can’t be achieved. For this reason, memory oversubscription or dynamic adjustment of virtual machine memory must be disabled for production Exchange servers.

Deployment Recommendations

When designing an Exchange Server 2010 virtualized environment, the core Exchange design principles apply. The environment must be designed for the correct performance, reliability, and capacity requirements. Design considerations such as examining usage profiles, message profiles, and so on must still be taken into account.

See this article (Mailbox Storage Design Process) as a starting point when considering a high availability solution that uses DAGs.

Because virtualization provides the flexibility to make changes to the design of the environment later, some organizations might be tempted to spend less time on their design at the outset. As a best practice, spend adequate time designing the environment to avoid pitfalls later.

Group the Exchange Server roles in a way that balances workloads across the root servers. Mixing different roles on the same Hyper-V root server can balance the workloads and prevent one physical resource from being unduly stressed, compared with placing identical roles together on the same host.

The updated support guidance applies to any hardware virtualization vendor participating in the Windows Server Virtualization Validation Program (SVVP).

See the Best Practices for Virtualizing Exchange Server 2010 with Windows Server 2008 R2 Hyper-V whitepaper. It provides technical guidance on Exchange server roles, capacity planning, sizing and performance, as well as high availability best practices.

Complete system requirements for Exchange Server 2010 running under hardware virtualization software can be found in Exchange 2010 System Requirements. Also, the support policy for Microsoft software running in non-Microsoft hardware virtualization software can be found here.

CentOS now has official support as a guest VM in Hyper-V

May 18, 2011

Effective immediately, Microsoft will support running CentOS on Windows Server 2008 R2 Hyper-V.

CentOS is a popular Linux distribution for Hosters, and this was the number one requirement for interoperability that we heard from that community.

This development will enable MS hosting partners to consolidate their mixed Windows + Linux infrastructure on Windows Server Hyper-V, reducing cost and complexity while betting on an enterprise-class virtualization platform.

How will support work?
Call Microsoft CSS. Support will cover installation issues as well as configuration issues.

Which version of the Linux Integration Services supports CentOS?

 The existing Hyper-V Linux Integration Services for Linux Version 2.1 support CentOS. The following features are included in the Hyper-V Linux Integration Services 2.1 release:

  • Symmetric Multi-Processing (SMP) Support: Supported Linux distributions can use up to 4 virtual processors (VP) per virtual machine.
  • Driver support for synthetic devices: Linux Integration Services supports the synthetic network controller and the synthetic storage controller that were developed specifically for Hyper-V.
  • Fastpath Boot Support for Hyper-V: Boot devices take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.
  • Timesync: The clock inside the virtual machine will remain synchronized with the clock on the host.
  • Integrated Shutdown: Virtual machines running Linux can be gracefully shut down from either Hyper-V Manager or System Center Virtual Machine Manager.
  • Heartbeat: Allows the host to detect whether the guest is running and responsive.
  • Pluggable Time Source: A pluggable clock source module is included to provide a more accurate time source to the guest.

The Linux Integration Services are available via the Microsoft Download Center here: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=eee39325-898b-4522-9b4c-f4b5b9b64551

 From Wikipedia:

CentOS is a community-supported, mainly free software operating system based on Red Hat Enterprise Linux (RHEL). It exists to provide a free enterprise class computing platform and strives to maintain 100% binary compatibility with its upstream distribution. CentOS stands for Community ENTerprise Operating System.

Red Hat Enterprise Linux is available only through a paid subscription service that provides access to software updates and varying levels of technical support. The product is largely composed of software packages distributed under either an open source or a free software license and the source code for these packages is made public by Red Hat.

CentOS developers use Red Hat’s source code to create a final product very similar to Red Hat Enterprise Linux. Red Hat’s branding and logos are changed because Red Hat does not allow them to be redistributed.

CentOS is available free of charge. Technical support is primarily provided by the community via official mailing lists, web forums, and chat rooms. The project is not affiliated with Red Hat and thus receives no financial or logistical support from the company; instead, the CentOS Project relies on donations from users and organizational sponsors.

MS Virtualization for VMware Pros: Jump Start

April 28, 2011

Exclusive Jump Start virtual training event – “Microsoft Virtualization for VMware Professionals” – FREE on TechNet Edge

Where do I go for this great training?

The HD-quality video recordings of this course are on TechNet Edge. If you’re interested in one specific topic, I’ve included links to each module as well.

  • Entire course on TechNet Edge: “Microsoft Virtualization for VMware Professionals” Jump Start
      • Virtualization Jump Start (01): Virtualization Overview
      • Virtualization Jump Start (02): Differentiating Microsoft & VMware
      • Virtualization Jump Start (03a): Hyper-V Deployment Options & Architecture | Part 1
      • Virtualization Jump Start (03b): Hyper-V Deployment Options & Architecture | Part 2
      • Virtualization Jump Start (04): High-Availability & Clustering
      • Virtualization Jump Start (05): System Center Suite Overview with focus on DPM
      • Virtualization Jump Start (06): Automation with Opalis, Service Manager & PowerShell
      • Virtualization Jump Start (07): System Center Virtual Machine Manager 2012
      • Virtualization Jump Start (08): Private Cloud Solutions, Architecture & VMM Self-Service Portal 2.0
      • Virtualization Jump Start (09): Virtual Desktop Infrastructure (VDI) Architecture | Part 1
      • Virtualization Jump Start (10): Virtual Desktop Infrastructure (VDI) Architecture | Part 2
      • Virtualization Jump Start (11): v-Alliance Solution Overview
      • Virtualization Jump Start (12): Application Delivery for VDI
  • Links to course materials on Born to Learn

Hyper-V: Virtual Hard Disks. Benefits of Fixed Disks

March 31, 2011

 

When creating a Virtual Machine, you can select either virtual hard disks or physical disks that are directly attached to the virtual machine.

My personal advice, and what I have seen from Microsoft folks, is to always use FIXED DISKS for production environments, even with the release of Windows Server 2008 R2, one of whose enhancements was improved performance of dynamic VHD files.

The explanation and benefits are simple:

1. Almost the same performance as pass-through disks

2. Portability: you can move/copy the VHD

3. Backup: you can back up at the VHD level and, even better, using DPM you can restore at the ITEM level (how cool is that!)

4. You can have snapshots

5. Fixed-size VHD performance has been on par with physical disks since Windows Server 2008/Hyper-V

If you use pass-through disks you lose all of the benefits of VHD files such as portability, snapshotting and thin provisioning. Considering these trade-offs, pass-through disks should really only be considered if you require a disk that is greater than 2 TB in size, or if your application is I/O bound and you really could benefit from another 0.1 ms shaved off your average response time.
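
As a side note, a VHD file records whether it is fixed, dynamic, or differencing in the Disk Type field of its 512-byte footer, as described in the published VHD format specification. The minimal Python sketch below reads that field so you can check what kind of VHD you are dealing with; the file path is a placeholder.

```python
# Minimal sketch: report whether a .vhd file is fixed, dynamic, or differencing by
# reading the "Disk Type" field of the 512-byte VHD footer (per the VHD format spec).
import struct

VHD_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def vhd_disk_type(path: str) -> str:
    with open(path, "rb") as f:
        f.seek(-512, 2)                 # footer is the last 512 bytes of the file
        footer = f.read(512)
    if footer[0:8] != b"conectix":      # footer cookie identifying a VHD
        raise ValueError("not a VHD footer")
    disk_type = struct.unpack(">I", footer[60:64])[0]   # big-endian uint32 at offset 60
    return VHD_TYPES.get(disk_type, f"unknown ({disk_type})")

if __name__ == "__main__":
    # 'C:\VMs\exchange01.vhd' is a placeholder path; point this at one of your VHDs.
    print(vhd_disk_type(r"C:\VMs\exchange01.vhd"))
```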

 Disks Summary table:

Pass-through disk

Pros:
  • Fastest performance
  • Simplest storage path because the file system on the host is not involved
  • Better alignment under SAN
  • Lower CPU utilization
  • Supports very large disks

Cons:
  • VM snapshots cannot be taken
  • The disk is used exclusively and directly by a single virtual machine
  • Pass-through disks cannot be backed up by the Hyper-V VSS writer or by any backup program that uses the Hyper-V VSS writer

Fixed sized VHD

Pros:
  • Highest performance of all VHD types
  • Simplest VHD file format, giving the best I/O alignment
  • More robust than dynamic or differencing VHDs due to the lack of block allocation tables (i.e. a redirection layer)
  • A file-based storage container has more management advantages than a pass-through disk
  • Expanding is available to increase the capacity of the VHD
  • No risk of the underlying volume running out of space during VM operations

Cons:
  • Up-front space allocation may increase storage cost when a large number of fixed VHDs are deployed
  • Creating a large fixed VHD is time-consuming
  • Shrinking the virtual capacity (i.e. reducing the virtual size) is not possible

Dynamically expanding or differencing VHD

Pros:
  • Good performance
  • Quicker to create than a fixed sized VHD
  • Grows dynamically to save disk space and provide efficient storage usage
  • Smaller VHD file size makes it more nimble to transport across the network
  • Blocks of full zeros will not get allocated, which saves space under certain circumstances
  • A compact operation is available to reduce the actual physical file size

Cons:
  • Interleaving of metadata and data blocks may cause I/O alignment issues
  • Write performance may suffer while the VHD expands
  • Dynamically expanding and differencing VHDs cannot exceed 2040 GB
  • The VM may be paused, or the VHD taken offline, if disk space runs out due to dynamic growth
  • Shrinking the virtual capacity is not supported
  • Expanding is not available for differencing VHDs due to the inherent size limitation of the parent disk
  • Defragmentation is not recommended due to the inherent redirection layer