Archive

Posts Tagged ‘Management’

SCVMM 2008 Ports and Protocols

June 7, 2011

Default ports for SCVMM 2008, SCVMM 2008 R2, and SCVMM 2008 R2 SP1:

Connection type | Protocol | Default port | Where to change the port setting
VMM server to VMM agent on Windows Server–based host (control) | WS-Management | 80 | VMM setup, registry
VMM server to VMM agent on Windows Server–based host (file transfers) | HTTPS (using BITS) | 443 (maximum value: 32768) | Registry
VMM server to remote Microsoft SQL Server database | TDS | 1433 | Registry
VMM server to P2V source agent | DCOM | 135 | Registry
VMM Administrator Console to VMM server | WCF | 8100 | VMM setup, registry
VMM Self-Service Portal web server to VMM server | WCF | 8100 | VMM setup
VMM Self-Service Portal to VMM self-service web server | HTTPS | 443 | VMM setup
VMM library server to hosts | BITS | 443 (maximum value: 32768) | VMM setup, registry
VMM host-to-host file transfer | BITS | 443* (maximum value: 32768) | Registry
VMRC connection to Virtual Server host | VMRC | 5900 | VMM Administrator Console, registry
VMConnect (RDP) to Hyper-V hosts | RDP | 2179 | VMM Administrator Console, registry
Remote Desktop to virtual machines | RDP | 3389 | Registry
VMware Web Services communication | HTTPS | 443 | VMM Administrator Console, registry
SFTP file transfer from VMware ESX Server 3.0 and VMware ESX Server 3.5 hosts | SFTP | 22 | Registry
HTTPS file transfer from VMM server to VMware ESX Server 3i hosts | HTTPS | 443 | Registry

* In VMM 2008 R2 the host-to-host transfer port is 30443 (http://support.microsoft.com/kb/971816)
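
A quick way to check whether one of these ports is reachable from a given machine is a plain TCP connection test. The sketch below uses .NET's TcpClient from PowerShell; the host names and port choices are placeholders, not values from the table:

# Minimal sketch: probe a few default VMM ports. Host names are placeholders
# for your VMM server and a managed Hyper-V host.
$checks = @(
    @{ Target = 'vmm01.contoso.local';    Port = 8100 },  # Administrator Console -> VMM server (WCF)
    @{ Target = 'hyperv01.contoso.local'; Port = 80   },  # VMM server -> host agent (WS-Management)
    @{ Target = 'hyperv01.contoso.local'; Port = 443  }   # BITS file transfers
)
foreach ($c in $checks) {
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        $client.Connect($c.Target, $c.Port)
        "{0}:{1} reachable" -f $c.Target, $c.Port
    } catch {
        "{0}:{1} blocked or closed" -f $c.Target, $c.Port
    } finally {
        $client.Close()
    }
}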

More info: http://technet.microsoft.com/en-us/library/cc764268.aspx

 

SCVMM 2012 Management ports and protocols. Detailed

June 7, 2011

Here is the list of ports and protocols for the new SCVMM 2012.

From | To | Protocol | Default port | Where to change port setting
VMM management server | P2V source agent (control channel) | DCOM | 135 |
VMM management server | Load balancer | HTTP/HTTPS | 80/443 | Load balancer configuration provider
VMM management server | WSUS server (data channel) | HTTP/HTTPS | 80/8530 (non-SSL), 443/8531 (with SSL) | These ports are the IIS port bindings with WSUS; they cannot be changed from VMM.
VMM management server | WSUS server (control channel) | HTTP/HTTPS | 80/8530 (non-SSL), 443/8531 (with SSL) | These ports are the IIS port bindings with WSUS; they cannot be changed from VMM.
VMM management server | VMM agent on Windows Server–based host (data channel for file transfers) | HTTPS (using BITS) | 443 (maximum value: 32768) |
VMM management server | Citrix XenServer host (customization data channel) | iSCSI | 3260 | On the XenServer, in the transfer VM
VMM management server | XenServer host (control channel) | HTTPS | 5989 | On the XenServer host, in /opt/cimserver/cimserver_planned.conf
VMM management server | Remote Microsoft SQL Server database | TDS | 1433 |
VMM management server | VMM agent on Windows Server–based host (control channel) | WS-Management | 5985 | VMM setup
VMM management server | VMM agent on Windows Server–based host (control channel – SSL) | WS-Management | 5986 |
VMM management server | In-guest agent (VMM to virtual machine control channel) | WS-Management | 5985 |
VMM management server | Storage Management Service | WMI | Local call |
VMM management server | Cluster PowerShell interface | PowerShell | n/a |
VMM management server | P2V source agent (data channel) | BITS | User-defined | P2V cmdlet option
VMM library server | Hosts (file transfer) | BITS | 443 (maximum value: 32768) | VMM setup
VMM host | VMM host (host-to-host file transfer) | BITS | 443 (maximum value: 32768) |
VMM Self-Service Portal | VMM Self-Service Portal web server | HTTPS | 443 | VMM setup
VMM Self-Service Portal web server | VMM management server | WCF | 8100 | VMM setup
Console connections (RDP) | Virtual machines through Hyper-V hosts (VMConnect) | RDP | 2179 | VMM console
Remote Desktop | Virtual machines | RDP | 3389 | On the virtual machine
VMM console | VMM management server | WCF | 8100 | VMM setup
VMM console | VMM management server (HTTPS) | WCF | 8101 | VMM setup
VMM console | VMM management server (NET.TCP) | WCF | 8102 | VMM setup
VMM console | VMM management server (HTTP) | WCF | 8103 | VMM setup
Windows PE agent | VMM management server (control channel) | WCF | 8101 | VMM setup
Windows PE agent | VMM management server (time sync) | WCF | 8103 | VMM setup
WDS provider | VMM management server | WCF | 8102 | VMM setup
Storage Management Service | SMI-S provider | CIM-XML | Provider-specific port |
VMM management server | VMware ESX Server 3i hosts | HTTPS | 443 |

Others

Connection type | Protocol | Default port | Where to change port setting
OOB connection – SMASH over WS-Man | HTTPS | 443 | On the BMC
OOB connection – IPMI | IPMI | 623 | On the BMC
BITS port for VMM transfers (data channel) | BITS | 443 | VMM setup
VMware ESX Server 3.0 and VMware ESX Server 3.5 hosts | SFTP | 22 |
VMware Web Services communication | HTTPS | 443 | VMM console

Note: When you install the VMM management server, you can assign some of the ports that it will use for communications and file transfers between the VMM components.
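
Several of these ports are recorded in the registry on the VMM management server at install time. The sketch below reads that key from PowerShell; note that the key path is an assumption based on VMM 2008-era installs and may differ in VMM 2012, so verify it on your own server:

# Dump the VMM server settings key, where setup records ports such as the WCF (Indigo) port.
# The key path is an assumption from VMM 2008-era installs; adjust for your version.
$key = 'HKLM:\SOFTWARE\Microsoft\Microsoft System Center Virtual Machine Manager Server\Settings'
if (Test-Path $key) {
    Get-ItemProperty -Path $key
} else {
    Write-Warning "VMM settings key not found - run this on the VMM management server."
}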

Hyper-V: Best Practices and Supported Scenarios regarding Exchange Server 2010

May 19, 2011

The following scenarios are supported for Exchange 2010 SP1:

  • The Unified Messaging server role is supported in a virtualized environment.
  • Combining Exchange 2010 high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions that move or automatically fail over mailbox servers that are members of a DAG between clustered root servers is supported.

Hyper-V Guest Configuration

Keep in mind that because there are no routines within Exchange Server that test for a virtualized platform, Exchange Server behaves no differently programmatically on a virtualized platform than it does on a physical platform.

Determining Exchange Server Role Virtual Machine Locations

When determining Exchange Server Role virtual machine locations, consider the following general best practices:

  • Deploy the same Exchange roles across multiple physical server roots (to allow for load balancing and high availability).
  • Never deploy Mailbox servers that are members of the same Database Availability Groups (DAGs) on the same root.
  • Never deploy all the Client Access Servers on the same root.
  • Never deploy all the Hub Transport servers on the same root.
  • Determine the workload requirements for each server and balance the workload across the Hyper-V guest virtual machines.

Guest Storage

Each Exchange guest virtual machine must be allocated sufficient storage space on the root machine for the fixed disk that contains the guest’s operating system, any temporary memory storage files in use, and related virtual machine files that are hosted on the root machine. Consider the following best practices when configuring Hyper-V guests:

  • Fixed VHDs are recommended for the virtual operating system.
  • Allow a minimum of 15 GB of disk space for the operating system, plus additional space for the paging file, management software, and crash recovery (dump) files; then add the Exchange server role space requirements.
  • Storage used by Exchange should be hosted on disk spindles that are separate from the storage that hosts the guest virtual machine’s operating system.
  • For Hub Transport servers, correctly provision the disk space needed for the message queue database and logging operations.
  • For Mailbox servers, correctly provision the disk space needed for databases, transaction logs, the content index, and other logging operations.

Guest Memory: Dynamic Memory should be disabled
Memory must be sized for guest virtual machines using the same methods as physical computer deployments. Exchange—like many server applications that have optimizations for performance that involve caching of data in memory—is susceptible to poor system performance and an unacceptable client experience if it doesn’t have full control over the memory allocated to the physical computer or virtual machine on which it is running.
Many of the performance gains in recent versions of Exchange, especially those related to reduction in input/output (I/O) are based on highly efficient usage of large amounts of memory. When that memory is no longer available, the expected performance of the system can’t be achieved. For this reason, memory oversubscription or dynamic adjustment of virtual machine memory must be disabled for production Exchange servers.
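
If your hosts are managed with the Hyper-V PowerShell module that ships with Windows Server 2012 and later (on 2008 R2 this is done in Hyper-V Manager instead), a sketch like the following can enforce static memory on an Exchange guest; the VM name and memory size are placeholders:

# Hedged sketch: force static (fixed) memory on an Exchange guest.
# Requires the Hyper-V module (Windows Server 2012+); 'EXCH-MBX01' and 32GB are examples.
Set-VMMemory -VMName 'EXCH-MBX01' -DynamicMemoryEnabled $false -StartupBytes 32GB

# Verify that dynamic memory is now off.
Get-VMMemory -VMName 'EXCH-MBX01' | Select-Object DynamicMemoryEnabled, Startup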

Deployment Recommendations

When designing an Exchange Server 2010 virtualized environment, the core Exchange design principles apply. The environment must be designed for the correct performance, reliability, and capacity requirements. Design considerations such as examining usage profiles, message profiles, and so on must still be taken into account.

See this article (Mailbox Storage Design Process) as a starting point when considering a high availability solution that uses DAGs.

Because virtualization provides the flexibility to make changes to the design of the environment later, some organizations might be tempted to spend less time on their design at the outset. As a best practice, spend adequate time designing the environment to avoid pitfalls later.

Group the Exchange Server roles in a way that balances workloads across the root servers. Mixing different roles on the same Hyper-V root server can balance the workloads and prevent one physical resource from being unduly stressed, compared with placing the same roles together on a single host.

The updated support guidance applies to any hardware virtualization vendor participating in the Windows Server Virtualization Validation Program (SVVP).

See the Best Practices for Virtualizing Exchange Server 2010 with Windows Server 2008 R2 Hyper-V whitepaper. This whitepaper provides technical guidance on Exchange server roles, capacity planning, sizing and performance, as well as high availability best practices.

Complete system requirements for Exchange Server 2010 running under hardware virtualization software can be found in Exchange 2010 System Requirements. Also, the support policy for Microsoft software running in non-Microsoft hardware virtualization software can be found here.

CentOS now has official support as a guest VM in Hyper-V

May 18, 2011

Effective immediately, Microsoft supports running CentOS on Windows Server 2008 R2 Hyper-V.

CentOS is a popular Linux distribution for Hosters, and this was the number one requirement for interoperability that we heard from that community.

This development will enable MS hosting partners to consolidate their mixed Windows + Linux infrastructure on Windows Server Hyper-V, reducing cost and complexity while betting on an enterprise-class virtualization platform.

How will support work?
Call Microsoft CSS. Support will cover installation issues as well as configuration issues.

What version of the Linux Integration Services supports CentOS?

The existing Hyper-V Linux Integration Services for Linux, version 2.1, supports CentOS. The following features are included in the Hyper-V Linux Integration Services 2.1 release:

· Symmetric Multi-Processing (SMP) Support: Supported Linux distributions can use up to 4 virtual processors (VP) per virtual machine.

· Driver support for synthetic devices: Linux Integration Services supports the synthetic network controller and the synthetic storage controller that were developed specifically for Hyper-V.

· Fastpath Boot Support for Hyper-V: Boot devices take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.

· Timesync: The clock inside the virtual machine will remain synchronized with the clock on the host.

· Integrated Shutdown: Virtual machines running Linux can be gracefully shut down from either Hyper-V Manager or System Center Virtual Machine Manager.

· Heartbeat: Allows the host to detect whether the guest is running and responsive.

· Pluggable Time Source: A pluggable clock source module is included to provide a more accurate time source to the guest.

The Linux Integration Services are available via the Microsoft Download Center here: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=eee39325-898b-4522-9b4c-f4b5b9b64551

 From Wikipedia:

CentOS is a community-supported, mainly free software operating system based on Red Hat Enterprise Linux (RHEL). It exists to provide a free enterprise class computing platform and strives to maintain 100% binary compatibility with its upstream distribution. CentOS stands for Community ENTerprise Operating System.

Red Hat Enterprise Linux is available only through a paid subscription service that provides access to software updates and varying levels of technical support. The product is largely composed of software packages distributed under either an open source or a free software license and the source code for these packages is made public by Red Hat.

CentOS developers use Red Hat’s source code to create a final product very similar to Red Hat Enterprise Linux. Red Hat’s branding and logos are changed because Red Hat does not allow them to be redistributed.

CentOS is available free of charge. Technical support is primarily provided by the community via official mailing lists, web forums, and chat rooms. The project is not affiliated with Red Hat and thus receives no financial or logistical support from the company; instead, the CentOS Project relies on donations from users and organizational sponsors.

MS Virtualization for VMware Pros : Jump Start

April 28, 2011

Exclusive Jump Start virtual training event – “Microsoft Virtualization for VMware Professionals” – FREE on TechNet Edge

Where do I go for this great training?

The HD-quality video recordings of this course are on TechNet Edge. If you’re interested in one specific topic, I’ve included links to each module as well.

 ·   Entire course on TechNet Edge: “Microsoft Virtualization for VMware Professionals” Jump Start

o   Virtualization Jump Start (01): Virtualization Overview

o   Virtualization Jump Start (02): Differentiating Microsoft & VMware

o   Virtualization Jump Start (03a): Hyper-V Deployment Options & Architecture | Part 1

o   Virtualization Jump Start (03b): Hyper-V Deployment Options & Architecture | Part 2

o   Virtualization Jump Start (04): High-Availability & Clustering

o   Virtualization Jump Start (05): System Center Suite Overview with focus on DPM

o   Virtualization Jump Start (06): Automation with Opalis, Service Manager & PowerShell

o   Virtualization Jump Start (07): System Center Virtual Machine Manager 2012

o   Virtualization Jump Start (08): Private Cloud Solutions, Architecture & VMM Self-Service Portal 2.0

o   Virtualization Jump Start (09): Virtual Desktop Infrastructure (VDI) Architecture | Part 1

o   Virtualization Jump Start (10): Virtual Desktop Infrastructure (VDI) Architecture | Part 2

o   Virtualization Jump Start (11): v-Alliance Solution Overview

o   Virtualization Jump Start (12): Application Delivery for VDI

·  Links to course materials on Born to Learn

Hyper-V R2 and the right number of physical NICs

April 27, 2011

When it comes to network configuration, be sure to provide the right number of physical network adapters on Hyper-V servers. Failure to configure enough network connections can make it appear as though you have a storage problem, particularly when using iSCSI.

Recommended network configuration (number of dedicated physical NICs):

  • 1 for management. Microsoft recommends a dedicated network adapter for Hyper-V server management.
  • At least 1 for virtual machines. Virtual network configurations of the external type require a minimum of one network adapter.
  • 2 for iSCSI. Microsoft recommends that IP storage communication have a dedicated network, so one adapter is required and two or more are necessary to support multipathing.
  • At least 1 for failover clustering. A Windows® failover cluster requires a private network.
  • 1 for live migration. This new Hyper-V R2 feature supports the migration of running virtual machines between Hyper-V servers. Microsoft recommends configuring a dedicated physical network adapter for live migration traffic. This network should be separate from the network for private communication between the cluster nodes, from the network for the virtual machines, and from the network for storage.
  • 1 for Cluster Shared Volumes. Microsoft recommends a dedicated network to support the communications traffic created by this new Hyper-V R2 feature. In the network adapter properties, Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks must be enabled to support SMB.
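
To verify how many physical adapters a host actually has before mapping out these roles, a quick WMI query (which works on Windows Server 2008 R2) is one option; this is just an inventory sketch:

# List the physical network adapters on a Hyper-V host, with link speed and MAC address.
Get-WmiObject Win32_NetworkAdapter -Filter "PhysicalAdapter=TRUE" |
    Select-Object Name, NetConnectionID, Speed, MACAddress |
    Format-Table -AutoSize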

Some interesting notes when comparing FC with iSCSI:

  • iSCSI and FC delivered comparable throughput performance irrespective of the load on the system.
  • iSCSI used approximately 3-5 percentage points more Hyper-V R2 CPU resources than FC to achieve comparable performance.

For information about the network traffic that can occur on a network used for Cluster Shared Volumes, see “Understanding redirected I/O mode in CSV communication” in Requirements for Using Cluster Shared Volumes in a Failover Cluster in Windows Server 2008 R2 (http://go.microsoft.com/fwlink/?LinkId=182153).

For more information on the network used for CSV communication, see Managing the network used for Cluster Shared Volumes.

It is not recommended to use the same network adapter for virtual machine access and management.
If you are limited by the number of network adapters, you should configure a virtual local area network (VLAN) to isolate traffic. Relevant VLAN standards include 802.1Q (VLAN tagging) and 802.1p (priority).
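
As an illustration of VLAN isolation, the Hyper-V PowerShell module from Windows Server 2012 onward can tag a VM’s virtual NIC directly (on 2008 R2 you would set the VLAN ID per adapter in Hyper-V Manager); the VM name and VLAN ID below are placeholders:

# Hedged sketch: put a VM's virtual NIC in access mode on VLAN 20 to isolate its traffic.
# Requires the Hyper-V module (Windows Server 2012+); 'VM01' and VLAN 20 are examples.
Set-VMNetworkAdapterVlan -VMName 'VM01' -Access -VlanId 20

# Confirm the VLAN assignment.
Get-VMNetworkAdapterVlan -VMName 'VM01'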

Microsoft is back for another Springboard Series Tour!

April 8, 2011

Microsoft is back in the US for another Springboard Series Tour!

www.springboardseriestour.com

 

May 2 – Toronto | May 4 – Detroit | May 6 – Chicago | May 9 – Indianapolis | May 11 – Dallas | May 13 – Columbus

 

The Springboard Series Tour is back! This 6 city tour brings the top product teams from Microsoft to you. We understand the questions and issues that IT pros deal with every day. How do I manage end users bringing consumer devices into the office? Should we look to the cloud for key solutions? Should I virtualize? What are the best tools to manage application compatibility and deployment? The Springboard Series Tour brings the experts and the answers.

Join us for a full day’s deep dive into the tools, solutions and options to help you do more with less. We will cover managing the flexible workspace, with a first look at Windows Intune and Office 365. We’ll also show you some of the new Windows Slates and give you details about Microsoft’s strategy for Slate devices. We will preview the new tools in the MDOP 2011 suite, take a deep dive into managing and deploying Office 2010, and share great tips and tricks to help you deploy Windows 7 and move your users from Windows XP with speed and ease.

REGISTER NOW and save your seat for this free day of technical demos, Q&A sessions, and real-world guidance from Microsoft experts. We’ll see you on the road!

Hyper-V: Virtual Hard Disks. Benefits of Fixed Disks

March 31, 2011

 

When creating a Virtual Machine, you can select to use either virtual hard disks or physical disks that are directly attached to a virtual machine.

My personal advice, and what I have seen from Microsoft folks, is to always use FIXED DISKS in production environments, even with the release of Windows Server 2008 R2, one of whose enhancements was improved performance of dynamic VHD files.

The explanation and benefits are simple:

1. Almost the same performance as pass-through disks

2. Portability: you can move/copy the VHD

3. Backup: you can back up at the VHD level and, better, using DPM you can restore at the ITEM level (how cool is that!)

4. You can have snapshots

5. Fixed-size VHD performance has been on par with physical disks since Windows Server 2008/Hyper-V

If you use pass-through disks you lose all of the benefits of VHD files, such as portability, snapshotting, and thin provisioning. Considering these trade-offs, pass-through disks should really only be considered if you require a disk greater than 2 TB in size or if your application is I/O bound and would genuinely benefit from another 0.1 ms shaved off your average response time.
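
For illustration, with the Hyper-V PowerShell module from Windows Server 2012 onward (2008 R2 exposes the same operation through Hyper-V Manager or WMI), creating a fixed disk looks like the sketch below; the path, size, and VM name are placeholders:

# Hedged sketch: create a 40 GB fixed-size VHD (space is allocated up front).
# Requires the Hyper-V module (Windows Server 2012+); path and size are examples.
New-VHD -Path 'D:\VHDs\app01.vhd' -Fixed -SizeBytes 40GB

# Attach it to an existing VM ('VM01' is a placeholder).
Add-VMHardDiskDrive -VMName 'VM01' -Path 'D:\VHDs\app01.vhd'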

Disks summary table:

Pass-through disk

Pros:
  • Fastest performance
  • Simplest storage path because the file system on the host is not involved
  • Better alignment under SAN
  • Lower CPU utilization
  • Supports very large disks

Cons:
  • VM snapshots cannot be taken
  • The disk is used exclusively and directly by a single virtual machine
  • Pass-through disks cannot be backed up by the Hyper-V VSS writer or any backup program that uses the Hyper-V VSS writer

Fixed-size VHD

Pros:
  • Highest performance of all VHD types
  • Simplest VHD file format, giving the best I/O alignment
  • More robust than dynamic or differencing VHDs due to the lack of block allocation tables (i.e., a redirection layer)
  • A file-based storage container has more management advantages than a pass-through disk
  • Expanding is available to increase the capacity of the VHD
  • No risk of the underlying volume running out of space during VM operations

Cons:
  • Up-front space allocation may increase the storage cost when a large number of fixed VHDs are deployed
  • Creating a large fixed VHD is time-consuming
  • Shrinking the virtual capacity (i.e., reducing the virtual size) is not possible

Dynamically expanding or differencing VHD

Pros:
  • Good performance
  • Quicker to create than a fixed-size VHD
  • Grows dynamically to save disk space and provide efficient storage usage
  • The smaller VHD file size makes it more nimble to transport across the network
  • Blocks of full zeros will not get allocated, saving space under certain circumstances
  • A compact operation is available to reduce the actual physical file size

Cons:
  • Interleaving of metadata and data blocks may cause I/O alignment issues
  • Write performance may suffer while the VHD expands
  • Dynamically expanding and differencing VHDs cannot exceed 2040 GB
  • The VM may get paused or the VHD yanked out if disk space runs out due to dynamic growth
  • Shrinking the virtual capacity is not supported
  • Expanding is not available for differencing VHDs due to the inherent size limitation of the parent disk
  • Defragmentation is not recommended due to the inherent redirection layer

SCVMM 2012: Private Cloud Management. Got it!?

March 23, 2011

It’s a great pleasure to see how far Microsoft has come with SCVMM 2012.
Believe me, it’s a whole new product.
So, if you are serious about private cloud management, this is the product to look into.

• Fabric Management
  ◦ Hyper-V and Cluster Lifecycle Management – deploy Hyper-V to bare-metal servers, create Hyper-V clusters, orchestrate patching of a Hyper-V cluster
  ◦ Third-Party Virtualization Platforms – add and manage Citrix XenServer and VMware ESX hosts and clusters
  ◦ Network Management – manage IP address pools, MAC address pools, and load balancers
  ◦ Storage Management – classify storage, manage storage pools and LUNs

• Resource Optimization
  ◦ Dynamic Optimization – proactively balance the load of VMs across a cluster
  ◦ Power Optimization – schedule power savings to use the right number of hosts to run your workloads, and power the rest off until they are needed
  ◦ PRO – integrate with System Center Operations Manager to respond to application-level performance monitors

• Cloud Management
  ◦ Abstract server, network, and storage resources into private clouds
  ◦ Delegate access to private clouds with control of capacity, capabilities, and user quotas
  ◦ Enable self-service usage for application administrators to author, deploy, manage, and decommission applications in the private cloud

• Service Lifecycle Management
  ◦ Define service templates to create sets of connected virtual machines, OS images, and application packages
  ◦ Compose operating system images and applications during service deployment
  ◦ Scale out the number of virtual machines in a service
  ◦ Monitor service performance and health, integrated with System Center Operations Manager
  ◦ Decouple OS image and application updates through image-based servicing
  ◦ Leverage powerful application virtualization technologies such as Server App-V

Note: The SCVMM 2012 Beta is NOT supported in production environments.
Download the SCVMM 2012 Beta now.
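
For a first hands-on look, the VMM 2012 console installs a PowerShell module; a minimal connectivity sketch (the server name is a placeholder) might look like this:

# Hedged sketch: connect to a VMM 2012 management server and list the hosts it manages.
# The module ships with the VMM 2012 console; 'vmm01.contoso.local' is a placeholder.
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName 'vmm01.contoso.local'
Get-SCVMHost | Select-Object Name, HostCluster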

Virtual Machines that are misaligned

March 14, 2011

For existing child VMs that are misaligned, in order to correct the partition offset, a new physical disk must be created and formatted, and the data has to be migrated from the original disk to the new one.

Important note: Both Windows 7 and Windows 2008/2008 R2 create aligned partitions.

This problem occurs when the partitioning scheme used by the host OS doesn’t match the block boundaries inside the LUN. If the guest file system is not aligned, it may become necessary to read or write twice as many blocks of storage as the guest actually requested, since any guest file system block actually occupies at least two partial storage blocks.

All VHD types can be formatted with the correct offset at the time of creation by booting the child VM before installing an OS and manually setting the partition offset. The recommended starting offset value for Windows OSs is 32768. The default starting offset value typically observed is 32256.
How to verify the offset value:

1. Run msinfo32 on the guest VM by selecting
Start > All Programs > Accessories > System Tools > System Information,
or select Start > Run and enter the following command: MSINFO32
2. Navigate to Components > Storage > Disks and check the value for Partition Starting Offset.
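
Alternatively, the same value can be pulled with a quick WMI query from inside the guest; this is just a convenience sketch:

# Read each partition's starting offset; the classic misaligned value on older
# Windows guests is 32256, while 32768 (or higher) indicates an aligned partition.
Get-WmiObject Win32_DiskPartition | Select-Object Name, StartingOffset | Format-Table -AutoSize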


If the misaligned virtual disk is the boot partition, follow these steps:
1. Back up the VM system image.
2. Shut down the VM.
3. Attach the misaligned system image virtual disk to a different VM.
4. Attach a new aligned virtual disk to this VM.
5. Copy the contents of the system image virtual disk (for example, C: in Windows) to the new aligned virtual disk.

There are various tools that can be used to copy the contents from the misaligned virtual disk to the new aligned virtual disk:
− Windows xcopy
− Norton/Symantec™ Ghost: Ghost can be used to back up a full system image on the misaligned virtual disk and then restore it to a previously created, aligned virtual disk file system.

For Microsoft Hyper-V LUNs mapped to the Hyper-V parent partition using the incorrect LUN protocol type but with aligned VHDs, create a new LUN using the correct LUN protocol type and copy the contents (VMs and VHDs) from the misaligned LUN to the new LUN.

For Microsoft Hyper-V LUNs mapped to the Hyper-V parent partition using the incorrect LUN protocol type and with misaligned VHDs, likewise create a new LUN using the correct LUN protocol type and copy the contents (VMs and VHDs) from the misaligned LUN to the new LUN.

Next, to set the starting offset, follow these steps:

1. Boot the child VM with the Windows Preinstallation Environment boot CD.
2. Select Start > Run and enter the following command:
diskpart
3. Type the following at the prompt:
select disk 0
4. Type the following at the prompt:
create partition primary align=32
5. Reboot the child VM with the Windows Preinstallation Environment boot CD.
6. Install the operating system as normal.
Virtual disks to be used as data disks can be formatted with the correct offset at the time of creation by using diskpart in the VM. To align the virtual disk, follow these steps:

1. Boot the child VM with the Windows Preinstallation Environment boot CD.
2. Select Start > Run and enter the following command:
diskpart
3. Determine the appropriate disk to use by typing the following at the prompt:
list disk
4. Select the correct disk by typing the following at the prompt:
select disk [#]
5. Type the following at the prompt:
create partition primary align=32
6. To exit, type the following at the prompt:
exit
7. Format the data disk as you would normally.
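
The same data-disk alignment can also be scripted. Below is a hedged sketch (the disk number is a placeholder; confirm it with “list disk” first, because the partitioning is destructive):

# Hedged sketch: script the diskpart alignment steps for a data disk.
# Disk 1 is a placeholder - verify with 'list disk' first; this destroys existing data on that disk.
@"
select disk 1
create partition primary align=32
format fs=ntfs quick
assign
"@ | Out-File -FilePath align.txt -Encoding ascii

diskpart /s align.txt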

For pass-through disks and LUNs directly mapped to the child OS, create a new LUN using the correct LUN protocol type, map the LUN to the VM, and copy the contents from the misaligned LUN to this new aligned LUN.

For info about misalignment in Windows 2003, please have a look here: http://support.microsoft.com/default.aspx?scid=kb;EN-US;929491