Archive

Archive for March, 2010

Desktop Virtualization Roadshow: Microsoft and Citrix are coming to Brisbane and Canberra, Australia

March 31, 2010 Leave a comment
 
Join us to learn how Microsoft and Citrix can help you discover your choices and preserve and extend your existing investments. Interact with Microsoft personnel, industry experts, and IT leaders, and get your questions answered.

Learn how server & desktop virtualization can help you:

  • Build a desktop virtualization management strategy that helps you manage your applications, data, mobile workers and multiple physical and virtual form factors.
  • Reduce desktop costs.
  • Enable flexible and agile IT through virtualization.
  • Increase desktop security and compliance.
  • Improve business continuity and end user productivity.
  • Understand how Microsoft is building a solid foundation for a private cloud.
  • Increase end user productivity and streamline your IT management with Windows 7.

Apr 29: Brisbane, Australia
May 6: Canberra, Australia

 

Register today!

Categories: Virtualization

VMware to create a patch to close the performance gap with Hyper-V R2 and XenServer

March 26, 2010 Leave a comment

 

In a recent update to phase II of Project VRC, VMware came in behind Hyper-V R2 and XenServer.

Although XenServer and Hyper-V R2 performed nearly identically, VMware trailed by 10%-30%. VMware responded to the findings and wrote a fix for VMware vSphere 4.0 to try to close the gap:

 "…What we discovered led us to create a vSphere patch that would allow users to improve performance in some benchmarking environments."

Three specific conditions must be present to trigger this behavior:

  1. A Xeon 5500 series processor is present with Hyper-Threading enabled,
  2. CPU utilization is near saturation, and
  3. A roughly one-to-one mapping between vCPUs and logical processors.

In this scenario, VMware vSphere favors fairness over throughput and sometimes pauses one vCPU to dedicate a whole core to another vCPU, eliminating the gains provided by Hyper-Threading.

Categories: Virtualization

Virtualizing Terminal Server: analyzing Terminal Services (TS) workloads running on the latest generation hardware and hypervisors

March 25, 2010 Leave a comment
 

Virtualizing Terminal Server and Citrix XenApp workloads is highly recommended for the management and consolidation benefits, and, if you have been limited to x86 (32-bit) TS so far, for better scalability.

So, if you are planning to run Terminal Server in a virtual environment, you should read the Project VRC phase 2 whitepaper, which focuses entirely on analyzing Terminal Services (TS) workloads running on the latest generation hardware and hypervisors.

Here is a brief summary:

"When comparing hypervisors, performance is close to equal throughout when no Hyper-threading is used by the VM’s. In all test the hypervisors perform with a 5% range with the Terminal Server workloads, with a slight edge for vSphere 4.0. Utilizing Hyper-Threading, on all platforms a performance increase in seen, but vSphere 4.0 trailing slightly by 15% in comparison to XenServer 5.5 and Hyper-V 2.0. These differences are only visible under full load.

Strikingly, XenServer 5.5 and Hyper-V 2.0 perform almost identical in all tests. The only differentiator between these two is that XenServer 5.5 (and vSphere 4.0) support 8vCPU’s, where Hyper-V 2.0 has a maximum of 4vCPU per VM" 

If you are curious about the impact of different hypervisors and the performance differences across various hardware, or if you are searching for best practices for your virtual desktops, the Project VRC whitepapers are a must-read!

Categories: Virtualization

Dynamic Memory Coming to Hyper-V

March 25, 2010 Leave a comment
 
Most of the time we overprovision our hardware and do not use it efficiently, which in turn raises the TCO. Sometimes this happens because we follow vendor recommendations or minimum system requirements, or simply because our users complain. Either way, this is not the solution.
Wouldn’t it be great if your workloads automatically and dynamically allocated memory based on workload requirements, and you were provided a flexible policy mechanism to control how these resources are balanced across the system?
 
Check out more here: Windows Virtualization Team Blog
Categories: Virtualization

Microsoft System Center Virtual Machine Manager Agent — WS-Management service is either not installed or disabled.

March 22, 2010 Leave a comment
 
If you are trying to add a Microsoft Virtual Server 2005 R2 SP1 host, running under Windows Server 2003 SP2, to System Center Virtual Machine Manager and are getting this error:


"Product: Microsoft System Center Virtual Machine Manager Agent — WS-Management service is either not installed or disabled. Verify that Windows Hardware Management is installed and the Windows Remote Management service is enabled. Refer to the FAQ for more information."

You should install KB 936059. This update adds a new feature to Windows Remote Management (WinRM) and also updates the version of WinRM that is included with Windows Server 2003 R2.
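
Before (or after) installing the update, it is worth confirming that the WinRM service is actually present and running on the Windows Server 2003 host. A quick check from a command prompt; this is a sketch using stock Windows commands and assumes the service is registered under its usual name, WinRM:

   rem Check whether the Windows Remote Management service exists and what state it is in
   sc query WinRM
   rem Create a default WS-Management listener and start the service (prompts before making changes)
   winrm quickconfig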

 

 

Post-installation instructions

On a Windows Server 2003 R2-based computer that has the Windows Hardware Management optional component installed, the WsmanSelRg collector-initiated subscription collects events from the BMC. This subscription has a URI property that is set to wsman:microsoft/logrecord/sel. After you apply this hotfix, the WsmanSelRg subscription will not work until you update the URI property; the new URI value and the command-prompt command that sets it are listed in KB 936059.

IMPORTANT NOTE:

If you are not running the Virtual Server 2005 R2 SP1 version, you will get error 10418: "Virtual Machine Manager cannot add host xxx.xxx.xxx.xxx because this version of Virtual Server installed on the host is not supported."

 

Categories: Virtualization

Windows 2008 R2: No restrictions for network adapter teaming in a cluster environment

March 16, 2010 Leave a comment
 
Network teaming solutions, which allow us to group network adapter ports into a single connection, are available from some hardware manufacturers to provide fault tolerance. If connectivity through one port fails, another port is activated automatically; this operation is transparent to the operating system and other devices on the network.

In Windows Server 2008 and Windows Server 2008 R2, there are no restrictions associated with NIC teaming and the Failover Clustering feature; teaming can be used on any network interface in a failover cluster.

 

The following table details the recommended, supported, and not recommended network configurations for live migration, and is organized in the order in which each network configuration is commonly used. Before reviewing the table, note the following:

  • When a network adapter is connected to a virtual switch, it is referred to as a virtual network adapter.

  • Network access for virtual machines can be on either a public or private network. To allow virtual machines access to computers on the physical network, they must be on a public network. The requirements for virtual machine access vary depending on network I/O needs and the number of virtual machines you are running on a single physical server.

  • In addition to the preferred network for the cluster and the Cluster Shared Volumes, a cluster can utilize at least one additional network for communication. This increases the high availability of the cluster. The cluster should also be on a private network.

  • If a network configuration is listed as “not recommended” in the following table, it should not be used because the performance of live migrations declines and cluster nodes might crash. Add another network adapter to separate traffic between live migration and Cluster Shared Volumes.

 

Host configuration: 4 network adapters with 1 Gbps
  • Virtual machine access: Virtual network adapter 1
  • Management: Network adapter 2
  • Cluster and Cluster Shared Volumes: Network adapter 3
  • Live migration: Network adapter 4
  • Comments: Recommended

Host configuration: 3 network adapters with 1 Gbps; 2 adapters are teamed for link aggregation (private)
  • Virtual machine access: Virtual network adapter 1
  • Management: Virtual network adapter 1 with bandwidth capped at 10%
  • Cluster and Cluster Shared Volumes: Network adapter 2 (teamed)
  • Live migration: Network adapter 2 with bandwidth capped at 40% (teamed)
  • Comments: Supported

Host configuration: 3 network adapters with 1 Gbps
  • Virtual machine access: Virtual network adapter 1
  • Management: Virtual network adapter 1 with bandwidth capped at 10%
  • Cluster and Cluster Shared Volumes: Network adapter 2
  • Live migration: Network adapter 3
  • Comments: Supported

Host configuration: 2 network adapters with 10 Gbps
  • Virtual machine access: Virtual network adapter 1
  • Management: Virtual network adapter 1 with bandwidth capped at 1%
  • Cluster and Cluster Shared Volumes: Network adapter 2
  • Live migration: Network adapter 2 with bandwidth capped at 50%
  • Comments: Supported*

Host configuration: 2 network adapters with 10 Gbps; 1 network adapter with 1 Gbps
  • Virtual machine access: Virtual network adapter 1 (10 Gbps)
  • Management: Network adapter 2 (1 Gbps)
  • Cluster and Cluster Shared Volumes: Network adapter 3 (10 Gbps)
  • Live migration: Network adapter 2 with bandwidth capped at 50%
  • Comments: Supported

Host configuration: 2 network adapters with 10 Gbps; 2 network adapters with 1 Gbps
  • Virtual machine access: Virtual network adapter 1 (10 Gbps)
  • Management: Network adapter 2 (1 Gbps)
  • Cluster and Cluster Shared Volumes: Network adapter 3 (1 Gbps)
  • Live migration: Network adapter 4 (10 Gbps)
  • Comments: Supported

Host configuration: 3 network adapters with 1 Gbps; 2 adapters are teamed for link aggregation (public)
  • Virtual machine access: Virtual network adapter 1
  • Management: Virtual network adapter 1 with bandwidth capped at 5%
  • Cluster and Cluster Shared Volumes: Network adapter 2 (teamed)
  • Live migration: Network adapter 2 with bandwidth capped at 90% (teamed)
  • Comments: Not recommended

Host configuration: 2 network adapters with 1 Gbps
  • Virtual machine access: Virtual network adapter 1
  • Management: Virtual network adapter 1 with bandwidth capped at 10%
  • Cluster and Cluster Shared Volumes: Network adapter 2
  • Live migration: Network adapter 2 with bandwidth capped at 90%
  • Comments: Not recommended

Host configuration: 1 network adapter with 10 Gbps; 1 network adapter with 1 Gbps
  • Virtual machine access: Virtual network adapter 1 (10 Gbps)
  • Management: Virtual network adapter 1 with bandwidth capped at 10%
  • Cluster and Cluster Shared Volumes: Network adapter 2 (1 Gbps)
  • Live migration: Network adapter 2 with bandwidth capped at 90%
  • Comments: Not recommended

*This configuration is considered recommended if your configuration has a redundant network path available for Cluster and Cluster Shared Volumes communication.
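
A side note, not part of the table above: on Windows Server 2008 R2 you can see how failover clustering has classified each of these networks, and exclude a network (for example a dedicated iSCSI network) from cluster use, with the FailoverClusters PowerShell module. A minimal sketch, assuming the module is installed and using a hypothetical cluster network named "iSCSI":

   Import-Module FailoverClusters
   # List each cluster network with its role (0 = not used by the cluster, 1 = cluster only, 3 = cluster and client)
   Get-ClusterNetwork | ft Name, Role, Metric, AutoMetric
   # Keep cluster traffic off the network dedicated to iSCSI
   (Get-ClusterNetwork "iSCSI").Role = 0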

Categories: Virtualization

Hyper-V: The system time runs too fast on a Linux-based VM

March 8, 2010 Leave a comment
 
If you have created a Linux VM with a 2.6 kernel and the system time in the Linux guest operating system is running too fast, these are the steps to fix it:
 
1. Open the Linux console
2.  Edit boot menu
   vi /boot/grub/menu.lst 
 
3. In the title Linux area of this file, add the clock=pit parameter to the kernel entry.
 
 
4. Edit the NTP.CONF file
   vi /etc/ntp.conf
   
5. Add the NTP server (click here to learn more about the NTP server role in Windows 2008; a quick w32tm check is shown after these steps)

 
6. Edit the CRONTAB:
 crontab -e
 
7. Type: 30 * * * * /usr/sbin/ntpdate -su <your NTP server>
   (replace <your NTP server> with the server you configured in step 5)
 
Done. Now, every 30 minutes, the Linux VM will sync with the NTP server.
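
Related to step 5: on the Windows side, you can confirm what the Windows Time service on your Windows Server 2008 NTP server is doing with w32tm. A quick check, offered as a sketch using standard w32tm switches rather than a required step:

   rem Show the current Windows Time configuration and synchronization status
   w32tm /query /configuration
   w32tm /query /status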
 
 
Categories: Virtualization

Download a Set of Free Tools for Managing Hyper-V R2

March 4, 2010 Leave a comment
 
An option for managing Windows Server 2008 R2 Hyper-V and Hyper-V Server 2008 R2 is to use the Remote Server Administration Tools (RSAT) for Windows 7. You can download RSAT for Windows 7 for free from the Microsoft Web site.
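
After installing the RSAT package, the individual tools still have to be turned on under Windows Features. If you prefer the command line, the optional-feature names can be listed with DISM; a sketch, assuming (as on my Windows 7 machines) that the RSAT feature names contain the string RemoteServerAdministrationTools:

   rem List the RSAT-related optional features and whether they are enabled
   dism /online /get-features /format:table | findstr RemoteServerAdministrationTools
   rem Enable the desired tool using the exact feature name reported above, e.g.:
   rem dism /online /enable-feature /featurename:<feature name from the list>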
 
 
Categories: Virtualization

VMM: LAN migration using BITS fails with error HostAgentFail (2912) … and HRESULT: 0x800704DD

March 4, 2010 Leave a comment
 

This error occurs either during a new virtual machine creation or during a virtual machine migration over the network (LAN migration using BITS).

The causes for this error could be:
 
1. The VMMAgent Windows service is running under an account that is not LocalSystem.
Confirm this and, if it is the case, make the VMMAgent service run as LocalSystem on both the VMM server computer and on all host and library server computers (a quick way to check the service account is shown after cause 2 below).

 

2. The Log On account for the VMMService Windows service on the VMM server computer is an account other than LocalSystem.
Although using a domain account is a fully supported scenario, the domain account (e.g. contoso\scvmmadmin) needs to be logged on to the VMM server computer in order for BITS jobs to complete successfully. Log on as that account and then try the operation again. It should work.
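
A quick way to check which account these services are actually using is to query WMI from PowerShell on the VMM server and on each host. This is a sketch; the exact service names vary slightly between VMM versions, so the wildcard match below is an assumption:

   # List every service whose name contains "VMM", with its log-on account and state
   Get-WmiObject Win32_Service | Where-Object { $_.Name -match 'VMM' } |
       Select-Object Name, StartName, State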

 

 

Categories: Virtualization

Hyper-V R2: Storage/Network Design for High Availability

March 1, 2010 1 comment

By converting your physical servers to virtual ones, you immediately get extra capabilities that make them less likely to go down and easier to bring back up when they do:

  • Snapshots enable you to go back in time when a software update or configuration change blows up an otherwise healthy server.
  • Virtual hard disks consolidate the thousands of files that comprise a Windows server into a single file for backups, which significantly improves the reliability of those backups.
  • Volume Shadow Copy Service (VSS) support, which is natively available in Hyper-V, means that applications return from a restore with zero loss of data and are immediately ready for operation.
  • Migration capabilities improve planned downtime activities by providing a mechanism for relocating the processing of virtual machines to new hosts with little to no disruption in service.
  • Failover clustering means that the loss of a virtual host automatically moves virtual machines to new locations where they can continue doing their job.

What has become much more critical is that the servers, applications, and services keep on working.

To provide high availability, we need to design our environment properly. With the right combination of technologies, you can inexpensively increase the availability of your environment.

The best practices are based on the following design principles:

  • Redundant hardware to eliminate a single point of failure
  • Load balancing and failover for iSCSI and network traffic
  • Redundant paths for the cluster, Cluster Shared Volume (CSV), and live migration traffic
  • Separation of each traffic type for security and availability
  • Ease of use and implementation

Remember: Windows Server 2008 R2 Enterprise or Windows Server 2008 R2 Datacenter must be used for the physical computers. These servers must run the same version of Windows Server 2008 R2, including the same type of installation; that is, both servers must be either a full installation or a Server Core installation.

Also, Hyper-V requires an x64-based processor, hardware-assisted virtualization, and hardware-enforced Data Execution Prevention (DEP). Specifically, you must enable the Intel XD bit (execute disable bit) or AMD NX bit (no execute bit).
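
If you are unsure whether a given host meets these requirements, hardware DEP availability can be checked from the command line with WMI, and a tool such as Sysinternals Coreinfo will report Intel VT / AMD-V support. A quick sketch for the DEP part (a stock WMI query, not an official verification procedure):

   rem Returns TRUE when hardware-enforced DEP (the NX/XD bit) is available to the operating system
   wmic OS get DataExecutionPrevention_Available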

Servers

Use server-class equipment. Equipment that is not listed in the Windows Server catalog can impact supportability and may not best meet the needs of your virtual machines. Moving to tested and supported server-class equipment will ensure full support in case of a problem. The Windows Server catalog is available at the Microsoft Web site: http://go.microsoft.com/fwlink/?LinkId=111228

iSCSI Storage
I would recommend Dell EqualLogic, Compellent, IBM, NetApp, or EMC, but you should evaluate other vendors as well.

iSCSI Software

If you need to use software-based iSCSI, look carefully at the features available. Microsoft clustering requires the iSCSI target to support SCSI Primary Commands-3, specifically Persistent Reservations. Most commercial iSCSI software currently supports this capability, but there is very little support for it in most open source packages.

One inexpensive and easy-to-use software package is the StarWind iSCSI Target from StarWind Software. There is a free version of the StarWind iSCSI Target that allows multiple connections. You cannot get it by filling in the automated form on their site; you have to ask support@starwindsoftware.com for a free NFR unlock key.

Network

How about the network configuration? Here is my proposal, and what I am using, in terms of NICs/ports:

  • 1 for management
  • 2 private: 1 for cluster private/CSV (primary), 1 for live migration (primary)
  • 2 for virtual machine network traffic (teamed)
  • 2 for iSCSI

2 dedicated NICs/ports for the virtual machine network traffic, configured as a team. The failover cluster should be disabled from managing this network. Connectivity is provided by establishing the Hyper-V virtual switch on a network team; the team can provide load balancing, link aggregation, and failover capabilities to the virtual network.

NIC teaming is the process of grouping several physical NICs into one logical NIC, which can be used for network fault tolerance and transmit load balancing. Teaming has two purposes:

  • Fault tolerance: by teaming more than one physical NIC into a logical NIC, high availability is maximized. Even if one NIC fails, the network connection does not cease and continues to operate on the other NICs.
  • Load balancing: balancing the network traffic load on a server can enhance the functionality of the server and the network. Load balancing within NIC teams distributes traffic among the members of the team so that it is routed across all available paths.

2 dedicated NICs/ports for the cluster private/CSV traffic (minimum 1 Gb; I personally recommend 10 Gb). On a 2-node cluster you can use a crossover cable, but if you plan to use more nodes you need a switch. If you choose 10 Gb, your switch needs to be 10 Gb as well.

Cluster Shared Volumes (CSV) is a feature of failover clusters specifically designed to enhance the availability and manageability of virtual machines. Cluster Shared Volumes are volumes in a failover cluster that multiple nodes can read from and write to at the same time, which enables multiple nodes to concurrently access a single shared volume. CSV provides many benefits, including easier storage management, greater resiliency to failures, the ability to store many VMs on a single LUN and have them fail over individually, and, most notably, the infrastructure to support and enhance live migration of Hyper-V virtual machines.

Cluster private traffic will flow over the private network with the lowest cluster metric (typically a value of 1000). To view the cluster network metric settings, run the following PowerShell commands:

   Import-Module FailoverClusters
   Get-ClusterNetwork | ft Name, Metric, AutoMetric

If the automatically assigned metrics are not the desired values, note the name of the network you want to change (as reported by the previous command) and run the following PowerShell commands to set the metric manually:

   $cn = Get-ClusterNetwork "<cluster network name>"
   $cn.Metric = <value>

The cluster private/CSV network should have a value of 1000 and the live migration network a value of 1100.

2 dedicated NICs/ports for the iSCSI traffic (minimum 1 Gb; I personally recommend 10 Gb, as the difference in price is only about 10%). By the way, remember: if you choose 10 Gb, your switch, and also the storage, need to be 10 Gb. The mass-storage device controllers that are dedicated to the cluster storage should be identical, and they should use the same firmware version. Isolating iSCSI traffic to its own network path confines that traffic to its own network segment, ensuring its full availability as network conditions change. Multipath I/O software needs to be installed on the Hyper-V hosts to manage the disks properly; this is done by first enabling MPIO support on the hosts, which is not installed by default. Also, enable jumbo frames on the two interfaces identified for iSCSI.

1 NIC/port for management. External management applications (SCVMM, DMC, backup/restore, etc.) communicate with the cluster through this network.

Summarizing: hyper-r2-host-ha (diagram)
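
As a starting point for building hosts along these lines, the Hyper-V role, the Failover Clustering feature, and Multipath I/O can all be added from PowerShell on Windows Server 2008 R2. A minimal sketch, assuming the standard feature names shown here (confirm them with Get-WindowsFeature on your own build):

   # Run on each Hyper-V host from an elevated PowerShell prompt
   Import-Module ServerManager
   Add-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO -Restart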