Archive

Archive for March, 2010

Desktop Virtualization Roadshow: Microsoft and Citrix are coming to Brisbane and Canberra, Australia

 
Join us to learn how Microsoft and Citrix can help you discover your choices and preserve and extend your existing investments. Interact with Microsoft personnel, industry experts, and IT leaders, and get your questions answered.

Learn how server & desktop virtualization can help you:

  • Build a desktop virtualization management strategy that helps you manage your applications, data, mobile workers and multiple physical and virtual form factors.
  • Reduce desktop costs.
  • Enable flexible and agile IT through virtualization.
  • Increase desktop security and compliance.
  • Improve business continuity and end user productivity.
  • Understand how Microsoft is building a solid foundation for a private cloud.
  • Increase end user productivity and streamline your IT management with Windows 7.

  • Apr 29: Brisbane, Australia
  • May 6: Canberra, Australia

 

Register today!

 

 

 

Categories: Virtualization

VMware creates patch to fix performance gap behind Hyper-V R2 and XenServer

 

In a recent update to phase II of Project VRC, VMware's vSphere came in behind Hyper-V R2 and XenServer.

Although XenServer and Hyper-V R2 performed nearly identically, VMware fell behind by 10%-30%. VMware responded to the findings and wrote a fix for VMware vSphere 4.0 to try to close the gap:

 "…What we discovered led us to create a vSphere patch that would allow users to improve performance in some benchmarking environments."

There are three specific conditions that, together, trigger this behavior:

  1. A Xeon 5500 series processor is present with Hyper-Threading enabled,
  2. CPU utilization is near saturation, and
  3. A roughly one-to-one mapping exists between vCPUs and logical processors.

In this scenario, VMware vSphere favors fairness over throughput and sometimes pauses one vCPU to dedicate a whole core to another vCPU, eliminating the gains provided by Hyper-Threading. For example, a host with two quad-core Xeon 5500s and Hyper-Threading enabled exposes 16 logical processors, so sixteen busy vCPUs on that host give exactly the one-to-one mapping described above.
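
If you want to check whether one of your hosts matches these conditions, one rough approach (a sketch based on esxtop as shipped with ESX 4.0; verify the panel key and column names against your own build) is the CPU view of esxtop:

   # On the ESX host console (or remotely via resxtop):
   esxtop          # press 'c' to switch to the CPU panel
   # With Hyper-Threading enabled, one PCPU line appears per logical processor
   # (twice the physical core count on a Xeon 5500). PCPU USED(%) values near
   # 100 across the board indicate the near-saturation case, and a vCPU count
   # roughly equal to the PCPU count gives the one-to-one mapping.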

Categories: Virtualization

Virtualizing Terminal Server: analyzing Terminal Services (TS) workloads running on the latest generation hardware and hypervisors

 

Virtualizing Terminal Server and Citrix XenApp workloads is highly recommended, both for the management and consolidation benefits and, if you have been running x86 Terminal Servers so far, for better scalability.

So, if you are planning to run Terminal Server in a virtual environment, you should read the Project VRC phase 2 whitepaper, which focuses entirely on analyzing Terminal Services (TS) workloads running on the latest generation of hardware and hypervisors.

Here is a brief summary:

"When comparing hypervisors, performance is close to equal throughout when no Hyper-threading is used by the VM’s. In all test the hypervisors perform with a 5% range with the Terminal Server workloads, with a slight edge for vSphere 4.0. Utilizing Hyper-Threading, on all platforms a performance increase in seen, but vSphere 4.0 trailing slightly by 15% in comparison to XenServer 5.5 and Hyper-V 2.0. These differences are only visible under full load.

Strikingly, XenServer 5.5 and Hyper-V 2.0 perform almost identical in all tests. The only differentiator between these two is that XenServer 5.5 (and vSphere 4.0) support 8vCPU’s, where Hyper-V 2.0 has a maximum of 4vCPU per VM" 

If you are curious about the impact of different hypervisors, the performance differences across various hardware, and best practices for your virtual desktops, the Project VRC whitepapers are a must-read!

 

 

Categories: Virtualization

Dynamic Memory Coming to Hyper-V

 
Most of the time we overprovision our hardware and don't use it efficiently, which in turn raises the TCO. Sometimes that's because we follow vendor recommendations or minimum system requirements, or simply because our users complain. Either way, this is not the solution.
Wouldn’t it be great if your workloads automatically and dynamically allocated memory based on workload requirements and you were provided a flexible policy mechanism to control how these resources are balanced across the system?
 
Check out more here: Windows Virtualization Team Blog
Categories: Virtualization

Microsoft System Center Virtual Machine Manager Agent — WS-Management service is either not installed or disabled.

 
If you are trying to add a Microsoft Virtual Server 2005 R2 SP1 host, running under Windows 2003 SP2, to System Center Virtual Machine Manager and are getting this error:


"Product: Microsoft System Center Virtual Machine Manager Agent — WS-Management service is either not installed or disabled. Verify that Windows Hardware Management is installed and the Windows Remote Management service is enabled. Refer to the FAQ for more information."

You should install KB 936059. This update adds a new feature to Windows Remote Management (WinRM) and also updates the version of WinRM that is included with Windows Server 2003 R2.
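
If you want to verify the WinRM service state from a command prompt before and after installing the update, something like the following works (a quick sanity check; winrm quickconfig prompts before making any changes, so review what it proposes):

   REM Check whether the Windows Remote Management service is installed and running
   sc query winrm

   REM Enable the WinRM listener with its default settings (interactive prompts)
   winrm quickconfig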

 

 

Post-installation instructions

On a Windows Server 2003 R2-based computer that has the Windows Hardware Management optional component installed, the WsmanSelRg collector-initiated subscription collects events from the BMC. This subscription has a URI property that is set to wsman:microsoft/logrecord/sel. After you apply this hotfix, the WsmanSelRg subscription will not work until you update its URI property; the exact value, and the command to run at a command prompt to set it, are listed in the KB 936059 article.
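
As a starting point, you can dump the subscription's current configuration, including its URI, with the Windows Event Collector utility (a hedged example; the subscription name must match what is configured on your collector):

   REM Show the WsmanSelRg subscription's settings, including the URI property
   wecutil gs WsmanSelRg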

IMPORTANT NOTE:

If you are not running the Virtual Server 2005 R2 SP1 version, you will get error 10418: "Virtual Machine Manager cannot add host xxx.xxx.xxx.xxx because the version of Virtual Server installed on the host is not supported."

 

Categories: Virtualization

Windows 2008 R2: No restrictions for network adapter teaming in a cluster environment

 
Network teaming, which allows us to group network adapter ports for a connection to a single physical network, is available from some hardware manufacturers to provide fault tolerance. It means that if connectivity through one port stops working, another port is activated automatically. This operation is transparent to the operating system and other devices on the network.

In Windows Server 2008 and Windows Server 2008 R2, there are no restrictions associated with NIC teaming and the Failover Clustering feature; teaming can be used on any network interface in a failover cluster.

 

The following table details the recommended, supported, and not recommended network configurations for live migration, and is organized in the order in which each network configuration is commonly used. Before reviewing the table, note the following:

  • When a network adapter is connected to a virtual switch, it is referred to as a virtual network adapter.

  • Network access for virtual machines can be on either a public or private network. To allow virtual machines access to computers on the physical network, they must be on a public network. The requirements for virtual machine access vary depending on network I/O needs and the number of virtual machines you are running on a single physical server.

  • In addition to the preferred network for the cluster and the Cluster Shared Volumes, a cluster should have at least one additional network for communication. This increases the high availability of the cluster. The cluster should also be on a private network.

  • If a network configuration is listed as “not recommended” in the following table, it should not be used, because the performance of live migrations declines and cluster nodes might crash. Add another network adapter to separate live migration traffic from Cluster Shared Volumes traffic.

 

| Host configuration | Virtual machine access | Management | Cluster and Cluster Shared Volumes | Live migration | Comments |
|---|---|---|---|---|---|
| 4 network adapters with 1 Gbps | Virtual network adapter 1 | Network adapter 2 | Network adapter 3 | Network adapter 4 | Recommended |
| 3 network adapters with 1 Gbps; 2 adapters are teamed for link aggregation (private) | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 10% | Network adapter 2 (teamed) | Network adapter 2 with bandwidth capped at 40% (teamed) | Supported |
| 3 network adapters with 1 Gbps | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 10% | Network adapter 2 | Network adapter 3 | Supported |
| 2 network adapters with 10 Gbps | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 1% | Network adapter 2 | Network adapter 2 with bandwidth capped at 50% | Supported* |
| 2 network adapters with 10 Gbps; 1 network adapter with 1 Gbps | Virtual network adapter 1 (10 Gbps) | Network adapter 2 (1 Gbps) | Network adapter 3 (10 Gbps) | Network adapter 2 with bandwidth capped at 50% | Supported |
| 2 network adapters with 10 Gbps; 2 network adapters with 1 Gbps | Virtual network adapter 1 (10 Gbps) | Network adapter 2 (1 Gbps) | Network adapter 3 (1 Gbps) | Network adapter 4 (10 Gbps) | Supported |
| 3 network adapters with 1 Gbps; 2 adapters are teamed for link aggregation (public) | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 5% | Network adapter 2 (teamed) | Network adapter 2 with bandwidth capped at 90% (teamed) | Not recommended |
| 2 network adapters with 1 Gbps | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 10% | Network adapter 2 | Network adapter 2 with bandwidth capped at 90% | Not recommended |
| 1 network adapter with 10 Gbps; 1 network adapter with 1 Gbps | Virtual network adapter 1 (10 Gbps) | Virtual network adapter 1 with bandwidth capped at 10% | Network adapter 2 (1 Gbps) | Network adapter 2 with bandwidth capped at 90% | Not recommended |

*This configuration is considered recommended if your configuration has a redundant network path available for Cluster and Cluster Shared Volumes communication.

Categories: Virtualization

Hyper-V: The system time runs too fast on a Linux-based VM

 
If you created a Linux VM with a 2.6 kernel and the system time in the Linux guest operating system runs too fast, these are the steps to fix it:
 
1. Open the Linux console.
2. Edit the boot menu:
   vi /boot/grub/menu.lst
 
3. In the Linux title section of this file, add the clock=pit parameter to the kernel entry, as in the example below.
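
For reference, an edited entry might look like this (the kernel version, root device, and paths are placeholders; keep your own values and just append clock=pit):

   title Linux
       root (hd0,0)
       kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 clock=pit
       initrd /initrd-2.6.18-8.el5.img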
 
 
4. Edit the ntp.conf file:
   vi /etc/ntp.conf
   
5. Add the NTP server, as in the example below (see the post on this blog about the NTP server in Windows 2008 for more details).
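
A minimal entry looks like this (ntp.example.com is a placeholder; point it at your own NTP server, for example a domain controller):

   # /etc/ntp.conf
   server ntp.example.com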

 
6. Edit the crontab:
   crontab -e

7. Add this line (the trailing server name is a placeholder for the NTP server you configured above):
   30 * * * * /usr/sbin/ntpdate -su ntp.example.com
 
Done. Now, every 30 minutes, the Linux VM will sync with the NTP server.
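
To verify the setup without waiting for cron, you can force a one-off sync by hand (again, the server name is a placeholder):

   # Force an immediate sync, then check the clock
   /usr/sbin/ntpdate -u ntp.example.com
   date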
 
 
Categories: Virtualization