In the second part of this series of articles about Hyper-V Networking and Configuration, I described some best practices for Hyper-V Networking and gave a few Hyper-V Networking examples. The second part can be found here if you have not read it already.
In this part, I’ll continue to use practical Hyper-V Networking examples to introduce the concepts. This time, I’ll use a more complex example which illustrates the type of networking configuration that is required when configuring Hyper-V Networking between two or more Hyper-V Hosts.
We will also describe the Hyper-V VMBUS and VSP/VSC design, and explain how it improves the communication between Virtual Machines and the Hyper-V Parent Partition. If you are not familiar with the terms used in this article, you might want to read Part I of this series here; and since this part builds on the Hyper-V Networking examples, I also recommend reading Part II here if you have not already.
We will discuss the following topics in detail:
- A Complex Hyper-V Networking Configuration Scenario
- The Hyper-V VMBUS and VSP/VSC design and how it improves communication
Some Terms:
- Hypervisor
- The Hypervisor is a layer of code which runs directly on top of the hardware. It is the lowest-level component in the system; it owns the hardware and provides access to it for the Virtual Machines running on the Hyper-V Server.
- Synthetic VM
- A Synthetic Virtual Machine runs the Integration Services components. Synthetic Virtual Machines use the VMBUS and VSP/VSC design to communicate with hardware devices and other Virtual Machines.
- Emulated VM
- An Emulated Virtual Machine is not aware of the VMBUS and VSP/VSC design or components. It communicates with the Parent Partition through emulated devices.
A Complex Hyper-V Networking Configuration Scenario
For this final scenario, I’ll assume that you’ve read Part 2 of this series here; please do so if you have not already, because you might otherwise get a bit lost. The scenarios in Part 2 deal with configuring Hyper-V Networking on a single Hyper-V Host; they do not explain how to configure Hyper-V Networking across two or more Hyper-V Hosts. This is the most complex configuration, and it will illustrate how to create Hyper-V networking between two Hyper-V Hosts and how to enable communication across Hyper-V Servers.
You will have noticed in the ‘Hyper-V Networking Best Practices’ section in Part 2 of this article series, which can be found here, that you should not create a Virtual Network Switch unnecessarily, because each one increases the overhead for the VMMS.EXE process. This is where I hope the scenarios explained here will prove helpful, because you can adapt the methods explained in this topic to achieve the results you need. There is also a checklist in Part 2 that you can use before you dig further into the actual details of the configuration.
This part covers the more typical complex Hyper-V Networking configuration that is required in most production environments. The example includes two Hyper-V Hosts, and shows how you can use the methods explained in Part 2 of this series to achieve communication between Virtual Machines running on two Hyper-V Servers whilst minimizing the overhead.
In this configuration example, you have two Hyper-V Servers: Host1 and Host2. There are four departments: Finance, HR, Sales, and R&D. Each department except R&D has Virtual Machines hosted on both Hyper-V Servers. Host1 runs six Virtual Machines: two for Sales, one for Finance, one for HR, and two for R&D. Host2 runs three Virtual Machines: one each for Sales, Finance, and HR. The R&D Virtual Machines run SQL Server. There are also three physical servers on the LAN: Server1, Server2, and Server3.
The figure below shows the Virtual Machines running on each Hyper-V Host before Hyper-V Networking is configured:
Requirements:
- Virtual Machines for the Sales department should be able to communicate with each other and with the corresponding Virtual Machines running on Hyper-V Host2. These Virtual Machines should also be able to communicate with the SQL Server 2008 instance running on Server1. Sales Virtual Machines store data on Server1, and it is very important that this data is not disclosed to any other department.
- Finance Virtual Machines need to store data on the physical server Server2, and must also communicate with the corresponding Virtual Machines on Hyper-V Host2.
- HR Virtual Machines need to access a shared folder on the physical server Server3 on the LAN, and must also communicate with the corresponding Virtual Machines on Host2.
- R&D Virtual Machines should not be able to talk to any Virtual Machines or physical servers on the LAN, and R&D Virtual Machine traffic should not be visible to any other devices.
- Hyper-V Networking should be configured in such a way that there are no communication issues between Virtual Machines running on both the Hyper-V Servers.
- R&D Virtual Machines should be configured to use a dedicated disk from SAN.
The first thing to remember is that two Hyper-V Hosts are involved, and communication must take place between Virtual Machines running on both Hyper-V Servers, except for the R&D Virtual Machines. This is completely different from the scenarios explained in Part 2 here. The requirements clearly indicate that connectivity to the External Network, or physical LAN, is needed, because every department except R&D must access servers located on the LAN. That means we need to create an External Virtual Network Switch to meet these requirements.
You must keep the traffic for controlling and managing the Parent Partition separate from the traffic between Virtual Machines across the Hyper-V Servers. We’ve already suggested that you separate the management traffic by adding one more physical NIC (assuming you have only one NIC in the Hyper-V Host) to each Hyper-V Server; this avoids communication issues between the Virtual Machines and improves the overall performance of the Hyper-V Servers.
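If you prefer to script this separation rather than use Hyper-V Manager, the sketch below shows the idea. It is a minimal example assuming the Hyper-V PowerShell module (shipped with Windows Server 2012 and later; on the Hyper-V release discussed here the same settings are made through Hyper-V Manager or WMI) and assuming the physical adapter reserved for VM traffic is named P-NIC1:

```python
# Minimal sketch: create the external switch for VM traffic on P-NIC1 only,
# so P-NIC2 stays dedicated to managing the Parent Partition.
# Assumes the Hyper-V PowerShell module (Windows Server 2012+) and the adapter name "P-NIC1".
import subprocess

def run_ps(command: str) -> str:
    """Run a PowerShell command on the Hyper-V host and return its output."""
    completed = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return completed.stdout

# Bind "Switch A" to P-NIC1 without sharing it with the management OS,
# so the Parent Partition keeps using the separate NIC (P-NIC2) for its own traffic.
print(run_ps('New-VMSwitch -Name "Switch A" -NetAdapterName "P-NIC1" -AllowManagementOS $false'))
```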
We must use the Hyper-V Virtual Switch and VLAN Tagging method described in Part 2 in order to get the results we want. Otherwise, you will end up creating a separate Hyper-V Virtual Network Switch for each requirement, resulting in poor performance on the Hyper-V Server.
Figure 1.1 shows the Hyper-V Networking configuration between the two Hyper-V Hosts, Host1 and Host2. The requirements have been achieved by using both a plain Hyper-V Virtual Network Switch and a Hyper-V Virtual Network Switch with VLAN Tagging; both of these methods are needed to meet the requirements.
You’ll also see from the diagram that the Virtual Machines (VM1, VM2, VM3 and VM4) are assigned different VLAN IDs. VM1 and VM2 belong to the Sales department; VM3 and VM4 belong to the Finance and HR departments respectively. VM1 and VM2 are assigned VLAN ID 2, while VLAN IDs 3 and 4 are assigned to VM3 and VM4. All of these Virtual Machines are connected to a Hyper-V External Network Switch called Switch A. This External Virtual Switch is not configured with any VLAN ID; in fact, it is configured in Trunk Mode.
Tip: A Hyper-V Virtual Network Switch is referred to as being configured in ‘Trunk Mode’ when it is not configured with any VLAN ID, i.e. its VLAN ID setting is left blank.
Switch A, in turn, is mapped to P-NIC1 on Hyper-V Host1. This enables communication outside of the Hyper-V Host (e.g. with servers connected to Physical Switch A). P-NIC1 on the Hyper-V Server is connected to Physical Switch A, which is configured in Trunk Mode. Server1, Server2 and Server3, which are configured with the VLAN IDs of their respective departments’ VMs, are also connected to Physical Switch A.
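As a rough scripted equivalent of the per-adapter VLAN setting in Hyper-V Manager, and again assuming the later Hyper-V PowerShell module plus the VM names from Figure 1.1, the tagging might look like this:

```python
# Sketch: give each VM's virtual network adapter its department VLAN ID.
# Switch A itself is left without a VLAN ID (Trunk Mode), so it carries all of them.
# Assumes the Hyper-V PowerShell module and the VM names shown in Figure 1.1.
import subprocess

vlan_by_vm = {
    "VM1": 2,  # Sales
    "VM2": 2,  # Sales
    "VM3": 3,  # Finance
    "VM4": 4,  # HR
}

for vm_name, vlan_id in vlan_by_vm.items():
    subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"Set-VMNetworkAdapterVlan -VMName {vm_name} -Access -VlanId {vlan_id}"],
        check=True,
    )
```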
Physical Switch A, in turn, is connected to Physical Switch C which is mapped to P-NIC1 on Hyper-V Host2.
An interesting part of the requirements is the configuration of the R&D Virtual Machines. Hyper-V Virtual Network Switch B on Host1 is configured as a Private Hyper-V Network Switch, and both R&D Virtual Machines are connected to it; this is what satisfies the R&D requirement. Strictly speaking, a separate switch would not be needed just to isolate these Virtual Machines from the other departments: assigning them a unique VLAN ID such as 5 and connecting them to Switch A would restrict communication to these two Virtual Machines alone, without creating another switch. However, the R&D requirement goes further: R&D traffic must not be seen by any other Virtual Machines or by servers located on the LAN, which is why a dedicated Private Virtual Network Switch has been created instead.
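A minimal sketch of that Private switch configuration, assuming the same Hyper-V PowerShell module and using VM5 and VM6 as placeholder names for the two R&D Virtual Machines (the article does not name them), could look like this:

```python
# Sketch: create the Private switch for R&D and connect both R&D VMs to it.
# "Switch B" matches Figure 1.1; VM5 and VM6 are placeholder names for the
# two R&D Virtual Machines, which the article does not name explicitly.
import subprocess

commands = [
    'New-VMSwitch -Name "Switch B" -SwitchType Private',
    'Connect-VMNetworkAdapter -VMName VM5 -SwitchName "Switch B"',
    'Connect-VMNetworkAdapter -VMName VM6 -SwitchName "Switch B"',
]

for command in commands:
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)
```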
Tip: You’ll remember from the discussion of tagged and untagged traffic in my previous articles that a packet generated by a Virtual Machine is sent to all the devices connected to the same switch.
On Hyper-V Host2, there are three Virtual Machines running (VM7, VM8 and VM9), with VLAN IDs 2, 3, and 4 assigned to them respectively. Switch C, a Hyper-V External Network Switch, is created and mapped to P-NIC1 on Hyper-V Host2. These Virtual Machines belong to Sales, Finance and HR and are connected to Switch C. Switch C, in turn, is connected to Physical Switch C so that traffic generated by these Virtual Machines can be seen by Physical Switch C.
Physical Switch A and Physical Switch C are connected to each other so that they accept and forward any traffic generated by the Virtual Machines on both Hyper-V Servers. As shown in Figure 1.1 above, both physical switches are configured in Trunk Mode so that they can accept packets from devices configured with VLAN IDs 2, 3, and 4.
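On Host2 the same per-adapter VLAN tagging is applied to VM7, VM8 and VM9. A quick way to verify the result from a script, again assuming the Hyper-V PowerShell module is available, is shown below:

```python
# Sketch: report the VLAN mode and ID of each VM network adapter on Host2,
# to confirm VM7, VM8 and VM9 carry VLAN IDs 2, 3 and 4 respectively.
import subprocess

completed = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-VMNetworkAdapterVlan -VMName VM7,VM8,VM9"],
    capture_output=True, text=True, check=True,
)
print(completed.stdout)
```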
P-NIC2, a physical NIC on Hyper-V Host1, is reserved for controlling and managing the Parent Partition.
The R&D Virtual Machines run SQL Server 2008. Per the requirement stated above, the SQL Server databases must be configured on a dedicated volume to avoid any performance issues. P-NIC3 is therefore an HBA connected to the SAN device, and the E:\ and F:\ drives are configured in the Parent Partition for use by the R&D Virtual Machines.
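As an illustration only, and assuming the Hyper-V PowerShell module plus placeholder VM names, VHD paths and sizes (the article specifies none of these), the data disks on the SAN-backed E:\ and F:\ volumes could be created and attached like this:

```python
# Sketch: create a data VHD on each SAN-backed volume (E:\ and F:\) in the
# Parent Partition and attach it to an R&D VM. VM names, paths and sizes are
# placeholders for illustration only.
import subprocess

data_disks = {
    "VM5": r"E:\RD-SQL1-Data.vhd",
    "VM6": r"F:\RD-SQL2-Data.vhd",
}

for vm_name, vhd_path in data_disks.items():
    for command in (
        f'New-VHD -Path "{vhd_path}" -SizeBytes 100GB',               # create the data disk on the SAN volume
        f'Add-VMHardDiskDrive -VMName {vm_name} -Path "{vhd_path}"',  # attach it to the R&D VM
    ):
        subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)
```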
You’ll have read in my previous article that neither of the Hyper-V External Network Switches, Switch A and Switch C, is configured with a VLAN ID. If you assign a VLAN ID to the Virtual Switch itself, its traffic is tagged with that VLAN ID and communication is allowed only for packets tagged with that VLAN ID.
You should not face any performance issues if you follow the guidelines provided in Part 2 and in this section when configuring Hyper-V Networking to meet requirements like these.
The next section explains the underlying components which play an important role in improving the networking performance of Hyper-V Server and Virtual Machines running on it. You will also learn how these components help speed up the network communication between Virtual Machines.
The Hyper-V VMBUS and VSP/VSC design and how it improves communication
As you know, Microsoft Hyper-V Networking is quite different from that of its competitor, VMWare. Hyper-V Networking relies on components called VMBUS and VSP/VSC, which run in the Virtual Machines and the Parent Partition. The VMBUS and VSP/VSC components are part of Integration Services. The Integration Services components are the equivalent of VMWare’s VMWare Tools, and of the Additions found in Virtual Server. Explaining the Integration Services components in detail is out of scope for this part of the series.
We will learn about the following topics in this section:
- Microsoft Hyper-V Hypervisor
- VMBUS communication Channel and VSP/VSC design
Microsoft Hyper-V Hypervisor
Microsoft Hyper-V runs directly on top of the hardware. Here are some facts about Microsoft Hyper-V:
- Hyper-V is a hypervisor-based technology.
- Operating Systems execute in partitions
- The Virtualization Stack has direct access to hardware.
- The parent partition creates child partitions through the Hypercall API.
- Child partitions do not have direct access to Hardware.
- VMBUS is a logical inter-partition communication channel and is available only when Integration Services are installed.
- Parent partition hosts VSPs. Child Partitions host VSCs.
- VSCs communicate with VSPs over VMBUS to handle device access.
- The hypervisor handles processor interrupts and redirects them to the respective partition.
- When a new VM is created in parent Partition, a new Child Partition is created to host that VM.
- The Microsoft Hypervisor is a layer of code that runs natively on the hardware. It is the lowest level in the system and is responsible for hardware and resource partitioning. The location of drivers and of some OS core components differs from the VMWare design.
- Drivers exist in the Parent partition and not in the Hypervisor itself. This reduces the size of the Hypervisor.
- Microsoft Hypervisor is 600KB in size.
- It is very secure, because no third-party code can be injected into it.
VMBUS communication Channel and VSP/VSC design
Microsoft’s developers have designed unique components for Hyper-V: VMBUS and VSP/VSC. VMBUS stands for Virtual Machine BUS. These components play an important role in providing Virtual Machines with access to the hardware, and VMBUS functions as the communication layer for Virtual Machines.
The VMBUS is a logical component over which a VSC and a VSP talk to each other. VSP stands for Virtualization Service Provider; VSPs always run in the Parent Partition on the Hyper-V Server. There are four VSPs running in the Parent Partition, serving requests coming from the corresponding Virtualization Service Clients (VSCs) running in the child partitions. The four VSP/VSC pairs are:
- Network VSC/VSP
- Storage VSC/VSP
- Video VSC/VSP
- HID VSC/VSP
VSCs running in Child Partitions always communicate with the VSPs running in the Parent Partition over the VMBUS communication channel. There is only one VMBUS component running in the Parent Partition, and it talks to the VMBUS component running in each Child Partition.
As you can see in Figure 1.2 above, there are four Virtual Machines, or I should perhaps say four Child Partitions (VM1, VM2, VM3, and VM4). VMBUS is a component running in all of the partitions that enables communication with the components running in the Parent Partition (the VSPs). Explaining all of the VSP components is out of scope for this article, so we will stick to the Network VSP/VSC design and see how it improves overall Hyper-V Networking performance.
Before Microsoft Hyper-V, Virtual Server dominated the market for a number of years. Virtual Server did not perform very well, as it operated in much the same way as VMWare’s GSX Server. In Hyper-V, Microsoft introduced the VMBUS and VSP/VSC components, which helped Microsoft gain market share in Server Virtualization. Microsoft’s overall goal was to remove the extra communication layer that existed in Virtual Server, and the VMBUS and VSP/VSC design in Hyper-V does exactly that.
There are two types of Virtual Machine that can operate in a Hyper-V environment: the Emulated Virtual Machine and the Synthetic Virtual Machine. As shown in Figure 1.2, VM1 through VM3 are Synthetic Virtual Machines and VM4 is operating as an Emulated Virtual Machine. A Synthetic Virtual Machine knows that it is running in a Hyper-V environment and knows how to talk to the Parent Partition to access hardware. An Emulated Virtual Machine relies entirely on emulated devices for hardware access, as it has no knowledge of the VMBUS and VSP/VSC components.
Microsoft and VMWare both provide a set of components that you are recommended to install in each Virtual Machine; they are called Integration Services and VMWare Tools respectively.
VMBUS and VSP/VSC are part of Integration Services; these components are available only once you have installed Integration Services in the Virtual Machine. As shown in Figure 1.2, Integration Services are installed in VM1, VM2, and VM3, but not in VM4, so you do not see the VMBUS and VSC components in the VM4 child partition. VM4 therefore operates differently: it has an extra communication layer between itself and the hardware. Any traffic generated from VM4 for hardware access is handled by the emulated devices rather than by the VMBUS path, so VM4 cannot talk to the rest of the devices as efficiently. VM4 works in the same way as a Virtual Server VM.
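A simple way to see this difference from inside a Windows guest is to look for VMBus devices in Plug and Play. The sketch below assumes the third-party Python wmi package is installed in the guest; it is only an illustration of the check, not part of Hyper-V itself:

```python
# Sketch: inside a Windows guest, list Plug and Play devices whose name mentions
# VMBus. A Synthetic VM (Integration Services installed) exposes several such
# devices; an Emulated VM exposes none. Requires the third-party "wmi" package.
import wmi

conn = wmi.WMI()
vmbus_devices = [dev.Name for dev in conn.Win32_PnPEntity()
                 if dev.Name and "vmbus" in dev.Name.lower()]

if vmbus_devices:
    print("Synthetic VM - VMBus devices found:")
    for name in vmbus_devices:
        print("  " + name)
else:
    print("No VMBus devices found; this VM is using emulated devices only.")
```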
As you can see in Figure 1.2 above, two Hyper-V Virtual Network Switches have been created: vSwitch 1 and vSwitch 2. These switches are directly connected to the Network Virtualization Service Provider (VSP) in the Parent Partition. The VMBUS communication channel takes the requests coming from the Network Virtualization Service Clients (VSCs) running in the Child Partitions and forwards them to the VSP running in the Parent Partition. This communication path is faster and more efficient.
In Figure 1.2 above, you can also see something called a Third-Party VSC, which is running in VM3. VM3 is running a Linux distribution that is a supported Guest Operating System in a Hyper-V environment. Microsoft has also designed Integration Services components for non-Windows Operating Systems, including Linux. A complete list of supported Guest and Server Operating Systems can be found in Part 1 of this series here. The Third-Party VSC knows how to talk to the VSP running in the Parent Partition.
Tip: An Operating System is called a Supported Operating System in a Hyper-V environment if the Integration Services components can be installed on it successfully.
You cannot expect to improve the overall performance of Virtual Machines running in a Hyper-V environment without installing the Integration Services components; they are a component you must install in every Virtual Machine. Integration Services makes a Virtual Machine aware that it is running in a Hyper-V environment and that it should use VMBUS and VSP/VSC to communicate with the Parent Partition and other Virtual Machines.
When you install Integration Services, a set of drivers is installed in the Virtual Machine. For the Network VSC, the driver installed in the Virtual Machine is netvsc60.sys or netvsc50.sys, depending on the guest Operating System. These drivers have been written specifically for Hyper-V Networking and communicate with the Network VSP to improve overall networking performance.
As shown in Figure 1.3, once the Integration Services components have been installed, a Synthetic Virtual Machine will have a “Microsoft VMBus Network Adapter” listed under the Network adapters category in Device Manager. An Emulated Virtual Machine will have the emulated device driver installed instead; Emulated Virtual Machines emulate known hardware: an Intel 440BX motherboard and an Intel 21140-based network adapter.
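If you want to confirm from inside the guest which type of network adapter is in use, a small check along the following lines works; it again assumes the third-party Python wmi package, and the adapter and driver names may vary slightly between guest Operating System versions:

```python
# Sketch: inside the guest, report whether each network adapter is the synthetic
# VMBus adapter (netvsc driver) or the emulated Intel 21140-based adapter.
# Requires the third-party "wmi" package; names can vary by guest OS version.
import wmi

conn = wmi.WMI()
for nic in conn.Win32_NetworkAdapter():
    name = nic.Name or ""
    service = (nic.ServiceName or "").lower()
    if "netvsc" in service or "vmbus" in name.lower():
        kind = "synthetic (VMBus) network adapter"
    elif "21140" in name:
        kind = "emulated Intel 21140-based adapter"
    else:
        continue  # skip adapters that are neither (e.g. loopback or tunnel adapters)
    print(f"{name}: {kind}")
```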
You can install the Integration Services components only if the Guest or Server Operating System is supported by Hyper-V. For a list of supported Operating Systems, you can read Part 1 of this series here.
Below is a comparison between a Supported Operating System and an Unsupported Operating System in a Hyper-V environment, showing which Hyper-V components are available in each case.
Conclusion
In this article we saw how Hyper-V Networking is configured between two or more Hyper-V Hosts. We also learned how Hyper-V improves network communication with the help of the VMBUS and VSP/VSC components, and we looked at Synthetic and Emulated Virtual Machines and how they differ from each other.
In the final part of this series of articles, we will primarily focus on the following topics:
- Three Ways to configure VLAN Trunking for Hyper-V Virtual Machines
- Configuring Hyper-V Networking using SCVMM