Understanding Hyper-V: Remote Direct Memory Access

When most of us think of Hyper-V, we think of a group of virtual machines sharing access to a system's resources. With Windows Server 2022, Hyper-V includes Remote Direct Memory Access (RDMA).

RDMA allows one computer to directly access the memory of another computer without involving either machine's operating system. This gives systems high throughput and low-latency networking, which is useful when clustering systems (including Hyper-V).

In Windows Server 2012 R2, RDMA services couldn't be bound to a Hyper-V virtual switch, so RDMA traffic had to use physical network adapters separate from the adapters assigned to the virtual switch. Because of this, a higher number of physical network adapters had to be installed on the Hyper-V host.

Because of the RDMA improvements in Windows Server 2022, you can use fewer network adapters while using RDMA.
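As a rough sketch of what this looks like in practice (the adapter name "NIC 1" is a placeholder for one of your host's adapters), you can inspect and enable RDMA on a host's network adapters with the built-in NetAdapter cmdlets:

```powershell
# List the host's adapters that support RDMA and show whether it is enabled
Get-NetAdapterRdma

# Enable RDMA on a specific physical adapter (adapter name is a placeholder)
Enable-NetAdapterRdma -Name "NIC 1"
```

After this, RDMA-capable workloads such as SMB Direct can take advantage of the adapter without the extra CPU overhead of the normal network stack.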

Switch Embedded Teaming

Earlier we discussed NIC Teaming, but we also have the ability to do Switch Embedded Teaming (SET). SET can be an alternative to using NIC Teaming in environments that include Hyper-V and the Software-Defined Networking (SDN) stack in Windows Server 2022. SET is available in all versions of Windows Server 2022 that include Hyper-V and the SDN stack.

SET integrates some of the functionality of NIC Teaming into the Hyper-V virtual switch, allowing you to combine a group of physical adapters (a minimum of one adapter and a maximum of eight adapters) into software-based virtual adapters.

By using virtual adapters, you get better performance and greater fault tolerance in the event of a network adapter failing. For SET to be enabled, all of the physical network adapters must be installed in the same physical Hyper-V host.

One of the requirements of SET is that all network adapters in the SET group be identical adapters, meaning the same adapter model from the same manufacturer.

One main difference between NIC Teaming and SET is that SET supports only Switch Independent mode. As a reminder, Switch Independent mode means the NICs control the teaming; they can be connected to the same switch or to different switches.
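Once a SET-enabled switch exists, you can confirm its teaming mode and member adapters from PowerShell. A brief sketch (the switch name matches the StormSwitch example used later in this section):

```powershell
# Display the teaming mode, load-balancing algorithm, and member
# adapters of a SET-enabled virtual switch (switch name is a placeholder)
Get-VMSwitchTeam -Name "StormSwitch"
```

The output's TeamingMode property will always show SwitchIndependent for a SET team, since that is the only mode SET supports.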

You need to create a SET team at the same time you create the Hyper-V virtual switch.

You can do this by using the Windows PowerShell cmdlet New-VMSwitch. At the time you create a Hyper-V virtual switch, you must include the -EnableEmbeddedTeaming parameter in your command syntax. The following example creates a Hyper-V switch named StormSwitch:

New-VMSwitch -Name StormSwitch -NetAdapterName "NIC 1","NIC 2" -EnableEmbeddedTeaming $true

You also have the ability to remove a SET team by using the following PowerShell command. This example removes a virtual switch named StormSwitch:

Remove-VMSwitch "StormSwitch"

Storage Quality of Service

Windows Server 2022 Hyper-V includes a feature called Storage Quality of Service (QoS). Storage QoS allows a Hyper-V administrator to manage how virtual machines access storage throughput for virtual hard disks.

Storage QoS gives you the ability to guarantee that the storage throughput of a single VHD cannot adversely affect the performance of another VHD on the same host. It does this by giving you the ability to specify the maximum and minimum I/O loads based on I/O operations per second (IOPS) for each virtual disk in your virtual machines.

To configure Storage QoS, you would set the maximum IOPS values (or limits) and set the minimum values (or reserves) on virtual hard disks for virtual machines.
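A minimal sketch of setting these values with the Hyper-V PowerShell module (the VM name, controller numbers, and IOPS values are all illustrative placeholders):

```powershell
# Reserve 100 IOPS for this virtual disk and cap it at 500 IOPS
# (VM name and controller location are placeholders for your environment)
Set-VMHardDiskDrive -VMName "StormVM" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 500
```

With this in place, the disk is guaranteed at least the reserve when the storage can deliver it, and it will never consume more than the limit, protecting other VHDs on the same host.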

Installing Hyper-V Integration Components

Hyper-V Integration Components, also called Integration Services, are required to make your guest operating system hypervisor-aware. Similar to the VM Additions that were part of Microsoft Virtual Server 2005, these components improve the performance of the guest operating system once they are installed. From an architectural perspective, virtual devices are redirected directly via the VMBus; thus, quicker access to resources and devices is provided.

If you do not install the Hyper-V Integration Components, the guest operating system uses emulation to communicate with the host's devices, which of course makes the guest operating system slower.

Exercise 2.5 shows you how to install Hyper-V Integration Components on one of your virtual machines running Windows Server 2022.

EXERCISE 2.5

Installing Hyper-V Integration Components

  1. Open Hyper-V Manager.
  2. In Hyper-V Manager, in the Virtual Machines pane, right-click the virtual machine on which you want to install the Hyper-V Integration Components and click Start.
  3. Right-click the virtual machine again and click Connect. By now, your virtual machine should already be booting.
  4. If you need to log into the operating system of your virtual machine, do so.
  5. Starting with Windows Server 2012 R2, Integration Services are no longer installed from an emulated floppy disk as they were in earlier versions; instead, they are delivered as a Windows update. So now that the virtual machine is set up, run your updates on the Hyper-V host along with the updates for the Hyper-V guest. After you reboot, the Integration Components should be installed and ready to go.
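After the guest reboots, you can verify from the host that the Integration Services are present and running. A hedged sketch (the VM name is a placeholder):

```powershell
# List the Integration Services available to a VM and their enabled state
Get-VMIntegrationService -VMName "StormVM"

# Enable a specific service, such as the guest heartbeat, if it is disabled
Enable-VMIntegrationService -VMName "StormVM" -Name "Heartbeat"
```

Each service (heartbeat, time synchronization, data exchange, and so on) can be enabled or disabled individually, which is handy when a security policy requires some of them to be turned off.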
Linux and FreeBSD Image Deployments

One of the features of Windows Server 2022 is the ability for Hyper-V to support Linux and FreeBSD virtual machines. Hyper-V can support these virtual machines because it has the ability to emulate Linux and FreeBSD devices. Because Hyper-V emulates these devices, no additional software needs to be installed on Hyper-V.

Unfortunately, because Hyper-V has to emulate these devices, you lose some Hyper-V functionality, such as high performance and full management of the virtual machines. So it's a trade-off: you get to run Linux and FreeBSD Hyper-V virtual machines, but you lose some of the benefits of Hyper-V.

But wait; there is a way to get your Hyper-V functionality back. This issue can be resolved by installing the appropriate drivers in the guest operating systems: Linux Integration Services (LIS) for Linux and FreeBSD Integration Services (BIS) for FreeBSD. By putting these drivers into a Linux or FreeBSD guest, you can have Hyper-V with all of the features Microsoft offers.

To get these drivers and make Hyper-V work with all of its functionality, make sure that you install a newer release of Linux that includes LIS built in. To get the most out of FreeBSD, use version 10.0 or later, which ships with BIS included. For FreeBSD versions older than 10.0, Microsoft offers ports that provide the BIS drivers, which must be installed separately. Hyper-V will still run Linux and FreeBSD without any additional drivers, but only with the reduced, emulated feature set; with drivers and equipment that support Linux and FreeBSD, you get all of the Hyper-V features your organization may need.

I have personally installed Kali Linux and Parrot Linux on Windows Server 2022 Hyper-V, so you have many different options when installing Linux. The installation screens will differ, but the installation of these versions of Linux can be done easily. The only issue that I have encountered when installing Kali and Parrot is that I need to choose Generation 1 when creating these virtual machines.
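As a sketch of creating such a Generation 1 virtual machine from PowerShell (the VM name, paths, and sizes are placeholders for your own environment):

```powershell
# Create a Generation 1 VM with a new 40 GB virtual hard disk
# (name, memory, and paths are illustrative placeholders)
New-VM -Name "Kali" -Generation 1 -MemoryStartupBytes 2GB `
    -NewVHDPath "C:\VMs\Kali.vhdx" -NewVHDSizeBytes 40GB

# Attach the downloaded installation ISO to the VM's DVD drive
Set-VMDvdDrive -VMName "Kali" -Path "C:\ISO\kali.iso"
```

Starting the VM and connecting to it then boots the Linux installer from the attached ISO.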

In Exercise 2.6, I will show you how to install Linux into a virtual machine. I will then walk you through a full installation of a Linux server. Before you complete this lab, you must download a copy of Linux. For this exercise, I downloaded a free copy of Ubuntu Linux as an image file (ISO). If you choose a different version of Linux, the installation screens during the exercise may be different.
