In this post, I describe the steps necessary to integrate OpenStack Ussuri with a single-node Hyper-V compute host using Open vSwitch provider+VXLAN networking. Open vSwitch is mandatory when you plan to use VXLAN networking.
Setup of the following components is not covered by this blog post.
Windows Server version
This blog post was written using the English Windows Server Core 2019 Datacenter Evaluation edition.
I recommend either Windows Server Core 2019 or Hyper-V Server 2019. You can also install Windows Server 2019 with Desktop Experience if you are more familiar with the full Windows desktop. The default installation is Server Core, which provides a command-line-focused interface.
The server editions, their costs, virtualization restrictions and guest licensing are summarized in the following table. The comparison is based on the Comparison of Standard and Datacenter editions of Windows Server 2019 page.
Windows version | Licensing | Max VMs | Guest VM licensing included |
---|---|---|---|
Windows Server 2019 Standard | paid license | 2 per license | no |
Windows Server 2019 Datacenter | paid license | unlimited | yes (AVMA) |
Hyper-V Server 2019 | free | unlimited | no |
Localized Windows Server (or Hyper-V Server) versions can be affected by various incompatibility problems, so I advise you to choose the English version.
Basic configuration
Log on to the freshly installed server, set the password and start the configuration from the command line.
You can install hardware drivers using pnputil as required.
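For example, assuming your driver packages are extracted under C:\drivers (the path is only an illustration), you can stage and install every .inf file found there:

pnputil /add-driver C:\drivers\*.inf /subdirs /install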
Server Core and Hyper-V Server can be configured using the sconfig utility.
Microsoft (R) Windows Script Host Version 5.812
Copyright (C) Microsoft Corporation. All rights reserved.

Inspecting system...

===============================================================================
                         Server Configuration
===============================================================================

1) Domain/Workgroup:                    Domain:  example.net
2) Computer Name:                       OSHV2K19
3) Add Local Administrator
4) Configure Remote Management          Enabled

5) Windows Update Settings:             DownloadOnly
6) Download and Install Updates
7) Remote Desktop:                      Enabled (all clients)

8) Network Settings
9) Date and Time
10) Telemetry settings                  Unknown
11) Windows Activation

12) Log Off User
13) Restart Server
14) Shut Down Server
15) Exit to Command Line

Enter number to select an option:
Configure NTP if the server is not a domain member.
net stop w32time
w32tm /config "/manualpeerlist:pool.ntp.org,0x8" /syncfromflags:MANUAL
net start w32time
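You can then verify that time synchronization is working with the built-in query option:

w32tm /query /status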
The firewall can be switched off in a lab environment.
netsh advfirewall set allprofiles state off
Start PowerShell in a new window.
start powershell
You need to install the Hyper-V role on Windows Server 2019. Hyper-V Server 2019 comes with these roles built in.
PS > Add-WindowsFeature Hyper-V
PS > Add-WindowsFeature Hyper-V-PowerShell
Enable remote PowerShell for remote Hyper-V management.
PS > Enable-PSRemoting
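To check remoting, you can open a remote session from a management host (OSHV2K19 is the computer name from the sconfig output above; adding the server to the client's WinRM TrustedHosts list is only needed if the management host is not a domain member):

PS > Set-Item WSMan:\localhost\Client\TrustedHosts -Value "OSHV2K19" -Force
PS > Enter-PSSession -ComputerName OSHV2K19 -Credential OSHV2K19\Administrator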
Cloudbase provides signed Open vSwitch builds for Windows, but the latest downloadable stable version is 2.10.0 at the time of writing. This is an old version and I have experienced serious packet loss with it.
So I built my own Open vSwitch MSI package from the source tree and installed that. Fortunately, I did not have to set up a Windows build platform, because I used the AppVeyor build service. Just fork the ovs repository under your GitHub account, link the forked repository to your AppVeyor account, create a new build and download the MSI installer. The latest commit in the repository was f0e4a73 when my MSI installer was made.
The dependencies that need to be installed on the target Hyper-V host can be found in appveyor.yml.
Install Visual Studio 2013/2019 runtime.
vcredist_x64.exe
Install OpenSSL 1.0.2u.
Win64OpenSSL-1_0_2u.exe
AppVeyor Open vSwitch builds are not signed, therefore you have to configure Windows to accept unsigned Open vSwitch drivers. Run the following commands from the command line and reboot Windows.
bcdedit /set LOADOPTIONS DISABLE_INTEGRITY_CHECKS
bcdedit /set TESTSIGNING ON
bcdedit /set nointegritychecks ON
shutdown /r
Install your Open vSwitch build. Enabling installer logging (/l*v ovs-install-log.txt) is not mandatory but can be useful at a later time.
msiexec /i OpenvSwitch-Release.msi /qr /l*v ovs-install-log.txt
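After the MSI finishes, you can confirm that the two Open vSwitch services were registered and are running:

PS > Get-Service ovsdb-server, ovs-vswitchd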
Enable listening on TCP/6640.
PS > sc.exe config ovsdb-server binPath= --% "\"C:\Program Files\Open vSwitch\bin\ovsdb-server.exe\" --log-file=\"C:\Program Files\Open vSwitch\logs\ovsdb-server.log\" --pidfile --service --service-monitor --unixctl=\"C:\ProgramData\openvswitch\ovsdb-server.ctl\" --remote=punix:\"C:\ProgramData\openvswitch\db.sock\" --remote=ptcp:6640 \"C:\Program Files\Open vSwitch\conf\conf.db\""
PS > sc.exe stop ovs-vswitchd
PS > sc.exe stop ovsdb-server
PS > sc.exe start ovsdb-server
PS > sc.exe start ovs-vswitchd
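A quick way to confirm that ovsdb-server is now listening on TCP/6640 (checked locally here, but the same test works from the OpenStack controller):

PS > Test-NetConnection -ComputerName localhost -Port 6640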
Log off and log on again so that the new value of the %PATH% variable takes effect.
Create a Hyper-V VMSwitch and enable the Open vSwitch virtual switch extension on it.
Get-NetAdapter | select "Name"

Name
----
management
Ethernet

New-VMSwitch -Name vSwitchOVS -NetAdapterName "Ethernet" -AllowManagementOS $true
Get-VMSwitchExtension -VMSwitchName vSwitchOVS -Name "Open vSwitch Extension"
Enable-VMSwitchExtension -VMSwitchName vSwitchOVS -Name "Open vSwitch Extension"
Create basic Open vSwitch configuration.
ovs-vsctl.exe add-br br-ext
ovs-vsctl.exe add-port br-ext Ethernet
Assign a VXLAN endpoint IP address to br-ext.
Enable-NetAdapter br-ext
New-NetIPAddress -IPAddress <VXLAN_IPaddr> -InterfaceAlias br-ext -PrefixLength <example: 24>
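It is worth checking that the new address is up and the other VXLAN endpoints are reachable; <other_VTEP_IPaddr> below stands for the VXLAN address of the controller or another compute node:

Get-NetIPAddress -InterfaceAlias br-ext
Test-NetConnection <other_VTEP_IPaddr>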
Install the OpenStack Nova compute Hyper-V agent. The installer also contains the Neutron OVS agent. Enabling installer logging (/l*v nova-compute-install-log.txt) is not mandatory but can be useful at a later time.
msiexec /i HyperVNovaCompute_Ussuri_21_0_0.msi /l*v nova-compute-install-log.txt
Go through the installer wizard.
Alternatively, a silent or unattended install is also available. The manual install process can be found in the OpenStack Hyper-V virtualization platform document.
VM virtual network interface plugging is not handled correctly in my lab environment. After a VM is launched, it has no network connectivity and ovs-vsctl show reports an error.
ovs-vsctl show
...
        Port "932cb602-8ec0-43be-bed5-b690a8b1d99a"
            Interface "932cb602-8ec0-43be-bed5-b690a8b1d99a"
                error: "could not open network device 932cb602-8ec0-43be-bed5-b690a8b1d99a (No such device)"
...
The relevant code can be found in C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\Lib\site-packages\compute_hyperv\nova\vmops.py. Two functions, spawn() and power_on(), initiate VIF plugging. According to the code, the virtual network interface is added to the OVS bridge before Hyper-V creates it at VM power-on. The correct behavior is that the nova compute agent must wait until the VM is powered on (not only until the VM is created) before plugging the virtual network interface into the OVS bridge.
I wrote a small patch which powers on the VM first, then adds the virtual network interface to the OVS bridge. Apply the following patch to vmops.py and restart the nova-compute service.
--- orig/vmops.py	2020-10-06 02:15:08.000000000 +0200
+++ updated/vmops.py	2021-11-08 08:46:11.044979827 +0100
@@ -312,13 +312,10 @@
             self._create_ephemerals(instance, block_device_info['ephemerals'])
 
         try:
-            with self.wait_vif_plug_events(instance, network_info):
-                # waiting will occur after the instance is created.
-                self.create_instance(context, instance, network_info,
-                                     block_device_info, vm_gen, image_meta)
-                # This is supported starting from OVS version 2.5
-                self.plug_vifs(instance, network_info)
-
+            LOG.debug("Creating instance")
+            self.create_instance(context, instance, network_info,
+                                 block_device_info, vm_gen, image_meta)
+            LOG.debug("Updating device metadata")
             self.update_device_metadata(context, instance)
 
             if configdrive.required_by(instance):
@@ -337,6 +334,11 @@
                 self.power_on(instance,
                               network_info=network_info,
                               should_plug_vifs=False)
+            self.pause(instance)
+            with self.wait_vif_plug_events(instance, network_info):
+                LOG.debug("Plugging vifs")
+                self.plug_vifs(instance, network_info)
+            self.unpause(instance)
         except Exception:
             with excutils.save_and_reraise_exception():
                 self.destroy(instance, network_info, block_device_info)
@@ -1018,9 +1020,18 @@
             self._volumeops.fix_instance_volume_disk_paths(instance.name,
                                                            block_device_info)
 
+        self._set_vm_state(instance, os_win_const.HYPERV_VM_STATE_ENABLED)
         if should_plug_vifs:
+            LOG.debug("Pause instance", instance=instance)
+            self._set_vm_state(instance,
+                               os_win_const.HYPERV_VM_STATE_PAUSED)
+            LOG.debug("Unplug instance vif(s)", instance=instance)
+            self.unplug_vifs(instance, network_info)
+            LOG.debug("Plug instance vif(s)", instance=instance)
             self.plug_vifs(instance, network_info)
-        self._set_vm_state(instance, os_win_const.HYPERV_VM_STATE_ENABLED)
+            LOG.debug("Unpause instance", instance=instance)
+            self._set_vm_state(instance,
+                               os_win_const.HYPERV_VM_STATE_ENABLED)
 
     def _set_vm_state(self, instance, req_state):
         instance_name = instance.name
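The compute agent can be restarted from PowerShell; I assume here that the Cloudbase installer registered it under the service name nova-compute, so list the services first if it differs in your install:

PS > Get-Service nova-compute
PS > Restart-Service nova-compute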
The MSI installer does not configure provider network bridge mappings. Edit C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\neutron_ovs_agent.conf as follows.
[ovs]
...
bridge_mappings = 'provider:br-ext'
...
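The new bridge mapping only takes effect after the Neutron OVS agent is restarted; the service name below is an assumption, so check the installed services first if it does not match:

PS > Get-Service *neutron*
PS > Restart-Service neutron-ovs-agent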
Install FreeRDP-WebConnect, which is an RDP-HTML5 proxy that provides access to a virtual machine console from a web browser.
Go through the installer wizard.
The generated configuration file is located at C:\Program Files\Cloudbase Solutions\FreeRDP-WebConnect\etc\wsgate.ini.
The VM console URL can be obtained from the OpenStack command line.
nova get-rdp-console <VM instance> rdp-html5
Discover the new hypervisor node on the controller.
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
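If discovery succeeded, the new Hyper-V node should show up in the hypervisor list on the controller (assuming the openstack client is installed there):

openstack hypervisor list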
Orphaned, erroneous virtual network interfaces can be removed from the OVS bridge after a system restart or a Neutron agent restart using the following PowerShell snippet.
$interfaces = ovs-vsctl --format=csv find interface ofport="-1" | ConvertFrom-Csv | Select "name"
Foreach($i in $interfaces) {
    ovs-vsctl del-port br-int "$($i.name)"
}