Hypervisors

Role access

Only administrators have access to this feature.

Types of hypervisors

IsardVDI supports different hypervisor configurations based on deployment architecture and video traffic routing requirements. Understanding these types is essential for proper network configuration and performance optimization.

Hypervisor Flavour

The hypervisor flavour includes integrated video proxy services that allow direct client connections to the hypervisor for desktop streaming:

  • Integrated video services: Includes isard-video proxy for handling client video connections
  • Direct client access: Clients connect directly to the hypervisor for video/display traffic
  • Self-contained: Complete hypervisor solution with built-in video proxy capabilities
  • Letsencrypt support: Can generate its own SSL certificates for secure video connections
  • Optimal performance: Eliminates video proxy overhead by serving clients directly

Network characteristics:

  • Video traffic flows directly from clients to hypervisor
  • Reduces network latency and central proxy load
  • Each hypervisor manages its own video certificates and proxy

Hypervisor-Standalone Flavour

The hypervisor-standalone flavour operates without integrated video services, relying on a central video proxy:

  • No integrated video proxy: Does not include isard-video service
  • Centralized video routing: All video traffic is proxied through the main IsardVDI video service
  • Simplified deployment: Minimal hypervisor footprint focused on VM execution
  • Central management: Video certificates and proxy configuration managed centrally

Network characteristics:

  • Video traffic flows: Client → Central IsardVDI → Hypervisor-Standalone
  • Requires VIDEO_HYPERVISOR_HOSTNAMES configuration on the central IsardVDI server
  • Requires ports 5900-7899 TCP open on hypervisor-standalone for video proxy access
  • Central proxy handles all SSL termination and video routing
  • Hypervisor only needs outbound connectivity to central services

VPN Tunneling Modes

New Feature

VPN tunneling mode configuration is available for hypervisors to optimize network performance and compatibility.

IsardVDI supports two different VPN tunneling modes for hypervisors, allowing administrators to choose the most appropriate networking configuration based on their infrastructure requirements:

WireGuard + Geneve (Default)

This is the default and recommended tunneling mode that provides:

  • Full VPN encryption: All traffic between the hypervisor and main IsardVDI server is encrypted using WireGuard
  • Secure remote connections: Ideal for hypervisors located in different networks or over the internet
  • Automatic Geneve tunnel setup: Creates overlay networks for virtual desktop traffic
  • High security: Best choice when hypervisors are in untrusted networks

Use cases:

  • Hypervisors in remote locations
  • External hypervisors over the internet
  • Multi-site deployments
  • When maximum security is required

Geneve Only

This mode uses only Geneve tunneling without WireGuard encryption:

  • Direct network connectivity: Assumes hypervisors can directly reach the main server
  • Lower overhead: Reduces encryption overhead for better performance
  • Simplified networking: Easier to troubleshoot and monitor
  • Local deployments: Ideal for trusted local networks

Use cases:

  • Hypervisors in the same datacenter
  • Trusted local network environments
  • High-performance requirements with minimal overhead
  • Networks where encryption is handled at infrastructure level

Configuration

The VPN tunneling mode can be configured when creating or editing a hypervisor:

  1. Through Web UI: Select the desired tunneling mode in the hypervisor configuration form
  2. Through API: Set the vpn_tunneling_mode field to either "wireguard+geneve" or "geneve"
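Since only two values are accepted for the field, it can be worth validating the value before submitting it. A minimal sketch of such a check, as a shell helper (the helper name is ours, not part of IsardVDI; only the two field values come from the API description above):

```shell
#!/bin/bash
# Sketch: validate a vpn_tunneling_mode value before sending it to the
# hypervisor API. The helper is illustrative; the two accepted values
# are the ones documented above.
valid_tunneling_mode() {
  case "$1" in
    wireguard+geneve|geneve) return 0 ;;
    *) echo "invalid vpn_tunneling_mode: $1" >&2; return 1 ;;
  esac
}
```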

Important considerations:

Network Requirements

Required Ports by Tunneling Mode:

WireGuard + Geneve mode:

  • 4443 UDP: WireGuard tunnel from hypervisor to isard-vpn (configurable via WG_HYPERS_PORT)
  • 2022 TCP: SSH control from isard-engine to hypervisor
  • 5900-7899 TCP: Video/display traffic to hypervisor-standalone (not required for hypervisor flavour)

Geneve Only mode:

  • 6081 UDP: Direct Geneve tunnel from hypervisor to isard-vpn (standard Geneve port)
  • 2022 TCP: SSH control from isard-engine to hypervisor
  • 5900-7899 TCP: Video/display traffic to hypervisor-standalone (not required for hypervisor flavour)

Port Configuration

  • WireGuard port can be customized via WG_HYPERS_PORT in isardvdi.cfg (default: 4443)
  • Geneve uses the standard UDP port 6081 for tunnel encapsulation
  • All ports must be accessible between hypervisor and IsardVDI server
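For example, moving the WireGuard tunnel to a non-default port is a one-line change in isardvdi.cfg (a sketch; 4443 is the documented default, shown here explicitly):

```shell
# isardvdi.cfg - customize the WireGuard port used by hypervisors
# (default is 4443; Geneve always uses the standard UDP port 6081)
WG_HYPERS_PORT=4443
```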

Performance vs Security

Choose WireGuard + Geneve for maximum security and Geneve Only for maximum performance in trusted networks.

Technical Details

  • Geneve tunneling: Creates overlay networks using UDP encapsulation (port 6081)
  • WireGuard encryption: Provides state-of-the-art VPN encryption when enabled (port 4443 by default)
  • Automatic configuration: IsardVDI automatically configures the chosen tunneling mode
  • OVS integration: Both modes integrate seamlessly with Open vSwitch for network management

Port Requirements Summary

| Traffic Type  | WireGuard+Geneve | Geneve Only   | Direction                             | Open/Forward Where         | Description               |
|---------------|------------------|---------------|---------------------------------------|----------------------------|---------------------------|
| VPN Tunnel    | 4443/UDP         | 6081/UDP      | Hypervisor → IsardVDI                 | IsardVDI server (incoming) | Primary tunnel connection |
| SSH Control   | 2022/TCP         | 2022/TCP      | IsardVDI → Hypervisor                 | Hypervisor (incoming)      | Engine management         |
| Video/Display | 5900-7899/TCP    | 5900-7899/TCP | Client → Hypervisor                   | Hypervisor-standalone only | Desktop streaming         |
| Storage Tasks | 6379/TCP         | 6379/TCP      | Hypervisor (isard-storage) → IsardVDI | IsardVDI server (incoming) | Redis/storage operations  |
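The summary table can be condensed into a small helper for firewall scripting. A sketch (the function names are ours, the port numbers come from the table; note that 6379/TCP only applies when the hypervisor also runs isard-storage, and 5900-7899/TCP only applies to the hypervisor-standalone flavour):

```shell
#!/bin/bash
# Sketch: inbound ports needed on each side, per the summary table.
# Function names are hypothetical; port values come from the table above.
server_inbound_ports() {  # ports the IsardVDI server must accept
  case "$1" in
    wireguard+geneve) echo "4443/udp 6379/tcp" ;;  # 6379 only with isard-storage
    geneve)           echo "6081/udp 6379/tcp" ;;
    *) return 1 ;;
  esac
}
hyper_inbound_ports() {   # ports the hypervisor must accept
  # 5900-7899/tcp only applies to the hypervisor-standalone flavour
  echo "2022/tcp 5900-7899/tcp"
}
```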

Firewall and Port Forwarding Configuration

For WireGuard + Geneve mode:

# On IsardVDI server - forward incoming connections to specific container IPs
iptables -t nat -A PREROUTING -p udp --dport 4443 -j DNAT --to-destination 172.31.255.23:4443
# OR using firewalld:
firewall-cmd --add-forward-port=port=4443:proto=udp:toaddr=172.31.255.23:toport=4443 --permanent

# On hypervisor - forward incoming SSH connections to hypervisor container
iptables -t nat -A PREROUTING -p tcp --dport 2022 -j DNAT --to-destination 172.31.255.17:22
# OR using firewalld:
firewall-cmd --add-forward-port=port=2022:proto=tcp:toaddr=172.31.255.17:toport=22 --permanent

For Geneve Only mode:

# On IsardVDI server - forward incoming connections to specific container IPs
iptables -t nat -A PREROUTING -p udp --dport 6081 -j DNAT --to-destination 172.31.255.23:6081
iptables -t nat -A PREROUTING -p tcp --dport 6379 -j DNAT --to-destination 172.31.255.12:6379
# OR using firewalld:
firewall-cmd --add-forward-port=port=6081:proto=udp:toaddr=172.31.255.23:toport=6081 --permanent
firewall-cmd --add-forward-port=port=6379:proto=tcp:toaddr=172.31.255.12:toport=6379 --permanent

# On hypervisor - forward incoming SSH connections to hypervisor container
iptables -t nat -A PREROUTING -p tcp --dport 2022 -j DNAT --to-destination 172.31.255.17:22
# OR using firewalld:
firewall-cmd --add-forward-port=port=2022:proto=tcp:toaddr=172.31.255.17:toport=22 --permanent

Docker Port Forwarding Details

These ports must be forwarded to the specific Docker container IP addresses in the IsardVDI network:

On IsardVDI server side (default network: 172.31.255.0/24):

  • WireGuard (4443/UDP) → Forward to 172.31.255.23:4443 (isard-vpn container)
  • Geneve (6081/UDP) → Forward to 172.31.255.23:6081 (isard-vpn container)
  • Redis (6379/TCP) → Forward to 172.31.255.12:6379 (isard-redis container, only if the hypervisor also runs isard-storage)

On hypervisor side (default network: 172.31.255.0/24):

  • SSH (2022/TCP) → Forward to 172.31.255.17:22 (isard-hypervisor container)

Additional ports for hypervisor-standalone (video/display access):

# On hypervisor-standalone - forward video/display ports for direct client access
iptables -t nat -A PREROUTING -p tcp --dport 5900:7899 -j DNAT --to-destination 172.31.255.17:5900-7899
# OR using firewalld (example for specific ports):
firewall-cmd --add-forward-port=port=5900:proto=tcp:toaddr=172.31.255.17:toport=5900 --permanent
# ... repeat for range 5900-7899 as needed

Notes:

  • These IPs are configurable via the DOCKER_NET environment variable (default: 172.31.255)
  • Hypervisors only need outgoing connections allowed for tunnel establishment
  • Use docker network inspect isard-network to verify current container IPs
  • The router/firewall must forward external ports to these internal container IPs

Video Proxy Configuration for hypervisor-standalone

When using hypervisor-standalone flavour, you must configure the video proxy settings:

In isardvdi.cfg on the IsardVDI server:

# Allow hypervisor-standalone hosts to be proxied for video traffic
# Note: if the main IsardVDI is an all-in-one flavour with a hypervisor,
#       you should also add to the list the isard-hypervisor name itself.
VIDEO_HYPERVISOR_HOSTNAMES=hypervisor1.example.com,hypervisor2.example.com
VIDEO_HYPERVISOR_PORTS=5900-7899

Security considerations:

  • Only add trusted hypervisor hostnames to VIDEO_HYPERVISOR_HOSTNAMES
  • This setting controls which hypervisors the IsardVDI video proxy will accept connections from
  • It is a comma-separated list of hostnames as seen from the IsardVDI server
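As a sketch of how such a comma-separated allow-list behaves, a hostname must match one entry exactly; a partial match is not enough. The helper below is illustrative only, not part of IsardVDI:

```shell
#!/bin/bash
# Sketch: check whether a hostname appears in a comma-separated
# allow-list, mimicking exact-entry matching for
# VIDEO_HYPERVISOR_HOSTNAMES. Illustrative helper, not IsardVDI code.
is_allowed_hyper() {
  local host="$1" allow_list="$2"
  case ",${allow_list}," in
    *",${host},"*) return 0 ;;
    *)             return 1 ;;
  esac
}
```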

Manual Install

If you want to use another host as hypervisor/hypervisor-standalone, make sure it has at least shared storage. You can also share the certs folder if the domain used to access them all is the same (which is the usual case):

  • /opt/isard/templates
  • /opt/isard/groups
  • /opt/isard/media
  • /opt/isard/volatile (note: you can avoid sharing this and use local storage on the hypervisor)
  • /opt/isard/storage_pools
  • /opt/isard/certs
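A minimal sketch of sharing those folders over NFS from the hypervisor's /etc/fstab, assuming a NAS reachable at nas.example.com that exports /exports/isard (both names are illustrative assumptions):

```shell
# /etc/fstab on the hypervisor host - mount the shared IsardVDI folders.
# nas.example.com and /exports/isard are illustrative names.
nas.example.com:/exports/isard/templates      /opt/isard/templates      nfs4  defaults  0 0
nas.example.com:/exports/isard/groups         /opt/isard/groups         nfs4  defaults  0 0
nas.example.com:/exports/isard/media          /opt/isard/media          nfs4  defaults  0 0
nas.example.com:/exports/isard/storage_pools  /opt/isard/storage_pools  nfs4  defaults  0 0
nas.example.com:/exports/isard/certs          /opt/isard/certs          nfs4  defaults  0 0
# /opt/isard/volatile may stay on fast local storage instead of the NAS
```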

VLANS in hypervisor

To connect a trunk interface with VLAN definitions inside the hypervisor, set (and uncomment) the following variables in the isardvdi.cfg file:

IMPORTANT NOTE: Remember to rebuild the docker-compose files by running the ./build.sh command again.

  • HYPERVISOR_HOST_TRUNK_INTERFACE: This is the host network trunk interface name as seen in the host. For example: eth1

If you don't set HYPERVISOR_STATIC_VLANS, the hypervisor will auto-detect VLANs for 260 seconds each time the isard-hypervisor container starts. It is therefore better to define the static VLANs you know are on the trunk.

  • HYPERVISOR_STATIC_VLANS: Set a comma-separated list of VLAN numbers. Setting this avoids the 260-second VLAN auto-detection.

This will add the detected VLANs into the database as 'vXXX', where XXX is the VLAN number found. For this to work, check that STATS_RETHINKDB_HOST can be reached on port 28015 from the hypervisor.

Check that you have the correct hostname for the host running isard-db. This is only needed if the hypervisor is on another machine:

  • STATS_RETHINKDB_HOST: Set it to the correct isard-db host if the hypervisor is on another machine. Not needed with an 'all-in-one' IsardVDI.
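Putting the three variables together, a trunk on eth1 carrying VLANs 100, 200 and 300 would look like this in isardvdi.cfg (the interface name, VLAN numbers and hostname are example values):

```shell
# isardvdi.cfg - trunk interface with static VLANs (example values)
HYPERVISOR_HOST_TRUNK_INTERFACE=eth1
HYPERVISOR_STATIC_VLANS=100,200,300
# Only needed when the hypervisor runs on a different machine than isard-db:
STATS_RETHINKDB_HOST=isard.example.com
```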

NOTE: The host interface must be in promiscuous mode: ip link set dev ethX promisc on. If the isard-hypervisor container is already started, you have to stop it and start it again.

Infrastructure concepts

When you add an external hypervisor you should be aware of (and configure as needed) the following:

  • Disks path: By default disks are stored in /opt/isard. That folder doesn't need to be created beforehand, but it is recommended to mount fast I/O storage (such as an NVMe disk) at that path.
  • Network performance: If you are going to use a NAS, take into account that a fast network is required. A network speed above 10Gbps is recommended.

Network performance

The storage network between hypervisors and NAS storage servers should be at least 10Gbps. All hypervisors should mount storage at the /isard path, and that mount should be identical across all hypervisors.

Since a hypervisor can be created as a pure hypervisor, a pure disk-operations host, or a mixed one, you can add NAS storage servers as pure disk-operations hosts. This speeds up disk manipulation, as IsardVDI will run storage commands directly on the NAS.

We run an IsardVDI infrastructure with six hypervisors and two self-made, Pacemaker-clustered NAS servers that share storage over NFSv4, and the performance is very good.

High Availability / High Performance tech docs

We try to keep all the knowledge and experience of running IsardVDI in high performance and high availability clusters (using Pacemaker, DRBD, disk caches, live migrations, etc.) in the Project Deploy section of the manual.

That section of the manual includes videos of storage and live virtual desktop migrations made on the IsardVDI infrastructure: a virtual desktop was migrated from one hypervisor to another while, at the same time, its storage was migrated from one NAS server to another, without the desktop user noticing anything.

Hypervisor Auto-Registration and VPN Setup Flow

Understanding the Process

This section explains how hypervisors automatically register themselves with the IsardVDI system and establish secure network connectivity.

When a hypervisor container starts with proper configuration, the following automated self-registration process occurs:

1. Hypervisor Self-Registration

graph TD
    A[Hypervisor starts with cfg environment] --> B[Hypervisor contacts IsardVDI API]
    B --> C[API registers/updates hypervisor in database]
    C --> D[Event triggers isard-vpn configuration update]
    D --> E[Hypervisor requests final configuration from API]
    E --> F[Hypervisor configures VPN based on tunneling mode]
    F --> G[Hypervisor performs self-checks]
    G --> H[Hypervisor auto-enables itself in database]
    H --> I[isard-engine connects via SSH for verification]
    I --> J[Engine sets final hypervisor status]

Step-by-step process:

  1. API Registration: Hypervisor contacts IsardVDI API with its setup environment variables from configuration
  2. Database Update: API creates or updates hypervisor entry in the IsardVDI database with configuration parameters
  3. VPN Event Trigger: Database change triggers event in isard-vpn to update hypervisor networking configuration
  4. Configuration Retrieval: Hypervisor requests its final configuration from the API (WireGuard keys, network settings, etc.)

2. VPN Configuration and Self-Setup

The hypervisor configures its own VPN setup based on the tunneling mode received from the API:

WireGuard + Geneve Mode

# 1. Hypervisor receives WireGuard configuration from API
# 2. Hypervisor configures WireGuard interface (wg-hypers)
# 3. WireGuard tunnel established to isard-vpn (4443/UDP by default)
# 4. Geneve overlay network created over WireGuard tunnel
# 5. OVS bridges configured for virtual machine networking

Configuration flow:

  • Hypervisor requests WireGuard configuration from IsardVDI API
  • isard-vpn provides unique WireGuard keys and peer configuration
  • Hypervisor creates and configures WireGuard interface (wg-hypers)
  • Geneve tunnels are established over the encrypted WireGuard connection
  • Open vSwitch ports are created and flow rules configured

Geneve Only Mode

# 1. Hypervisor resolves IsardVDI server hostname/IP from configuration
# 2. Direct Geneve tunnel established to isard-vpn (6081/UDP)
# 3. OVS bridges and ports configured

Configuration flow:

  • Hypervisor directly connects to isard-vpn container IP (172.31.255.23:6081)
  • Geneve tunnel interface created without encryption layer
  • OVS ports and flow rules configured for direct networking

3. Self-Verification and Auto-Enable

Hypervisor self-checks:

  • Network connectivity tests to IsardVDI server
  • OVS bridge and tunnel functionality verification
  • Service availability checks (libvirtd, storage if applicable)
  • Certificate and security validation

Auto-enable process:

  • Hypervisor updates its own status to "enabled" in the database
  • Signals readiness to accept virtual desktop workloads
  • Begins listening for management connections on port 2022

4. Engine Verification and Final Status

External verification by isard-engine:

  • isard-engine connects to hypervisor via SSH (port 2022)
  • Verifies libvirtd service is running and accessible
  • Sets final hypervisor status in database (online/offline/error)

5. Certificate and Security Setup

Automatic certificate management:

  • Viewer certificates downloaded from IsardVDI server
  • TLS certificates for secure hypervisor communication
  • Certificate validation and renewal processes

6. Health Monitoring and Maintenance

Ongoing processes:

  • Regular health checks via SSH from isard-engine
  • Certificate renewal and updates
  • VPN tunnel monitoring and automatic reconnection
  • OVS flow rule updates based on virtual desktop lifecycle

Troubleshooting Common Issues

Connection failures:

  • Verify SSH key authentication between isard-engine and hypervisor
  • Check firewall rules allow required ports (2022/TCP, 4443/UDP or 6081/UDP)
  • Ensure proper DNS resolution between hypervisor and IsardVDI server

VPN tunnel issues:

  • Check isard-vpn container logs for WireGuard/Geneve errors
  • Verify container IP forwarding rules are correctly configured
  • Confirm DOCKER_NET environment matches actual container IPs

OVS configuration problems:

  • Check for duplicate port creation errors in hypervisor logs
  • Verify proper interface ofport assignment
  • Ensure no conflicting network configurations

Monitoring the Setup Process

Useful commands for monitoring:

# Check hypervisor connection status
docker logs isard-engine | grep <hypervisor_hostname>

# Monitor VPN tunnel establishment
docker logs isard-vpn | grep -E "(peer|tunnel|geneve)"

# Verify OVS configuration
docker exec isard-hypervisor ovs-vsctl show

# Check active WireGuard connections (if using WireGuard mode)
docker exec isard-vpn wg show