# KVM specific details and deployment

## KVM Specific Requirements

* **Vectra currently only supports KVM on Red Hat or Ubuntu Linux installations.**
* Ensure that `libvirt` (including `libvirt-client`) and `virt-install` are installed locally on the KVM host server.
* You must either be the root user or ensure that the user who will later launch the deployment script is in the `libvirt` group on the KVM host server.
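The requirements above can be checked with a short pre-flight sketch. The group name is an assumption (`libvirt` on most current distributions); adjust it if your release uses a different name:

```
#!/bin/sh
# Sketch: pre-flight check for the KVM host requirements above.
# Assumes the libvirt group is named "libvirt"; adjust per distribution.
missing=""
for tool in virsh virt-install; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "Missing tools:$missing (install libvirt-client and virt-install)"
else
  echo "Required tools found"
fi
if [ "$(id -u)" -eq 0 ] || id -nG | grep -qw libvirt; then
  echo "User can manage libvirt guests"
else
  echo "Add this user to the libvirt group before deploying"
fi
```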

## KVM Networking Guidance

* Ensure that the desired KVM networks (that will be used for management and capture) are present, persistent, active, and autostarted, using `virsh net-list --all`.
  * The default driver supported for the network interface is `virtio-pci`.
* Vectra recommends that customers create a new virtual network on the KVM host server from which they want to capture packets and set the interface to promiscuous mode.
  * This is the only configuration that Vectra officially tests and supports.
  * The Vectra vSensor(s) should be the only virtual machine(s) on this network, unless there are other similar virtual machines to which you want to provide the same set of traffic.
* KVM traffic mirroring may be implemented using either Open vSwitch or `tc`.
  * The steps to create a new mirror from an existing physical network interface to a new virtual one are as follows:

### **Open vSwitch**

```
# Define the interfaces and the mirror
BRIDGE="br0"
SRC="eth0"
DST="vnet0"
MIRROR="span"
# Create a bridge and set all ports up
ovs-vsctl add-br ${BRIDGE}
ip link set dev ${BRIDGE} up
ip link set dev ${SRC} up
ip link set dev ${DST} up
# Enable promiscuous mode on the source interface
ip link set ${SRC} promisc on
# Add the ports to the bridge and enable a traffic mirror from SRC to DST
ovs-vsctl \
-- add-port ${BRIDGE} ${SRC} \
-- add-port ${BRIDGE} ${DST} \
-- --id=@${SRC} get port ${SRC} \
-- --id=@${DST} get port ${DST} \
-- --id=@${MIRROR} create mirror \
 name=${MIRROR} \
 select-src-port=@${SRC} \
 select-dst-port=@${SRC} \
 output-port=@${DST} \
-- set bridge ${BRIDGE} mirrors=@${MIRROR}
```
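To inspect or later remove the mirror created above, the following sketch may help. Because `ovs-vsctl` requires root, a dry-run wrapper (an illustrative convention, not part of the product) prints the commands by default; set `DRY_RUN=0` to execute them:

```
#!/bin/sh
# Sketch: inspect and tear down the "span" mirror created above.
# DRY_RUN=1 (the default) only prints each command.
BRIDGE="br0"
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }
# Show configured mirrors and their statistics
run ovs-vsctl list mirror
# Detach all mirrors from the bridge; the orphaned mirror record is removed
run ovs-vsctl clear bridge ${BRIDGE} mirrors
```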

### `tc`

```
SRC_IFACE="eth0"
DST_IFACE="vnet0"
# ingress
tc qdisc add dev $SRC_IFACE ingress
tc filter add dev $SRC_IFACE parent ffff: \
          protocol all \
          u32 match u8 0 0 \
          action mirred egress mirror dev $DST_IFACE
# egress
tc qdisc add dev $SRC_IFACE handle 1: root prio
tc filter add dev $SRC_IFACE parent 1: \
          protocol all \
          u32 match u8 0 0 \
          action mirred egress mirror dev $DST_IFACE
```
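The `tc` mirror can be undone by deleting the two qdiscs added above; deleting a qdisc also removes the filters attached to it. A sketch, again using a dry-run wrapper since `tc` requires root (set `DRY_RUN=0` to execute):

```
#!/bin/sh
# Sketch: undo the tc mirror above. Deleting each qdisc also removes
# its attached filters. DRY_RUN=1 (the default) only prints the commands.
SRC_IFACE="eth0"
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }
run tc qdisc del dev "$SRC_IFACE" ingress
run tc qdisc del dev "$SRC_IFACE" root
```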

### Jumbo Frame Support

If your deployment requires jumbo frame support, note the following additional guidance:

* To process jumbo frames, the `MTU` needs to be set to `9000` on both the host bridge and the `vnet` interface used for capture.
* `virsh net-info <network>` shows the bridge associated with the network you have defined as your capture network. In the example below, `capture` is the network name being used for capture, and the bridge is `virbr1`.

```
root@kvm1:~# virsh net-info capture 
Name:           capture
UUID:           eb04c023-18d2-4aa5-93fd-de2e6e9f9036
Active:         yes
Persistent:     yes
Autostart:      yes
Bridge:         virbr1
```

* If the bridge is set to `9000` before deploying your Vectra KVM vSensor then the target device will be set to `9000` automatically during the vSensor deployment.
* If you need to set the `MTU` to `9000` after the vSensor has been deployed, `virsh dumpxml <sensor name>` will show the bridge and the associated target device that need to be set to `9000`. Replace `<sensor name>` with the name of your Vectra KVM vSensor.
  * Please note that the output of this command is long, and we only care about the interface sections.

<pre><code>root@kvm1:~# virsh dumpxml &#x3C;sensor name> 
...
    &#x3C;interface type='bridge'>
      &#x3C;mac address='52:54:00:34:63:37'/>
      &#x3C;source bridge='br0'/>
      &#x3C;target dev='vnet5'/>
      &#x3C;model type='virtio'/>
      &#x3C;alias name='net0'/>
      &#x3C;address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    &#x3C;/interface>
    &#x3C;interface type='network'>
      &#x3C;mac address='52:54:00:13:b8:a5'/>
      &#x3C;source network='capture' portid='4ca15199-5414-4356-b783-2ef1d04d205a' <a data-footnote-ref href="#user-content-fn-1">bridge='virbr1'</a>/>
      &#x3C;<a data-footnote-ref href="#user-content-fn-2">target dev='vnet6'</a>/>
      &#x3C;model type='virtio'/>
      &#x3C;alias name='net1'/>
      &#x3C;address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    &#x3C;/interface>
...
</code></pre>

* In the above example (the output was truncated to show only the relevant section), the annotated `bridge` and `target device` are `virbr1` and `vnet6`.
* The command to set the `MTU` to `9000` on an interface (bridge or target device) is `ip link set dev <interface> mtu 9000`. The command produces no output, as seen in the example below:

```
root@kvm1:~# ip link set dev vnet6 mtu 9000
```

* You can validate the current MTU settings with the `ip a` command. Please see abridged example below:

<pre><code>root@kvm1:~# ip a
1: lo: &#x3C;LOOPBACK,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: &#x3C;BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether 2c:ea:7f:78:aa:5b brd ff:ff:ff:ff:ff:ff
…
15: <a data-footnote-ref href="#user-content-fn-1">virbr1</a>: &#x3C;BROADCAST,MULTICAST,UP,LOWER_UP> <a data-footnote-ref href="#user-content-fn-3">mtu 9000</a> qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:0d:21:25 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.0/24 brd 192.168.100.255 scope global virbr1
       valid_lft forever preferred_lft forever
16: virbr1-
…
190: <a data-footnote-ref href="#user-content-fn-4">vnet6</a>: &#x3C;BROADCAST,MULTICAST,UP,LOWER_UP> <a data-footnote-ref href="#user-content-fn-3">mtu 9000</a> qdisc fq_codel master virbr1 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:13:b8:a5 brd ff:ff:ff:ff:ff:ff
</code></pre>
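Rather than scanning the full `ip a` listing, a single interface's MTU can be read from sysfs on Linux. A minimal sketch; the device name below is from the example output above and will differ on your host:

```
#!/bin/sh
# Sketch: verify an interface's MTU via sysfs (Linux only).
check_mtu() {
  dev=$1; want=$2
  got=$(cat "/sys/class/net/$dev/mtu" 2>/dev/null) || return 2
  [ "$got" = "$want" ]
}
# Hypothetical usage with the target device annotated above:
if check_mtu vnet6 9000; then
  echo "vnet6 MTU ok"
else
  echo "vnet6 MTU is not 9000 (or no such device)"
fi
```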

## Basic Interaction with KVM Virtual Machines

The easiest way to interact with KVM VMs from the command line is with the **`virsh`** family of commands. Vectra's deployment script will automatically start and configure the guest for you; however, these basic commands can prove helpful when diagnosing setup issues:

View the list of configured virtual machines:

`virsh list --all`

Start a virtual machine for which a configuration already exists:

`virsh start <name of vm>`

Stop gracefully (requests a clean shutdown from the guest OS):

`virsh shutdown <name of vm>`

Force off immediately without a graceful shutdown (typically used only when you are about to purge the VM):

`virsh destroy <name of vm>`

Purge the VM after stopping it (used in tandem with `destroy`):

`virsh undefine <name of vm>`

View the IP address(es) and MAC address(es) of a given virtual machine and its interfaces:

`virsh qemu-agent-command <name of vm> '{"execute":"guest-network-get-interfaces"}' | jq .`
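The stop-and-purge pair above can be combined into a small helper; a sketch only, and destructive, so treat the function name and behavior as illustrative:

```
#!/bin/sh
# Sketch: force off a VM and remove its definition. Destructive.
purge_vm() {
  vm=$1
  virsh destroy "$vm" 2>/dev/null || true  # ignore error if already stopped
  virsh undefine "$vm"
}
# Hypothetical usage:
# purge_vm my-old-vsensor
```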

## Downloading the KVM vSensor Image

The KVM vSensor image is available under *Configuration → COVERAGE → Data Sources → Network → Sensors* in your Vectra UI. Navigate to this area, click **Download Virtual Image** at the top right, and select the KVM vSensor (QCOW2) option.

![](https://content.gitbook.com/content/HJ1ltuWFvsArFWtevnRn/blobs/J8r9yrqCN2G2HVvgLFh6/KVM_vSensor_Deployment_Guide-2025_Aug_27-5.png)

## Deploying the KVM vSensor

Once downloaded, the machine image (a tar archive) can be placed in the directory of your choice and uncompressed. On an Ubuntu machine, the `tar -xf` command will uncompress the file fully; other tools may require two steps.

![](https://content.gitbook.com/content/HJ1ltuWFvsArFWtevnRn/blobs/H9oK1AWGqpAoY56vKwUN/KVM_vSensor_Deployment_Guide-2025_Aug_27-6.png)

The deployment is started with the `vectra-vsensor.sh` script, whose help can be displayed as shown below:

```
vadmin@kvm1:~/vsensor-6.12$ ./vectra-vsensor.sh -h
6.12.0-14-11
Vectra Networks
KVM deployment script for Vectra virtual appliances.
 
USAGE: ./vectra-vsensor.sh [NAME] [DISK] [DEST_PATH] [MGT_NETWORK] [CAPTURE_NETWORK] [CONFIG_ID]
 
All arguments may be optionally provided as environment variables.
 
ARGS:
NAME              The name of the VM to be created.
                      default: vectra-vsensor
DISK              The path to the qcow2 disk image to either attach directly or clone.
                      default: /home/vadmin/vsensor-6.12/vectra-vsensor.qcow2
DEST_PATH         The storage path for the new VM. If this is not the same directory as the disk image,
                    then the image will be copied to the path before attachment.
                      default: /home/vadmin/vsensor-6.12/
MGT_NETWORK       The KVM network name to be used for management.
                      default: User selection
CAPTURE_NETWORK   The KVM network name to be used for traffic capture.
                  Not applicable for brain and stream modes.
                      default: User selection
CONFIG_ID         The index of the VM config to use. Run without args to view the options
                      default: User selection
 
CONFIGURATIONS:
ID       Flavor   CPUs     Memory   Disk
0        2core    2        8192     100G
1        4core    4        8192     150G
2        8core    8        16384    150G
3        16core   16       65536    500G
 
NETWORKS:
ID   Name
0    capture
1    default
```

Execute the script to begin the deployment:

```
vadmin@kvm1:~/vsensor-6.12$ ./vectra-vsensor.sh tbilen-sensor
[Thu Oct 14 19:14:25 UTC 2021] - Setting VM configuration
ID       Flavor   CPUs     Memory   Disk
0        2core    2        8192     100G
1        4core    4        8192     150G
2        8core    8        16384    150G
3        16core   16       65536    500G
Select a configuration [0]: 0
 
[Thu Oct 14 19:14:30 UTC 2021] - Setting management NIC
ID   Name
0    capture
1    default
Select a management network [0]: 1
 
[Thu Oct 14 19:14:35 UTC 2021] - Setting capture NIC
ID   Name
0    capture
1    default
Select a capture network [0]: 0
 
[Thu Oct 14 19:14:41 UTC 2021] - Resizing disk
[Thu Oct 14 19:14:41 UTC 2021] - Creating VM: tbilen-sensor
[Thu Oct 14 19:14:42 UTC 2021] - Attaching brain data ISO
[Thu Oct 14 19:14:42 UTC 2021] - Adding capture NIC
[Thu Oct 14 19:14:42 UTC 2021] - Starting VM: tbilen-sensor
```

* As the output above shows, you will need to select a few options:
  * `VM configuration` - See the [KVM vSensor sizing](https://docs.vectra.ai/deployment/ndr-virtual-cloud-appliances/introduction-and-general-requirements#kvm-vsensor-requirements-and-throughput) in the earlier section of this guide for details on the expected performance and resource requirements for each configuration.
  * `Management NIC` – Which NIC to use for the management interface of the vSensor.
  * `Capture NIC` – Which NIC to use for the capture interface of the vSensor.
* Your KVM vSensor will start automatically once the deployment script has finished.
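Since the help output notes that all arguments may be provided as environment variables, the interactive prompts can be skipped in scripted deployments. A sketch using the values from the example run above; the names, networks, and script location are assumptions to adjust for your environment:

```
#!/bin/sh
# Sketch: non-interactive deployment. Values mirror the interactive
# example above; the script is assumed to be in the current directory.
export NAME=tbilen-sensor MGT_NETWORK=default CAPTURE_NETWORK=capture CONFIG_ID=0
if [ -x ./vectra-vsensor.sh ]; then
  ./vectra-vsensor.sh
else
  echo "vectra-vsensor.sh not found; run from the extracted vsensor directory"
fi
```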

[^1]: Bridge

[^2]: Target Device

[^3]: MTU

[^4]: Device
