vSensor specifications
CPU, memory, disk, and throughput sizing guidance for vSensors.
vSensor Requirements Summary
VMware ESX (vCenter/vSphere)
2 vCores / 8 GB RAM / 100 GB disk / 2 Capture Ports* / 500 Mbps Monitoring
4 vCores / 8 GB RAM / 150 GB disk / 2 Capture Ports* / 1 Gbps Monitoring
8 vCores / 16 GB RAM / 150 GB disk / 4 Capture Ports / 2 Gbps Monitoring
16 vCores / 64 GB RAM / 600 GB disk** / 4 Capture Ports / 5 Gbps Monitoring
32 vCores / 114 GB RAM / 830 GB disk** / 4 Capture Ports*** / 20 Gbps Monitoring
*See notes below about capture port configurations
**16 core requires 600 GB storage, 32 core requires 830 GB storage.
***32 core requires additional configuration. Please check the "Required configuration parameters in advanced VM options for ONLY the 32 core vSensor" below.
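The sizing table above can be expressed as a simple lookup. The following is a hypothetical helper (not part of any Vectra tooling) that returns the minimum resources for each supported vSensor size:

```python
# Hypothetical sizing helper based on the vSensor requirements table above.
# The dictionary and function names are illustrative, not a Vectra API.
VSENSOR_SPECS = {
    2:  {"ram_gb": 8,   "disk_gb": 100, "capture_ports": 2, "throughput": "500 Mbps"},
    4:  {"ram_gb": 8,   "disk_gb": 150, "capture_ports": 2, "throughput": "1 Gbps"},
    8:  {"ram_gb": 16,  "disk_gb": 150, "capture_ports": 4, "throughput": "2 Gbps"},
    16: {"ram_gb": 64,  "disk_gb": 600, "capture_ports": 4, "throughput": "5 Gbps"},
    32: {"ram_gb": 114, "disk_gb": 830, "capture_ports": 4, "throughput": "20 Gbps"},
}

def requirements(vcores: int) -> dict:
    """Return the minimum resource requirements for a given vSensor size."""
    try:
        return VSENSOR_SPECS[vcores]
    except KeyError:
        raise ValueError(f"unsupported vCore count: {vcores}") from None
```

For example, `requirements(16)` returns the 64 GB RAM / 600 GB disk / 4 capture port / 5 Gbps profile.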
Virtual Sensor (vSensor) CPU usage
Vectra vSensors may run their CPU at close to 100% when bandwidth usage is close to the specified limit of the sensor. This is expected and normal behavior.
It is critical that vSensors do not have their CPU and RAM usage restricted. Restricting sensor resources in this manner will negatively affect both sensor capture performance and sensor system stability.
vSensor Resource Requirements
CPU
vSensors can be configured with 2, 4, 8, 16, or 32 vCores, yielding capture performance of 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, and 20 Gbps respectively.
The virtual CPU must support a minimum SSE instruction level of 4.2 and must support the POPCNT instruction. The modern Intel AVX and AVX2 instruction specifications include SSE 4.2 support. This requires the hypervisor host to be running one of the following processors or later:
Intel Silvermont processors
Intel Goldmont processors
Intel Nehalem processors and newer
Intel Haswell processors and newer
AMD Bulldozer-based processors and newer
AMD Jaguar-based processors and newer
AMD Piledriver-based processors and newer
Note the restriction below regarding Enhanced vMotion Compatibility, where SSE/AVX support may be masked from a hypervisor CPU due to the cluster composition.
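Whether a host CPU exposes the required features can be checked before deployment. The sketch below is a hypothetical pre-flight check, not Vectra tooling: on a Linux hypervisor host the feature flags can be read from /proc/cpuinfo, and the parsing is factored out so it can run against any flags string.

```python
# Hypothetical pre-deployment check for the SSE 4.2 and POPCNT CPU
# features the vSensor requires. Function names are illustrative.
REQUIRED_FLAGS = {"sse4_2", "popcnt"}

def missing_cpu_flags(cpuinfo_text: str) -> set:
    """Return the required CPU flags absent from a /proc/cpuinfo dump."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return REQUIRED_FLAGS - flags

def host_supports_vsensor(cpuinfo_text: str) -> bool:
    """True when both sse4_2 and popcnt are present."""
    return not missing_cpu_flags(cpuinfo_text)
```

Remember that under Enhanced vMotion Compatibility the flags presented to the guest may be narrower than what the physical CPU supports, so the check is only meaningful against the CPU features the cluster actually exposes.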
RAM
The 2 and 4 vCore models require at least 8 GB of RAM, the 8 vCore model requires at least 16 GB of RAM, the 16 vCore model requires 64 GB of RAM, and the 32 vCore model requires 114 GB of RAM. Lower RAM allocations are not supported and will affect both system performance and system stability.
Storage
The 2 vCore model requires at least 100 GB of available storage, while the 4 and 8 vCore models require 150 GB of available storage space.
The 16 and 32 vCore models will deploy with 150 GB of storage due to limitations of the VMware OVA deployment methodology. These vSensors will need to have their storage manually increased by the administrator from 150 GB to 600 GB for the 16 vCore and 830 GB for the 32 vCore.
Vectra recommends that sensors are configured to use storage local to the hypervisor. If you are considering deploying your vSensor on remote storage please read the note below regarding running vSensors on Storage Area Networks.
Capture Ports
Each capture port can capture traffic from all VLANs on one vSwitch.
vSensors with 8 GB of RAM support two virtual capture ports; this includes 2-core and 4-core vSensors in the default CPU/RAM configuration.
vSensors with a minimum of 10 GB of RAM support four virtual capture ports; this includes the 8-core vSensor in the default CPU/RAM configuration.
2-core and 4-core vSensors support up to four virtual capture ports if the VM settings are adjusted to have a minimum of 10 GB of RAM.
For the 32-core vSensor, see the section "Required configuration parameters in advanced VM options for ONLY the 32 core vSensor" below.
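The RAM-based capture port limits above can be sketched as a small function. This is an illustrative summary of the rules stated in this section, not a Vectra API:

```python
# Hypothetical sketch of the capture-port limits described above:
# two virtual capture ports at 8 GB of RAM, four at 10 GB or more.
def max_capture_ports(ram_gb: int) -> int:
    """Return the virtual capture port limit for a given RAM allocation."""
    if ram_gb < 8:
        raise ValueError("vSensors require at least 8 GB of RAM")
    return 4 if ram_gb >= 10 else 2
```

So a 2-core vSensor raised from the default 8 GB to 10 GB of RAM moves from two to four capture ports, matching the guidance above.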
VMware networking requirements
vSwitch (vNetwork Standard Switch or VSS) Portgroup requirements
The promiscuous mode must be set to "Accept" within the port group's configuration settings (Edit Settings -> Security -> Promiscuous Mode).
Within the vSensor VM settings, the VLAN ID (port group, vSwitch, or adapter) must be set to All (4095) to ensure no packets are filtered by VMware before reaching the virtual Sensor. See the step-by-step guide below for explicit instructions.
dvSwitch (vNetwork Distributed Switch or VDS) Portgroup requirements
The promiscuous mode must be set to "Accept" to ensure that the virtual Sensor is able to receive packets.
VLAN type must be set to VLAN trunking with the VLAN trunk range set to 0-4094 to ensure that no packets are filtered by VMware before reaching the virtual Sensor.
Required configuration parameters in advanced VM options for ONLY the 32 core vSensor
Shut down the vSensor if it is already powered on.
Ethernet Configuration
Edit the settings in VMware (embedded ESXi client or vCenter/vSphere client) for the Sensor.
Go to VM Options > Advanced and then edit the configuration parameters and add two new parameters:
If the link speed will be 10 Gbps, the linkspeed parameter is NOT required, but setting it is a best practice.
The default link speed for a capture NIC is 10 Gbps; modify this as required for your deployment.
20 Gbps is the maximum throughput for the Sensor. The link speed can be set to match the physical NIC associated with this interface if capturing physical traffic, or to the aggregate bandwidth required if combining multiple sources into one feed.
Examples for 1st capture port (Network Adapter 2) (MGT is Network Adapter 1 / eth0):
Name/Key: ethernet1.pNicFeatures
Value: 4
Name/Key: ethernet1.linkspeed
Value: 40000 – This represents a 40 Gbps link speed, adjust as needed for your required link speed.

Repeat adding both parameters for any additional capture NICs (max of 4 capture NICs).
Use ethernet2 for the 2nd capture port, ethernet3 for the 3rd capture port, etc.
Save the configuration.
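For reference, with all four capture NICs configured at the default 10 Gbps, the resulting advanced parameters would look like the following, shown as .vmx-style key/value pairs. The linkspeed values here are examples and should match your deployment:

```
ethernet1.pNicFeatures = "4"
ethernet1.linkspeed = "10000"
ethernet2.pNicFeatures = "4"
ethernet2.linkspeed = "10000"
ethernet3.pNicFeatures = "4"
ethernet3.linkspeed = "10000"
ethernet4.pNicFeatures = "4"
ethernet4.linkspeed = "10000"
```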
NUMA Configuration
This section applies to the 32 core vSensor only. No changes are required for other vSensor sizes.
VMware provides guidance for Using NUMA Systems with ESXi in the linked documentation. Virtual NUMA Controls documents the parameters. The numa.autosize.vcpu.maxPerVirtualNode parameter controls NUMA configuration for Vectra VMware VMs. Vectra cannot set this parameter at the .ova level, and on some 32 core VMware vSensors (this varies by the underlying hardware platform used), the parameter must be set by the customer after VM deployment or errors will be seen during boot of the VM.
If the VM reboots frequently (every 3 to 4 minutes) and the output of "show system-health" at the CLI of your VMware vSensor contains a message about NUMA, then you know this is the issue. To avoid it, check for the proper setting of the parameter before powering on the VM, and set it if required.
numa.autosize.vcpu.maxPerVirtualNode is an advanced parameter in VMware vSphere/ESXi. It controls how many vCPUs ESXi can automatically assign to a NUMA node when it is handling wide VMs. By default, ESXi sets and manages this internally based on host NUMA topology, VM sizing, and hypervisor defaults. The value of numa.autosize.vcpu.maxPerVirtualNode should be set to 16, so that each NUMA node can get an equal number of vCPUs.
To check the parameter and set it if required:
Go to VM Options > Advanced, then edit the configuration parameters and find numa.autosize.vcpu.maxPerVirtualNode:
If the setting is 16, you are done and can close the parameters/VM options.
If the setting is not 16, change it to 16, save the configuration, and power on the vSensor.
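For reference, the resulting advanced parameter, shown as a .vmx-style key/value pair, is below. With 32 vCPUs and 16 vCPUs per virtual NUMA node, this yields two virtual NUMA nodes with an equal number of vCPUs each:

```
numa.autosize.vcpu.maxPerVirtualNode = "16"
```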
Deployment Considerations
vMotion / Moving / Copying
Vectra recommends that vMotion be disabled for vSensors; it can be used with VMware Brains. vMotion should not change the UUID of VMware Brain VMs. However, if the Brain VM is migrated to another location or lands on different underlying hardware, it may get a new UUID. In that case the VMware Brain will need to be relicensed when it is rebooted because it will have a new UUID. Please see the following Broadcom (VMware) KBs:
vSensors should be considered 'local' to the hypervisor they are deployed on and should be configured to cover all relevant VMs on that hypervisor. vSensors should be deployed proactively for each hypervisor and remain on that hypervisor even if other VMs on that hypervisor are moved to other hypervisors. Affinity rules should be used to keep the vSensor local to the hypervisor it was deployed on.
In situations where you are not fully deploying vSensors to have full coverage of your virtual environment (i.e. a limited deployment PoV), customers have successfully used affinity rules to keep a vSensor with a workload (e.g. an AD server) and participate in vMotion, and anti-affinity rules to avoid having more than one vSensor on the same physical host. These cases should be the exception rather than the rule, and are not recommended for production deployments.
vSensors may be shut down safely by the administrator if their services are not required on the hypervisor on which they are currently deployed.
Enhanced vMotion Compatibility
Vectra has become aware that certain VMware cluster configurations will reduce the available CPU feature flags due to Enhanced vMotion Compatibility (EVC). With this feature enabled, the cluster inhibits CPU features on all CPUs so that all hypervisors in the cluster present the same CPU feature flags.
In these configurations, it is possible for the hypervisor to disable SSE4.2 support (which the vSensor requires to operate normally) even when the underlying hypervisor CPU supports it.
Further information on EVC including strategies for enabling EVC for vSensors or disabling vMotion/EVC for the vSensor are available through your normal VMware support channel.
Unsupported hypervisors/virtual environments
The following hypervisors are not currently supported by vSensors:
VMware Workstation
VMware Fusion
Xen
The following network adapters are not currently supported by vSensors:
DirectPath, SR-IOV Passthrough, or emulated network adapters
Running with reduced CPU/RAM allowance
The minimum requirements for vSensors are hard limits; running with less RAM or CPU than the minimum required may cause the vSensor to become unstable or unresponsive.
The vSensors will use almost all of the CPUs and require all RAM to be permanently available. The hypervisor should not be permitted to limit the CPU and RAM to the vSensor as this will significantly degrade the performance of the sensor and will affect packet processing or stability.
Storage Area Networks
Vectra recommends the vSensor storage is local to the hypervisor.
Virtual sensors write the full incoming packet stream to disk as a rolling buffer for packet capture retrieval by the Brain. The bandwidth of these writes is often a problem for network storage and may negatively affect the performance of the vSensor or other systems using the SAN.
Special consideration should be given to SAN replication systems, where vSensor disk images are replicated between SAN nodes. Due to the high disk throughput of vSensors, replicating these disk images may be extremely expensive for SAN replication systems.
Should deployment to a SAN be necessary due to hypervisor architecture, the SAN should be scaled to accommodate the full incoming packet capture stream at minimal latency.
For example, a sensor capturing 2 Gbps may write up to 250 MB/second to the storage subsystem on a continuous basis.
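The arithmetic behind that example is a straight bandwidth-to-bytes conversion, sketched below for sizing other capture rates (illustrative helper, not Vectra tooling):

```python
# Convert a capture bandwidth in Gbps to the approximate sustained disk
# write rate in MB/s: 1 Gbps = 1000 Mbit/s, and 8 bits per byte.
def capture_write_rate_mb_per_s(gbps: float) -> float:
    """Approximate sustained disk write rate for a given capture bandwidth."""
    return gbps * 1000 / 8
```

A 2 Gbps capture rate works out to 250 MB/s, matching the example above; a 5 Gbps (16 vCore) sensor would sustain roughly 625 MB/s.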
If the deployment requires network-attached storage, particular attention should be paid to:
Minimizing the network infrastructure between the hypervisor and the SAN.
Ensuring that the SAN is capable of both reading and writing this data on a continuous basis.
Provisioning the full storage space for the vSensor; do not deploy sparse disks.
Disk deduplication is likely to be ineffective due to the unique data written to disk by each vSensor.
Further reading
Check this deployment guide which contains more detailed information: VMware vSensor Deployment Guide