# Brain deployment in VMware

{% hint style="info" %}
**Please Note for 32 core VMware Brains:**

Per the [performance and VMware requirements](https://docs.vectra.ai/deployment/ndr-virtual-cloud-appliances/introduction-and-requirements#performance-and-vmware-requirements) section, 32 core Brains may need their NUMA settings adjusted after deployment, before the initial power on. If deploying a 32 core Brain, please see [32 Core NUMA Configuration](#id-32-core-numa-configuration) for details on checking and, if required, setting the NUMA autosize parameter.
{% endhint %}

## Downloading the latest VMware Brain OVA image

The current Brain OVA image can be downloaded from the Vectra Customer Support Portal after logging in.

The URL for the download page is: <https://support.vectra.ai/vectra/additional-resources>.

* Click on the **Download** tab.
* Click on the **VMware Brain OVA File** and then the **Download File** link to download the image.
  * A SHA256 hash is also provided so you can verify the integrity of the downloaded image.
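On Linux or macOS, the published hash can be checked with `sha256sum` (or `shasum -a 256`). A minimal sketch, using an empty stand-in file and its well-known SHA256 value; in practice, substitute your downloaded OVA filename and the hash shown on the download page:

```shell
# Stand-in for the downloaded OVA (an empty file, for illustration only).
image="sample.ova"
: > "$image"

# Stand-in for the hash copied from the download page
# (this is the SHA256 of an empty file).
expected="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

# "sha256sum -c" reads "<hash>  <file>" lines and reports OK or FAILED.
echo "$expected  $image" | sha256sum -c -
```

A mismatch indicates a corrupt or incomplete download; re-download the image before deploying.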

{% hint style="info" %}
Always download a current copy when you go to deploy a new VMware Brain for your organization. This will save time during deployment as fewer updates will need to be downloaded afterwards. Make the file available via a URL or on the local filesystem of the machine where you will run the vSphere client.
{% endhint %}

<figure><img src="https://4227135129-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHJ1ltuWFvsArFWtevnRn%2Fuploads%2FCSkjG2Yy6QASOf2JhTQ0%2Fimage.png?alt=media&#x26;token=d265770c-876f-4308-a480-707614907190" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
**Please choose one of the following two deployment methods for the OVA:**

* [vSphere Client / vCenter Server](#deploying-the-ova-vsphere-client-vcenter-server) - Supports static or DHCP deployment.
* [Embedded host client for ESXi](#deploying-the-ova-embedded-host-client-for-esxi) - Supports DHCP deployment only.
  * You can still change to a static assignment for the management interface after initial deployment.
{% endhint %}

## Deploying the OVA (vSphere Client / vCenter Server)

* Right-click the host on which you wish to deploy the Brain and select **Deploy OVF Template…**
* Select the URL or Local file option depending on where you made the image available.
  * You can select the OVA itself, or, if you have extracted the OVA, select the `.ovf` and associated `.vmdk` files.
  * Click **Next**.
* Enter a virtual machine name and select a location for the virtual machine.
  * Click **Next**.
* Select a compute resource for the deployment.
  * Click **Next**.
* Review details and then click **Next**.
* Choose a Configuration and click **Next**.
  * Vectra may add additional configuration options in the future. Please refer to the [performance and VMware requirements](https://docs.vectra.ai/deployment/ndr-virtual-cloud-appliances/introduction-and-requirements#performance-and-vmware-requirements) section for details on the supported configurations.
  * When deploying a v8.1 and higher base image, new 4 and 6 core Respond UX specific configurations are available. **ONLY** choose these if you are doing a Respond UX for Network deployment.

![](https://content.gitbook.com/content/HJ1ltuWFvsArFWtevnRn/blobs/41jQ7mUTXnPZbYYMoKoM/VMware_Brain_Deployment_Guide-2025_Oct_7-27.png)

* Select storage.
  * Vectra recommends thick provisioning for storage (lazy or eager zeroed). Thin provisioning may work in some situations, such as lab systems that don’t require high throughput.
  * Storage DRS is not supported and should be disabled for this virtual machine.
  * During the initial booting process, Vectra does a performance test to determine how well the system performs in comparison to established baselines.
    * Results can be retrieved from the CLI using the `performance-test` command when logged in as the `vectra` user. Additional detail is available in [performance testing](https://docs.vectra.ai/deployment/ndr-virtual-cloud-appliances/post-deployment-guidance#performance-testing).
* Select the network for the `mgt1` (Management) interface and click **Next**.
* On the **Customize template** screen, please fill in the required details:
  * **DHCP** – Check this box if you want the `mgt1` (Management) interface to boot with DHCP enabled. If this option is chosen, the remaining fields can be left blank, provided DHCP will assign all of them.
  * **Hostname, IP Address, Netmask, Gateway, and DNS Servers** – Fill these in as required.
    * If a static IPv6 address is assigned during deployment, IPv6 support will be automatically enabled. Please see [IPv6 Management Support for Vectra Appliances](https://docs.vectra.ai/deployment/getting-started/ipv6-management-support-for-vectra-appliances) for more details.
  * **RespondUX** – Choose this option if you are doing a Respond UX for Network deployment.
    * This option should be selected for any Respond UX for Network deployment. That is, you still need to select this option even if you previously chose the **6CORE\_RespondUX** configuration.
    * When this option is selected, the Brain will boot directly into a state that is ready to be linked to the Vectra Cloud for use with the Respond UX. Unlike a standard VMware Brain deployment, no local Quadrant UX GUI will be served before the Brain is linked with Vectra for use with the Respond UX. Vectra personnel will still need to link your Brain to your Respond UX tenant to complete your deployment.

![](https://content.gitbook.com/content/HJ1ltuWFvsArFWtevnRn/blobs/IYpThZqsxqPe7QIzUo7G/VMware_Brain_Deployment_Guide-2025_Oct_7-28.png)

* Click **Next**.
* On the **Ready to complete** screen, validate all the details and when ready click **Finish**.
* The OVF package will be imported and deployed.
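The same vSphere/vCenter deployment can also be scripted with VMware's `ovftool`. A minimal command sketch, assuming illustrative inventory names (`datacenter1`, `cluster1`, `datastore1`, `VM Network`) and a vCenter at `vcenter.example.com`; the OVA's deployment configurations and OVF property keys (DHCP flag, hostname, and so on) can be listed by probing the image with `ovftool Brain.ova` and then passed with `--deploymentOption=<id>` and `--prop:<key>=<value>`:

```shell
# Sketch only: substitute your own vCenter address, inventory path,
# datastore, network, and OVA filename. All names below are illustrative.
# Add --deploymentOption=<id> and --prop:<key>=<value> using the ids and
# keys reported when probing the OVA with "ovftool Brain.ova".
ovftool \
  --acceptAllEulas \
  --name="vectra-brain" \
  --datastore="datastore1" \
  --diskMode=thick \
  --network="VM Network" \
  Brain.ova \
  "vi://administrator@vsphere.local@vcenter.example.com/datacenter1/host/cluster1/"
```

This performs the same import as the **Deploy OVF Template…** wizard; the interactive flow above remains the documented method.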

## Deploying the OVA (embedded host client for ESXi)

* Select **Create/Register VM** on your host. This will open a **New virtual machine** window.
* Select **Deploy a virtual machine from an OVF or OVA file** and click **Next**.
* Enter a name for the virtual machine and then select or drag and drop your downloaded `.ova` file.
  * Click **Next**. If you see a warning about a disk being ignored, it can safely be disregarded.
* Select the storage location for your VM and click **Next**.
* On the Deployment options screen, configure the following:
  * Network mappings – Choose the vSwitch to deploy the `mgt1` (Management) interface into.
    * As mentioned in the [Licensing and Deployment Overview](https://docs.vectra.ai/deployment/ndr-virtual-cloud-appliances/vmware-brain/licensing-and-brain-deployment-overview), DHCP is the only option supported by VMware when using the embedded host client for ESXi.
  * Deployment type – Choose the configuration you wish to deploy. Please refer to [performance and VMware requirements](https://docs.vectra.ai/deployment/ndr-virtual-cloud-appliances/introduction-and-requirements#performance-and-vmware-requirements) for details on the supported configurations.
    * When deploying a v8.1 and higher base image, new 4 and 6 core Respond UX specific configurations are available. **ONLY** choose these if you are doing Respond UX for Network deployment.

![](https://content.gitbook.com/content/HJ1ltuWFvsArFWtevnRn/blobs/R0QM1tWLqNWyZL7s52KE/VMware_Brain_Deployment_Guide-2025_Oct_7-24.png)

* Unlike the vSphere Client / vCenter Server deployment described above, the ESXi embedded host client offers no option to deploy directly into a Respond UX enabled state.
  * Vectra personnel will still need to convert your Brain to a state that is ready to be linked to the Vectra Cloud for use with the Respond UX.
* Disk provisioning – Choose **Thin** or **Thick** provisioning. Vectra recommends thick provisioning for storage (lazy or eager zeroed). Thin provisioning may work in some situations, such as lab systems that don’t require high throughput.
  * During the initial booting process, Vectra does a performance test to determine how well the system performs in comparison to established baselines.
    * Results can be retrieved from the CLI using the `performance-test` command when logged in as the `vectra` user. Additional detail is available in [performance testing](https://docs.vectra.ai/deployment/ndr-virtual-cloud-appliances/post-deployment-guidance#performance-testing).
* Choose if you wish to automatically power on the VM after deployment.
* Click **Next**.
* On the **Ready to complete** screen, validate all the details and when ready click **Finish**.
* The VM will be created quickly and then the disks will be uploaded.

## 32 Core NUMA Configuration

{% hint style="info" %}
**Please Note:**

This section applies to the 32 core Brain only. No changes are required for other Brain sizes.
{% endhint %}

VMware provides guidance for [Using NUMA Systems with ESXi](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/vsphere-resource-management-7-0/using-numa-systems-with-esxi.html) in the linked documentation, and [Virtual NUMA Controls](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/vsphere-resource-management-7-0/using-numa-systems-with-esxi/using-virtual-numa/virtual-numa-controls.html) documents the relevant parameters. The `numa.autosize.vcpu.maxPerVirtualNode` parameter controls NUMA configuration for Vectra VMware VMs. Vectra cannot set this parameter at the `.ova` level, and on some 32 core VMware Brains (this varies by the underlying hardware platform), the parameter must be set by the customer after VM deployment, or errors will be seen during boot of the VM.

If the VM reboots frequently (every 3 to 4 minutes) and the output of `show system-health` at the CLI of your VMware Brain contains a message about NUMA, this is the issue. To avoid it, check the parameter before powering on the VM for the first time, and set it if required.

`numa.autosize.vcpu.maxPerVirtualNode` is an advanced parameter in VMware vSphere/ESXi. It controls how many vCPUs ESXi can automatically assign to a virtual NUMA node when handling wide VMs. By default, ESXi sets and manages this internally based on host NUMA topology, VM sizing, and hypervisor defaults. For a 32 core Brain, `numa.autosize.vcpu.maxPerVirtualNode` should be set to `16`, so that each NUMA node gets an equal number of vCPUs.
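The arithmetic behind the value of `16` can be sketched as follows. This is a simplified model of how vCPUs split into virtual NUMA nodes, not ESXi's exact placement logic: with 32 vCPUs, a cap of 16 per node yields two balanced 16-vCPU nodes, while a larger cap produces an uneven split.

```python
def virtual_numa_nodes(vcpus: int, max_per_node: int) -> list[int]:
    """Simplified sketch: fill each virtual NUMA node with up to
    max_per_node vCPUs (an approximation of ESXi's autosize behavior)."""
    nodes = []
    remaining = vcpus
    while remaining > 0:
        take = min(max_per_node, remaining)
        nodes.append(take)
        remaining -= take
    return nodes

# 32 vCPU Brain with maxPerVirtualNode=16: two balanced 16-vCPU nodes.
print(virtual_numa_nodes(32, 16))  # [16, 16]
# A larger cap would leave the nodes unbalanced.
print(virtual_numa_nodes(32, 24))  # [24, 8]
```

Balanced virtual NUMA nodes let the 32 vCPUs map evenly onto the host's physical NUMA topology, which is why the parameter must read `16` before first power on.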

To check the parameter and set it if required:

* Go to **VM Options > Advanced**, edit the configuration parameters, and locate `numa.autosize.vcpu.maxPerVirtualNode`:

![](https://content.gitbook.com/content/HJ1ltuWFvsArFWtevnRn/blobs/4c2SXsTgUyXqAyXaIhfZ/VMware_vSensor_Deployment_Guide-2025_Oct_8-5.png)

* If the setting is `16`, you are done and can close the parameters/VM options.
* If the setting is not `16`, change it to `16` and save the configuration.
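The vSphere UI steps above are the documented method. As a standard VMware alternative, the parameter is also stored in the VM's `.vmx` file on the datastore and can be checked or set there, but only while the VM is powered off. A sketch against a sample file; in practice, point `vmx` at your VM's actual file (for example under `/vmfs/volumes/<datastore>/<vm>/`, a path that varies per deployment):

```shell
# Demonstration against a sample .vmx file; in practice, set "vmx" to your
# VM's file on the datastore and edit it only while the VM is powered off.
vmx="sample.vmx"
printf 'numvcpus = "32"\n' > "$vmx"   # stand-in for the real config

# Show the current value, if the key is present at all.
grep '^numa\.autosize\.vcpu\.maxPerVirtualNode' "$vmx" || echo "key not set"

# Remove any existing entry, then append the required value of 16.
sed -i '/^numa\.autosize\.vcpu\.maxPerVirtualNode/d' "$vmx"
echo 'numa.autosize.vcpu.maxPerVirtualNode = "16"' >> "$vmx"

# Confirm the result.
grep '^numa\.autosize\.vcpu\.maxPerVirtualNode' "$vmx"
```

After saving, power on the VM; if `show system-health` no longer reports a NUMA message, the setting took effect.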
