# Introduction and requirements

## Introduction

This guide is intended to help customers or partners deploy a virtual Brain appliance in VMware environments. A VMware Brain appliance can be used in Vectra AI Platform deployments that use either the Respond UX or the Quadrant UX. The Respond UX is served from Vectra’s cloud and the Quadrant UX is served locally from the Brain appliance. For more detail on Respond UX vs Quadrant UX please see [Vectra Analyst User Experiences (Respond vs Quadrant)](https://docs.vectra.ai/deployment/getting-started/analyst-ux-options-rux-vs-qux).

This guide will cover basic background information, connectivity requirements (firewall rules that may be needed in your environment), licensing, deployment, and next steps. One of the below guides should be the starting point for your overall Vectra deployment:

* [Vectra Respond UX Deployment Guide](https://docs.vectra.ai/deployment/getting-started/respond-ux-deployment-guide)
* [Vectra Quadrant UX Deployment Guide](https://docs.vectra.ai/deployment/getting-started/quadrant-ux-deployment)

## Demo Deployment Video

{% embed url="https://vimeo.com/729362095/0b6a7f5ae5" %}

## About VMware Brain Images and Updates

The .ova image used to deploy a Brain in VMware is made available on the [Vectra Customer Portal](https://support.vectra.ai/vectra/login), which is part of [Vectra Support](https://support.vectra.ai/vectra/). Vectra periodically updates the base image used for VMware Brain deployment.

{% hint style="info" %}
It is a best practice to always download the latest image from the Vectra Customer Portal prior to deployment of a new VMware Brain.
{% endhint %}

Brains that are connected to Vectra are updated automatically according to the settings on that Brain. Offline updates are also possible, but only for Quadrant UX deployments. Please see [Offline Updates](https://docs.vectra.ai/operations/readme-1/offline-updates-v89) for instructions on how to apply offline updates.

## VMware Brain Requirements and Performance

### General Requirements

* IP address and subnet mask for the Management interface of the Brain.
* DNS server addresses.
* Current login to a fully approved Vectra Support Portal account.
  * Accounts that are self-registered and not fully approved on the Vectra Support Portal will not have the license request option enabled.
* An open Proof of Value (Proof of Concept or Trial) that you are working on with Vectra or a Vectra partner, or a valid entitlement to Vectra NDR through purchase.
  * The licensing system cannot provide licenses for customers who are not currently entitled to a license through a trial or purchase.
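
Before requesting a license or deploying the image, it can help to confirm from a host on the management network that the planned DNS servers resolve external names. The sketch below is illustrative only, assuming the `dnspython` package is installed; the server addresses and test name are placeholders to replace with your own values.

```python
# Pre-deployment sanity check: verify that the DNS servers planned for the
# Brain's Management interface can resolve external names.
# The addresses and test name below are placeholders, not Vectra values.
import dns.resolver  # pip install dnspython

DNS_SERVERS = ["10.0.0.53", "10.0.0.54"]   # your DNS server addresses
TEST_NAME = "support.vectra.ai"            # any externally resolvable name

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = DNS_SERVERS
resolver.lifetime = 5  # seconds before giving up

try:
    answer = resolver.resolve(TEST_NAME, "A")
    print(f"{TEST_NAME} resolves to: {[r.to_text() for r in answer]}")
except Exception as exc:
    print(f"DNS check failed: {exc}")
```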

{% hint style="info" %}
All VMware Brains support being deployed on VMware vSphere versions 6.5 through 8.
{% endhint %}

### Performance and VMware Requirements

**For use in any Respond UX or Quadrant UX deployments:**

| Performance<sup>2</sup>                    | 2 Gbps         | 4 Gbps         | 10 Gbps              |
| ------------------------------------------ | -------------- | -------------- | -------------------- |
| CPU                                        | 8 Cores        | 16 Cores       | 32 Cores<sup>1</sup> |
| Memory                                     | 64 GB RAM      | 128 GB RAM     | 256 GB RAM           |
| Drive (OS, Data) – Requires 260 MB/s       | 128 GB, 512 GB | 128 GB, 512 GB | 128 GB, 512 GB       |
| Max Paired Sensors                         | 15             | 25             | 100                  |
| Max Simultaneous Tracked Hosts<sup>3</sup> | 50,000         | 50,000         | 150,000              |

**For use ONLY in Respond UX for Network deployments:**

* A Respond UX for Network deployment means using network Sensors with the Respond UX.

| Performance<sup>2</sup>                    | 150 Mbps       | 500 Mbps       |
| ------------------------------------------ | -------------- | -------------- |
| CPU                                        | 4 Cores        | 6 Cores        |
| Memory                                     | 48 GB RAM      | 48 GB RAM      |
| Drive (OS, Data) – Requires 260 MB/s       | 128 GB, 512 GB | 128 GB, 512 GB |
| Max Paired Sensors                         | 5              | 10             |
| Max Simultaneous Tracked Hosts<sup>3</sup> | 25,000         | 37,500         |

{% hint style="info" %}
**Footnotes from above tables:**

<sup>**1**</sup> Please see [32 Core NUMA Configuration](https://docs.vectra.ai/deployment/ndr-virtual-cloud-appliances/brain-deployment-in-vmware#id-32-core-numa-configuration) for details on checking and, if required, setting the NUMA configuration for 32-core Brains.

<sup>**2**</sup> Performance represents the aggregate bandwidth observed on the capture interfaces of any Sensors that are paired to the Brain. Guidance is for average traffic mixes. Traffic mixes that skew toward larger flows (like file transfers) will perform better than traffic mixes that skew toward smaller flows (like DNS), because many small flows produce more metadata for the Brain to process.

<sup>**3**</sup> Refers to how many hosts the Brain can track simultaneously (open host sessions). Brains retain and display data for larger numbers of hosts; this limit only refers to how many hosts the system can process metadata for simultaneously.
{% endhint %}
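
As a rough illustration of how the tables above translate into a Brain size, the sketch below encodes the published rows and picks the smallest profile that covers an estimated aggregate Sensor bandwidth, paired Sensor count, and tracked host count. The profile list and helper are illustrative only, not a Vectra tool; the thresholds simply restate the tables.

```python
# Illustrative sizing helper: encodes the Brain sizing tables above and picks
# the smallest profile that covers the estimated load. Not a Vectra tool.
from dataclasses import dataclass

@dataclass
class BrainProfile:
    name: str
    max_mbps: int          # aggregate Sensor capture bandwidth
    cpu_cores: int
    memory_gb: int
    max_sensors: int
    max_tracked_hosts: int

PROFILES = [
    # Respond UX for Network only
    BrainProfile("150 Mbps (Respond UX for Network only)", 150, 4, 48, 5, 25_000),
    BrainProfile("500 Mbps (Respond UX for Network only)", 500, 6, 48, 10, 37_500),
    # Respond UX or Quadrant UX
    BrainProfile("2 Gbps", 2_000, 8, 64, 15, 50_000),
    BrainProfile("4 Gbps", 4_000, 16, 128, 25, 50_000),
    BrainProfile("10 Gbps", 10_000, 32, 256, 100, 150_000),
]

def pick_profile(mbps: int, sensors: int, tracked_hosts: int) -> BrainProfile | None:
    """Return the smallest profile that satisfies all three estimates."""
    for p in PROFILES:
        if mbps <= p.max_mbps and sensors <= p.max_sensors and tracked_hosts <= p.max_tracked_hosts:
            return p
    return None

if __name__ == "__main__":
    choice = pick_profile(mbps=1_500, sensors=8, tracked_hosts=30_000)
    print(choice.name if choice else "Exceeds single virtual Brain sizing; contact Vectra.")
```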

{% hint style="warning" %}
**Special Note regarding Vectra supported VMware hardware versions:**

* Vectra supports only versions 11 and 15 of VMware hardware.
* **DO NOT** update the hardware version if offered during deployment or in any other situation.
* If you move to an unsupported hardware version, contact Vectra support for guidance. Downgrades may be possible but are not officially supported. Support will be best effort in these situations.
{% endhint %}

{% hint style="warning" %}
**Please Note:**

The virtual CPU **MUST** support the pdpe1gb CPU flag (1 GB Large Pages) – [More information](https://www.intel.com/content/www/us/en/support/articles/000090980/processors.html), a minimum SSE instruction level of 4.2, and the POPCNT (population count) instruction. This requires the hypervisor host to be running one of the following processor generations:

* Intel Nehalem (2008) or newer
* AMD Bulldozer (2011) or newer
* Check [VMware’s Enhanced vMotion Compatibility (EVC Explained) article](https://blogs.vmware.com/vsphere/2019/06/enhanced-vmotion-compatibility-evc-explained.html) for details on EVC settings that may mask the underlying physical CPU’s required flags. Change EVC settings if required.
{% endhint %}
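
If you are unsure whether these flags are exposed to guests (for example, because an EVC baseline is in effect), one way to check is to read `/proc/cpuinfo` from any Linux VM on the target host or cluster. The snippet below is a minimal sketch, not a Vectra-provided tool:

```python
# Check whether the CPU flags required by a virtual Brain are visible to a
# Linux guest on the target host/cluster: pdpe1gb (1 GB pages), sse4_2, popcnt.
REQUIRED_FLAGS = {"pdpe1gb", "sse4_2", "popcnt"}

with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

missing = REQUIRED_FLAGS - flags
if missing:
    print(f"Missing CPU flags: {', '.join(sorted(missing))} -- check host CPU and EVC mode.")
else:
    print("All required CPU flags (pdpe1gb, sse4_2, popcnt) are present.")
```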

{% hint style="info" %}
**Please Note:**

vMotion is compatible with VMware Brains, but moving the VM to new hardware or copying it can cause VMware to generate a new UUID, which invalidates the Brain license and requires relicensing.

* If VMware prompts you for a choice, always answer **I moved it** (or **Keep it**) rather than copying, so the existing UUID is retained and relicensing is avoided. See these VMware KBs for more details:
  * [Changing or keeping a UUID for a moved virtual machine](https://knowledge.broadcom.com/external/article/320246/changing-or-keeping-a-uuid-for-a-moved-v.html)
  * [Migrating VMs with vSphere vMotion](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/8-0/vcenter-and-host-management-8-0/migrating-virtual-machines-host-management/migration-with-vmotion-host-management.html)
{% endhint %}
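
If you want to confirm that a migration kept the Brain VM's UUID, one option is to record `vm.config.uuid` from vCenter before the move and compare it afterwards. The sketch below assumes the `pyvmomi` package and uses placeholder vCenter credentials and a placeholder VM name (`vectra-brain`); it is an illustration, not a Vectra-supplied script.

```python
# Read a VM's BIOS UUID from vCenter with pyVmomi so it can be recorded before
# a migration and compared afterwards. Hostname, credentials, and the VM name
# below are placeholders -- substitute your own.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.com"
USER = "administrator@vsphere.local"
PASSWORD = "changeme"
VM_NAME = "vectra-brain"

ctx = ssl._create_unverified_context()  # lab convenience only; validate certs in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name == VM_NAME:
            print(f"{vm.name} UUID: {vm.config.uuid}")
            break
    else:
        print(f"VM {VM_NAME!r} not found.")
    view.Destroy()
finally:
    Disconnect(si)
```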

{% hint style="info" %}
**Additional Notes:**

* Vectra VMware-based Brains do **NOT** support Mixed Mode deployment.
  * They can only be used in Brain mode.
* Vectra VMware-based Brains support running in FIPS mode.
  * Note that the underlying hardware must also be FIPS compliant (it must support the RDRAND CPU instruction).
* Vectra recommends that Brains are configured to use storage local to the hypervisor and are not stored on a SAN.
  * Vectra Brains require extremely high throughput from their disk storage, and this throughput cannot normally be sustained by SAN systems without impacting other SAN users.
* See VMware deployment details and considerations (the next section in this guide) for additional guidance around Storage/SANs, networking requirements, vMotion, Enhanced vMotion Compatibility, and unsupported hypervisors.
{% endhint %}
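
The tables above call out a 260 MB/s drive throughput requirement, which is also why local hypervisor storage is recommended over a SAN. One rough sanity check is to time a large sequential write from a Linux VM whose virtual disk lives on the candidate datastore. The sketch below is only an approximation (buffered writes plus fsync, not a substitute for a proper benchmark), and the file path is a placeholder:

```python
# Rough sequential-write throughput check for a candidate datastore, run from a
# Linux VM whose virtual disk lives on that datastore. Writes ~2 GiB in 8 MiB
# chunks, fsyncs, and reports MB/s. The path below is a placeholder.
import os
import time

TEST_FILE = "/tmp/brain_disk_test.bin"   # place this on the disk under test
CHUNK = b"\0" * (8 * 1024 * 1024)        # 8 MiB per write
TOTAL_BYTES = 2 * 1024 ** 3              # ~2 GiB total

start = time.monotonic()
with open(TEST_FILE, "wb") as f:
    written = 0
    while written < TOTAL_BYTES:
        f.write(CHUNK)
        written += len(CHUNK)
    f.flush()
    os.fsync(f.fileno())                 # force data to disk before timing stops
elapsed = time.monotonic() - start

mb_per_s = (written / (1000 * 1000)) / elapsed
print(f"Wrote {written / 1024 ** 3:.1f} GiB in {elapsed:.1f} s ≈ {mb_per_s:.0f} MB/s")
print("Target for a virtual Brain's data disk is ~260 MB/s sustained.")
os.remove(TEST_FILE)
```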
