Directing traffic to GCP vSensors
How to direct traffic to the GCP vSensor capture port using Google Network Security Integration (NSI), VPC Packet Mirroring, or 3rd party VXLAN-based packet brokers.
Directing Traffic to the Sensor
The input to the Sensor can come from GCP native technologies such as Network Security Integration (NSI) or VPC Packet Mirroring, and the Sensor also supports any VXLAN-based 3rd party packet broker. Google recommends NSI as the default for new deployments, but VPC Packet Mirroring is still supported. Vectra will add more guidance on NSI to this document in a future update. For now, if NSI will be used, please see the Google documentation linked above.
Please note that the load balancer required for use with 3rd party VXLAN-based packet brokers or VPC Packet Mirroring is created automatically by the deployment script. Do not modify the configuration of the load balancer. The load balancer's protocol is intentionally set to UDP, and GCP passes all mirrored traffic to it regardless of protocol. See here for detail:
3rd Party VXLAN-based Packet Brokers
The input to the Sensor can be a 3rd party packet broker that supports VXLAN encapsulation. To utilize such a solution, work with that 3rd party to configure their product and direct traffic to the Sensor's load balancer. To find that load balancer, you can use the gcloud CLI or the console GUI.
CLI:
❯ gcloud compute forwarding-rules list
NAME                                 REGION    IP_ADDRESS  IP_PROTOCOL  TARGET
sensor-lb-forwarding-tme-gcp-sensor  us-east4  10.200.0.3  UDP          us-east4/backendServices/tme-gcp-sensor-backend

Here you can see the forwarding rules and any IPs associated with them, along with the target. If you have several, make sure you are looking at the one associated with your $DEPLOYMENT_NAME-backend.
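If several forwarding rules exist in your project, a filter can narrow the listing to your deployment. A minimal sketch, assuming a hypothetical deployment name (substitute the one used at deploy time); the command is echoed for review rather than executed:

```shell
# Hypothetical deployment name; substitute the value used at deploy time.
DEPLOYMENT_NAME="tme-gcp-sensor"

# Compose a filtered listing that returns only the forwarding rule whose
# target is this deployment's backend service. Echoed for review here;
# remove the echo (or paste the printed command) to run it in your project.
CMD="gcloud compute forwarding-rules list --filter=target~${DEPLOYMENT_NAME}-backend"
echo "$CMD"
```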
GUI:
Navigate in the GCP console to Network services → Load balancing.

Select the Frontends tab for the Sensor you are targeting, and the forwarding rule and IP will be displayed.

VPC Packet Mirroring
Packet mirroring is a native functionality provided by GCP that clones the traffic of specified instances in your VPC network and forwards it for examination to solutions such as Vectra’s NDR (formerly Detect for Network). For additional information about packet mirroring from Google, please see the following articles:
It should be noted that GCP’s mirroring happens on the VM instances and not on the network. This means that enabling packet mirroring will consume additional resources on the source VMs that you are mirroring traffic from. If there are concerns about performance of the already deployed VMs, Vectra recommends that you work with Google to determine if upgrades are required to the source VM resources.
This document does not intend to be a replacement for Google’s own documentation or the expertise of your team that manages your GCP environment. We will share some basic facts and guidance from Google along with some examples. More complex environments may require architecture discussions internally and with Vectra and/or GCP.
Facts and Guidance
Sources can be specified by subnet, network tags, or individual instance names.
A collector destination is an instance group that is behind an internal load balancer.
This group and load balancer are set up by Vectra's deployment template.
When building a policy, you specify the collector destination by entering the name of the forwarding rule associated with that internal load balancer.
Filters can be applied based on protocol, IP address ranges, direction of traffic (ingress-only, egress-only, or both) or a combination.
GCP’s Packet Mirroring overview → Key properties section goes over a number of constraints and behaviors that are important to understand.
Of special note: if the collector destination is in a different VPC network than the mirrored sources, the VPC networks must be peered in both directions (reflexively) using VPC Network Peering.
When VPC networks are peered they share routes, but firewall rules still need to be configured to allow traffic from source instances to the destination instances that are part of the load balancer.
At a minimum, mirrored instances must have an egress rule that allows them to send traffic to the forwarding rule of the internal load balancer.
Collector instances (i.e. the Vectra Sensor) in the load balancer instance group must have an ingress rule that allows them to receive traffic from the mirrored instances or from the IP address range of the mirrored instances.
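As a minimal sketch of the egress/ingress pair described above, using hypothetical network names and placeholder ranges (the destination range stands in for the subnet containing your collector ILB forwarding rule, and the source range for your mirrored instances); the commands are echoed for review rather than executed:

```shell
# Hypothetical names and placeholder ranges; substitute your own values.
SOURCE_NET="tme-cognito-source"   # network of the mirrored instances
COLLECTOR_NET="tme-sensor-trf"    # network hosting the collector ILB
COLLECTOR_RANGE="10.200.0.0/24"   # placeholder: subnet containing the ILB forwarding rule
MIRRORED_RANGE="10.100.0.0/24"    # placeholder: IP range of the mirrored instances

# Mirrored sources need egress toward the collector; the collector needs
# ingress from the mirrored sources. Echoed for review; remove the echoes
# to execute them in your project.
EGRESS="gcloud compute firewall-rules create allow-mirror-egress --network=${SOURCE_NET} --direction=EGRESS --action=ALLOW --rules=all --destination-ranges=${COLLECTOR_RANGE}"
INGRESS="gcloud compute firewall-rules create allow-mirror-ingress --network=${COLLECTOR_NET} --direction=INGRESS --action=ALLOW --rules=all --source-ranges=${MIRRORED_RANGE}"
echo "$EGRESS"
echo "$INGRESS"
```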
Example Deployment
The screenshot below shows 3 different VPC networks that were configured for this deployment:
tme-sensor-mgt (management) – This is the network that the Sensor management interface is deployed into.
tme-sensor-trf (traffic) – This is the network that the Sensor traffic interface is deployed into.
tme-cognito-source (source) – This is a 3rd network, configured just to be a separate network used to generate source traffic.
The Sensor only requires 2 networks (they can be existing or new).

There are 3 instances in the deployment:

tme-gcp-sensor-instance-c4fx is the Sensor instance. It has 2 NICs:

traffic-source is an Ubuntu instance deployed on the source network.
traffic-source2 is an Ubuntu instance deployed on the traffic network. This network also includes the Sensor’s traffic interface.
Packet mirroring goals of the example deployment:
Mirror traffic on the management network so that the Sensor itself would be visible to Vectra NDR.
Mirror traffic on the source network to get visibility to the traffic-source instance.
Mirror traffic of the traffic-source2 instance to get visibility to it.
VPC network peering rules:
Peering was set up between the management and traffic networks (this must be done in both directions).
Peering was set up between the source and traffic networks (this must be done in both directions).
Peering was not required between the management and source networks, but we set it up anyway to allow for machine-to-machine communication as part of testing.

Creating peering connections:
Navigate in the GCP console to Networking → VPC network → VPC network peering and click on + CREATE PEERING CONNECTION.
Give the peering connection a name, select a source network, select a network to peer with, and choose options related to custom and subnet route import/export.
Click create and then do the same thing but reverse the source and peered network to create the reflexive relationship.
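The same reflexive pair can be sketched with the gcloud CLI, assuming the hypothetical network and peering names from this example (substitute your own); the commands are echoed for review rather than executed:

```shell
# Hypothetical network names from this example deployment; substitute your own.
MGT_NET="tme-sensor-mgt"
TRF_NET="tme-sensor-trf"

# Peering must be created in both directions. Echoed for review; remove
# the echoes to execute the commands in your project.
FWD="gcloud compute networks peerings create mgt-to-trf --network=${MGT_NET} --peer-network=${TRF_NET}"
REV="gcloud compute networks peerings create trf-to-mgt --network=${TRF_NET} --peer-network=${MGT_NET}"
echo "$FWD"
echo "$REV"
```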

Firewall rules:
Firewall rules were set up to allow egress to anywhere from each of the 3 VPC networks.
Firewall rules were set up to allow ingress from any of the 3 VPC networks.
Note that it was required to allow ingress from the network that each rule was applied to, not just from the other 2 networks.
Firewall rules were set up to allow tcp:22 from another location (blurred) for inbound SSH to the machines.

Creating firewall rules:
Navigate in the GCP console to Networking → VPC network → Firewall and click on + CREATE FIREWALL RULE.
Give the rule a name, select the network for the rule, direction of traffic, action on match (allow), select targets and source filter, specify protocols and ports, and click create.
An example allowing SSH is shown below:

This can also be done via the gcloud CLI by modifying the example below:
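A sketch of such a command, using a hypothetical rule name, network, and a placeholder documentation CIDR for the admin source range (substitute your own values); the command is echoed for review rather than executed:

```shell
# Hypothetical rule name, network, and source range; substitute your own values.
NETWORK="tme-sensor-mgt"
ADMIN_CIDR="203.0.113.0/24"   # placeholder documentation range, not a real office block

# Allow inbound SSH (tcp:22) from the admin range into this network.
# Echoed for review; remove the echo to execute it in your project.
CMD="gcloud compute firewall-rules create allow-ssh-inbound --network=${NETWORK} --direction=INGRESS --action=ALLOW --rules=tcp:22 --source-ranges=${ADMIN_CIDR}"
echo "$CMD"
```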
Packet mirroring policies:
Two policies were set up to mirror all traffic from the management and source networks.
A policy was set up to mirror traffic from the traffic-source2 instance.

Creating packet mirroring policies:
Navigate in the GCP console to Networking → VPC network → Packet mirroring and click on + CREATE POLICY.
On the 1. Define policy overview page, give the policy a name, select a region and click continue.

On the 2. Select VPC network page, follow the instructions and choose the flow appropriate to whether the mirrored source and collector destination are in the same VPC or in separate VPCs.
In this example we are mirroring the management network, so we pick the option for separate and select the traffic network as the destination and then click continue.

On the 3. Select mirrored source page, choose a mirror source by subnet, network tag, or individual instances.
In this example we want to mirror the entire management subnet, so we choose the subnet option and select the management subnet and then click continue.

On the 4. Select collector destination page, choose the forwarding rule that represents the Sensor you are targeting for the mirror and click continue.

On the 5. Select mirrored traffic page, choose which traffic you would like to mirror and click submit.

The mirroring policy is now complete and in effect.
The other two policies in our example were completed in a similar manner.
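The console walkthrough above can also be approximated with the gcloud CLI. A sketch for the management-subnet policy, assuming the hypothetical region, network, subnet, and forwarding-rule names from this example (substitute your own); the command is echoed for review rather than executed:

```shell
# Hypothetical names matching this example deployment; substitute your own.
REGION="us-east4"
SOURCE_NET="tme-sensor-mgt"       # network of the mirrored sources
SOURCE_SUBNET="tme-sensor-mgt"    # subnet to mirror
COLLECTOR_RULE="sensor-lb-forwarding-tme-gcp-sensor"  # forwarding rule of the Sensor's internal load balancer

# Create a policy mirroring the whole management subnet into the collector ILB.
# Echoed for review; remove the echo to execute it in your project.
CMD="gcloud compute packet-mirrorings create mirror-mgt --region=${REGION} --network=${SOURCE_NET} --mirrored-subnets=${SOURCE_SUBNET} --collector-ilb=${COLLECTOR_RULE}"
echo "$CMD"
```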
Validating traffic flow
Once you have completed the packet mirroring policies and have traffic flowing to the Sensor, you can validate the traffic flow using the GCP monitoring dashboard and in your Vectra NDR implementation.
Please see the following Vectra support article for recommendations on network traffic that should be examined and excluded from analysis:
To validate flow from the GCP side, follow the instructions from Google at the following page:
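From the CLI, a quick sanity check is confirming that the mirroring policies exist and inspecting one for its enabled state. A sketch assuming the hypothetical policy name and region from this example (substitute your own); the commands are echoed for review rather than executed:

```shell
# Hypothetical policy name and region from this example; substitute your own.
REGION="us-east4"
POLICY="mirror-mgt"

# List all mirroring policies, then inspect one to confirm it is enabled.
# Echoed for review; remove the echoes to execute them in your project.
LIST="gcloud compute packet-mirrorings list"
SHOW="gcloud compute packet-mirrorings describe ${POLICY} --region=${REGION}"
echo "$LIST"
echo "$SHOW"
```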
To validate flow from the Vectra side, there are several methods:
To see that packets are being received by the traffic interface, use SSH to log in to the CLI of the Sensor as the vectra user and run the show traffic stats command. Some SSH guidance was shared on page 2 of this document in the SSH Key Pair section.
Run show traffic stats several times to see that packet counts are increasing.
In the Vectra GUI if you navigate to Network Stats → Ingested Traffic, you can see the traffic graph for your Sensor.
For this graph to display, there must be at least 1 Mbps of traffic being captured.
Once traffic capture begins, it will take a few minutes for this graph to be populated. Use the CLI of the Sensor as shown above to validate that packets are flowing first.

The Network Stats → Observed IPs page shows the subnets being observed and numbers of hosts seen:

After sending traffic to your Sensors, it is a best practice to validate that the traffic observed meets quality standards required for accurate detection and processing. Vectra’s Enhanced Network Traffic Validation feature provides alarms and metrics that can be used to validate the quality of your traffic. Please see the following Vectra support article for details: