Deploying IVM Console Server to AWS using load balancers and reverse scan engines

Are we doing this correctly, or is there a better way?

We have deployed an IVM Console Server in AWS, and we are trying to get scan engines in multiple data centers and various AWS accounts talking to the Console Server using the reverse scan engine configuration (engine-initiated connections to the console). We are looking for guidance or recommendations on how to do this correctly, since we could not find clear documentation on how to configure the AWS environment for this kind of deployment.

Based on AWS's Well-Architected Framework, the secure way to expose an EC2 asset to the Internet is through a load balancer (ideally you never expose an EC2 instance directly to the Internet, and instances should sit in a private subnet within a VPC). AWS currently offers four distinct load balancer types: the ALB (Application Load Balancer), the Gateway Load Balancer, the NLB (Network Load Balancer), and the Classic Load Balancer. We were able to get the ALB working with the IVM Console Server's HTTPS web interface, but we could not get it to route scan engine traffic over tcp/40815, presumably because the ALB is a Layer 7 load balancer that only accepts HTTP/HTTPS listeners, so there is no way to define a plain TCP listener for the engine port.
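For context, this is roughly how we attached the console's web UI to the ALB, sketched with boto3 (the ARNs and the 3780 back-end port are placeholders/assumptions for our environment, not an exact copy of our setup):

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# HTTPS listener on the ALB, forwarding to a target group that points at the
# console's web UI. This part works. ALB listeners only accept HTTP or HTTPS
# as the protocol, which is why we could not define an equivalent listener
# for the raw TCP scan-engine traffic on 40815.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/ivm-console-alb/placeholder",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111111111111:certificate/placeholder"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/ivm-console-ui/placeholder",
    }],
)
```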

We had some success getting all of the traffic to route through a Classic Load Balancer, but Classic Load Balancers do not provide static IP addresses (the addresses behind the DNS name can change), which could be a problem for the egress firewall rules at the data centers where the scan engines are deployed.

We are now trying to configure an NLB with static IP addresses (Elastic IPs attached to the load balancer's subnets), and we think this will work, but we have lost a lot of time troubleshooting connectivity, so we are looking for additional clarity and guidance. A rough sketch of what we are attempting is below.
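Here is the NLB setup we are attempting, sketched with boto3; the subnet, Elastic IP allocation, VPC, and instance IDs are placeholders for our environment:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Internet-facing NLB with one Elastic IP per availability zone, so the
# data-center egress firewalls can allow-list fixed addresses.
nlb = elbv2.create_load_balancer(
    Name="ivm-console-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-0aaa1111"},
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-0bbb2222"},
    ],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# Plain TCP target group pointing at the console instance's engine port.
tg = elbv2.create_target_group(
    Name="ivm-console-40815",
    Protocol="TCP",
    Port=40815,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
    HealthCheckProtocol="TCP",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# The console server EC2 instance (in a private subnet) is the only target.
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": "i-0console1234567890"}])

# TCP listener on 40815 so the reverse scan engines can reach the console
# through the NLB's static addresses.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=40815,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```

We suspect part of our connectivity trouble is the console instance's security group: we believe that with client IP preservation on an NLB instance target group, the rule has to allow tcp/40815 from the data-center source IPs rather than from the load balancer's addresses, but we would appreciate confirmation on that as well.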