AWS VPC
- Updated 2026-04-08
- 3 minute read
A Virtual Private Cloud (VPC) is an isolated network environment within AWS.
AWS VPC provides complete control over networking configuration, including:
- IP address ranges
- Subnets
- Routing tables
- Network gateways
For SystemLink Enterprise deployments, a properly configured VPC is essential for security, scalability, and network isolation.
Configure your VPC to isolate compute and storage infrastructure on private subnets, separate from internet-facing gateways and load balancers. This configuration follows cloud security best practices and ensures that sensitive workloads are not directly accessible from the internet.
For a general reference of the architecture, refer to the AWS SystemLink Enterprise Kubernetes Architecture Diagram.
Subnet Architecture
Configure your VPC with both public and private subnets to separate internet-facing components from internal infrastructure.
Deploy the following components in private subnets with no direct internet access:
- EKS cluster nodes: Worker nodes hosting SystemLink Enterprise pods, including web services, web applications, and supporting infrastructure.
- Databases: Amazon RDS instances for PostgreSQL, or self-managed database instances.
- VPC Endpoints for S3: Use VPC gateway endpoints to give private subnets direct access to Amazon S3 without sending traffic over the internet.
Deploy the following components in public subnets with internet gateway access:
- Application Load Balancers (ALB): Internet-facing load balancers for HTTPS traffic to the SystemLink web application and API.
- Network Load Balancers (NLB): TCP load balancers for Salt Master traffic (ports 4505 and 4506).
- NAT Gateways: Enable outbound internet access for resources in private subnets.
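The distinction between the two tiers comes down to their route tables: public subnets route the default route (0.0.0.0/0) to an internet gateway, while private subnets route it to a NAT gateway for outbound-only access. The sketch below encodes that rule; the target names are illustrative placeholders, not AWS resource identifiers.

```python
# Illustrative route tables for the two subnet tiers. The targets
# ("internet-gateway", "nat-gateway") are placeholder names, not AWS IDs.
ROUTE_TABLES = {
    # Public subnets: default route to the internet gateway (two-way internet access)
    "public": {"0.0.0.0/0": "internet-gateway"},
    # Private subnets: default route to a NAT gateway (outbound-only internet access)
    "private": {"0.0.0.0/0": "nat-gateway"},
}

def has_direct_internet_access(tier: str) -> bool:
    """A tier is internet-facing only if its default route targets an internet gateway."""
    return ROUTE_TABLES[tier].get("0.0.0.0/0") == "internet-gateway"
```

With this model, only the public tier is internet-facing, while private-subnet resources still reach the internet outbound through the NAT gateway.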
CIDR Block Planning
Plan your VPC CIDR blocks to ensure sufficient IP addresses for SystemLink Enterprise and future growth.
- Recommended VPC size: /16 CIDR block (65,536 IP addresses) provides flexibility for scaling
- Minimum VPC size: /20 CIDR block (4,096 IP addresses) for small to medium-sized deployments
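The prefix length alone determines the address count (2^(32 - prefix)), which you can confirm with Python's standard `ipaddress` module:

```python
import ipaddress

# A /16 VPC provides 2^(32-16) = 65,536 addresses
recommended = ipaddress.ip_network("10.0.0.0/16")
print(recommended.num_addresses)  # 65536

# A /20 VPC provides 2^(32-20) = 4,096 addresses
minimum = ipaddress.ip_network("10.0.0.0/20")
print(minimum.num_addresses)  # 4096
```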
Allocate subnets based on the following considerations:
- Private subnets for EKS nodes: Size based on maximum expected node count. Each EKS node requires one IP address from the subnet CIDR.
- Private subnets for pods: If using custom networking or VPC CNI custom mode, allocate additional subnets for pod IP addresses. Each pod requires its own IP address.
- Start with the default VPC CNI configuration where pods share node subnets unless you have specific requirements for pod-level network isolation.
- Monitor IP utilization carefully. Each node can consume 10-100+ IP addresses depending on the EC2 instance type.
- Plan for growth. Ensure subnet sizing can support significant scaling of your workload.
- Public subnets: A smaller allocation, such as /24, is sufficient for load balancers and NAT gateways.
- Multi-AZ deployment: Create subnet pairs (public/private) in at least two availability zones for high availability.
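As a sketch of the sizing arithmetic, the helper below (a hypothetical function, not part of any AWS tooling) computes the smallest subnet prefix that can hold a given number of addresses, allowing for the five addresses AWS reserves in every subnet. The node and per-node IP counts are illustrative, chosen within the 10-100+ range noted above.

```python
import ipaddress
import math

def smallest_prefix(addresses_needed: int) -> int:
    """Return the smallest subnet prefix length that provides
    addresses_needed usable addresses, accounting for the 5 addresses
    AWS reserves in each subnet."""
    total = addresses_needed + 5
    return 32 - math.ceil(math.log2(total))

# Example: 100 nodes, each consuming up to 80 IPs for itself and its
# pods, needs roughly 8,000 addresses -> a /19 subnet (8,192 addresses)
prefix = smallest_prefix(100 * 80)
subnet = ipaddress.ip_network(f"10.0.0.0/{prefix}")
print(prefix, subnet.num_addresses)  # 19 8192
```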
The following table shows an example subnet layout for a 10.0.0.0/16 VPC:
| Subnet Type | Availability Zone | CIDR Block | Total IP Addresses |
|---|---|---|---|
| Private (EKS nodes) | us-east-1a | 10.0.0.0/19 | 8,192 |
| Private (EKS nodes) | us-east-1b | 10.0.32.0/19 | 8,192 |
| Private (Databases) | us-east-1a | 10.0.64.0/24 | 256 |
| Private (Databases) | us-east-1b | 10.0.65.0/24 | 256 |
| Public | us-east-1a | 10.0.128.0/24 | 256 |
| Public | us-east-1b | 10.0.129.0/24 | 256 |
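The example layout above can be validated programmatically: every subnet must fall inside the VPC CIDR block, and no two subnets may overlap. A quick check with the standard `ipaddress` module:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnet_cidrs = [
    "10.0.0.0/19",    # Private (EKS nodes), us-east-1a
    "10.0.32.0/19",   # Private (EKS nodes), us-east-1b
    "10.0.64.0/24",   # Private (Databases), us-east-1a
    "10.0.65.0/24",   # Private (Databases), us-east-1b
    "10.0.128.0/24",  # Public, us-east-1a
    "10.0.129.0/24",  # Public, us-east-1b
]
subnets = [ipaddress.ip_network(c) for c in subnet_cidrs]

# Every subnet must be contained in the VPC CIDR block
assert all(s.subnet_of(vpc) for s in subnets)

# No two subnets may overlap
assert not any(a.overlaps(b)
               for i, a in enumerate(subnets)
               for b in subnets[i + 1:])
print("layout is valid")
```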
Security Best Practices
Observe the following security best practices when configuring your VPC for SystemLink Enterprise:
- No direct internet access for compute and storage: All EKS nodes, databases, and internal services must reside in private subnets with no internet gateway route.
- Traffic flow isolation: Internet traffic flows from the internet through the public ALB or NLB to the Kubernetes ingress controller in a private subnet, and then to the private SystemLink services.
- Outbound internet via NAT: Use NAT Gateways in public subnets to provide outbound internet access for private subnet resources. Deploy one NAT Gateway per availability zone for high availability.
- VPC endpoints: Use VPC endpoints for AWS services (S3) to avoid internet routing and reduce data transfer costs.
- Security groups: Configure security groups to allow only necessary traffic between components. Use the principle of least privilege.
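To illustrate the least-privilege principle, the sketch below models minimal inbound rules per tier as plain data. The Salt ports (4505/4506) come from this document and 5432 is the standard PostgreSQL port; the group names and the choice to expose only HTTPS (443) publicly are illustrative assumptions, not prescribed values.

```python
# Hedged sketch of least-privilege inbound rules, one entry per security
# group: (protocol, port, source). Sources are CIDR blocks or the names
# of other security groups in this sketch.
INBOUND_RULES = {
    "public-alb": [
        ("tcp", 443, "0.0.0.0/0"),     # HTTPS from the internet
    ],
    "public-nlb": [
        ("tcp", 4505, "0.0.0.0/0"),    # Salt Master publish port
        ("tcp", 4506, "0.0.0.0/0"),    # Salt Master request port
    ],
    "private-eks": [
        ("tcp", 443, "public-alb"),    # HTTPS only from the ALB
        ("tcp", 4505, "public-nlb"),   # Salt traffic only from the NLB
        ("tcp", 4506, "public-nlb"),
    ],
    "private-db": [
        ("tcp", 5432, "private-eks"),  # PostgreSQL only from EKS nodes
    ],
}

# Least privilege: private tiers never accept traffic from 0.0.0.0/0
for group, rules in INBOUND_RULES.items():
    if group.startswith("private"):
        assert all(src != "0.0.0.0/0" for _proto, _port, src in rules)
```

Referencing security groups (rather than CIDR ranges) as sources keeps each private tier reachable only from the specific upstream component that needs it.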
Availability Zone Considerations
Deploy SystemLink Enterprise across multiple availability zones for high availability and fault tolerance:
- Minimum deployment: Use at least two availability zones with subnet pairs in each zone.
- EKS node groups: Distribute worker nodes across availability zones.
- Database Multi-AZ: Enable Multi-AZ deployments for RDS PostgreSQL and DocumentDB.
- Load balancer distribution: Configure ALB and NLB to distribute traffic across subnets in multiple availability zones.
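Using the example layout from the CIDR planning table, a quick check that the minimum multi-AZ requirement is met (at least two availability zones, each with a public/private subnet pair):

```python
from collections import defaultdict

# (CIDR, availability zone, tier) -- the example layout from the
# CIDR planning table earlier in this document
SUBNETS = [
    ("10.0.0.0/19",   "us-east-1a", "private"),
    ("10.0.32.0/19",  "us-east-1b", "private"),
    ("10.0.64.0/24",  "us-east-1a", "private"),
    ("10.0.65.0/24",  "us-east-1b", "private"),
    ("10.0.128.0/24", "us-east-1a", "public"),
    ("10.0.129.0/24", "us-east-1b", "public"),
]

tiers_per_az = defaultdict(set)
for _cidr, az, tier in SUBNETS:
    tiers_per_az[az].add(tier)

# At least two AZs, each containing both a public and a private subnet
assert len(tiers_per_az) >= 2
assert all(tiers == {"public", "private"} for tiers in tiers_per_az.values())
```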
Related Information
- Preparing to Host and Operate SystemLink Enterprise
Before installing SystemLink Enterprise, ensure that the following network, compute, storage, and security infrastructure is in place.
- Public and Private Subnets
Public and private subnets are fundamental networking constructs in cloud environments that enable network segmentation and security isolation. Understanding how to properly configure these subnets is essential for deploying SystemLink Enterprise securely in AWS or Azure.
- Azure VNet
An Azure Virtual Network (VNet) is an isolated network environment within Azure.
- Internet Facing Clusters
- Corporate Network Connected Clusters
A corporate network connected cluster deployment integrates SystemLink with the private network infrastructure of your organization, ensuring secure access and integration with on-premises systems.
- Networks and TLS
Learn how to configure networking and Transport Layer Security (TLS) for SystemLink Enterprise.
- DNS and Network Security Considerations
SystemLink Enterprise is hosted in a Kubernetes cluster. SystemLink Enterprise connects to test systems to aggregate data for monitoring and analysis.
- Private Certificate Authorities
If you are using a private certificate authority (CA), you must configure SystemLink Enterprise to use the private CA to establish trust.
- Layer 7 (Application) Ingress
Layer 7 ingress provides application-level HTTPS load balancing and routing for web services. SystemLink Enterprise uses Layer 7 ingress to expose HTTP-based services through two separate ingress endpoints: one endpoint for the web UI and one endpoint for API access.
- Layer 7 Ingress in AWS
This section describes Layer 7 ingress configuration using the AWS Application Load Balancer (ALB) for SystemLink Enterprise deployed on Amazon EKS. The ALB provides HTTPS load balancing and routing for the SystemLink UI and API hosts.
- AWS Global Ingress Configuration
SystemLink Enterprise configures separate ingress resources for the UI endpoints and API endpoints. Configure the following annotations in your Helm configuration file.
- Layer 7 Ingress in Azure
This section describes Layer 7 ingress configuration using the Azure Application Gateway for SystemLink Enterprise deployed on Azure Kubernetes Service (AKS). The Application Gateway provides HTTPS load balancing and routing for the SystemLink UI and API hosts.
- Azure Global Ingress Configuration
SystemLink Enterprise configures separate ingress resources for the UI endpoints and API endpoints. Configure the following annotations in your Helm configuration file.
- Layer 7 Ingress in Traefik
SystemLink Enterprise supports Traefik Hub API Gateway as a Layer 7 ingress controller. Traefik Hub provides HTTPS load balancing and routing for the SystemLink UI and API hosts.
- Layer 4 (TCP) Ingress
Layer 4 ingress provides TCP-level load balancing for services that require direct TCP connections. SystemLink Enterprise uses Layer 4 ingress for the Salt Master service.
- Enabling Salt Communication in AWS
SystemLink Enterprise uses Salt to manage test systems. Salt communicates with test systems using a TCP-based protocol on ports 4505 and 4506. This section describes using the AWS Network Load Balancer (NLB) for Layer 4 (TCP) ingress with the Salt Master service.
- Enabling Salt Communication in Azure
SystemLink Enterprise uses Salt to manage test systems. Salt communicates with test systems using a TCP-based protocol on ports 4505 and 4506. This section describes using Azure Load Balancer for Layer 4 (TCP) ingress with the Salt Master service.
- AWS SystemLink Enterprise Kubernetes Architecture Diagram
- What is Amazon VPC?
- Amazon EKS VPC and Subnet Requirements
- SystemLink Environment Architecture
SystemLink Enterprise is an application with a service-oriented architecture composed of Kubernetes-hosted microservices. SystemLink Enterprise is scalable, fault-tolerant, and highly available. The following table summarizes the major components of the SystemLink Enterprise architecture.
- SystemLink Enterprise in AWS EKS
Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that simplifies running Kubernetes on AWS without needing to install and operate your own Kubernetes control plane.