SystemLink Enterprise in AWS EKS
- Updated 2026-05-14
- 7 minute(s) read
Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that simplifies running Kubernetes on AWS without needing to install and operate your own Kubernetes control plane.
This section provides guidance on configuring EKS clusters to host SystemLink Enterprise with the required networking, storage, security, and integration components.
Before deploying SystemLink Enterprise on EKS, ensure you have configured your AWS VPC with the appropriate public and private subnet architecture. Refer to AWS VPC for detailed networking guidance.
EKS Cluster Requirements
Configure your EKS cluster with the following specifications to support SystemLink Enterprise workloads.
NI recommends using a Kubernetes version that AWS EKS actively supports and that is compatible with your SystemLink Enterprise version. Refer to the SystemLink Enterprise and External Dependencies Compatibility Matrix for specific Kubernetes version compatibility.
Configure cluster endpoint access (private and/or public endpoints) based on your organizational security policies and operational requirements for cluster management.
EKS node groups provide compute capacity for running SystemLink Enterprise pods. Configure node groups based on workload requirements and scaling needs.
EKS supports two types of worker nodes:
- Managed node groups: AWS manages the lifecycle, updates, and scaling of EC2 instances.
- Self-managed nodes: You manage EC2 instances directly using Auto Scaling Groups. Use self-managed nodes if you require custom configurations not supported by managed node groups.
Select EC2 instance types and configure node group capacity based on your SystemLink Enterprise workload requirements. Consider the following when planning your node groups:
- Instance families: Choose instance types based on workload characteristics:
- General purpose
- Compute-optimized
- Memory-optimized
- High availability: Distribute nodes across at least two availability zones.
- Scaling capacity: Plan for minimum, desired, and maximum node counts to support autoscaling.
Refer to the SystemLink Enterprise pilot-sizing.yaml for example resource requests and limits for SystemLink services.
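As an illustration of the planning considerations above, a managed node group can be declared in an eksctl ClusterConfig. The names, instance type, counts, and availability zones below are assumptions for sketch purposes, not NI-prescribed values.

```yaml
# Hypothetical eksctl ClusterConfig fragment; names and sizes are
# illustrative only. Adjust to your sizing guidance.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: systemlink-cluster            # assumed cluster name
  region: us-east-1                   # assumed region
managedNodeGroups:
  - name: services                    # general-purpose pool for SystemLink services
    instanceType: m5.2xlarge          # example general-purpose instance type
    minSize: 2                        # minimum node count for autoscaling
    desiredCapacity: 3
    maxSize: 6                        # headroom for peak load
    privateNetworking: true           # place nodes in private subnets
    availabilityZones: ["us-east-1a", "us-east-1b"]  # at least two AZs
```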
SystemLink Enterprise supports workload segmentation across multiple node pools for optimal resource allocation and isolation. NI recommends the following node pool types for SystemLink Enterprise deployments:
- Services Node Pool: For general SystemLink services including web services, web applications, and supporting infrastructure.
- Notebook Execution Node Pool: For notebook execution workloads.
- Jupyter Node Pool: For Jupyter user environments.
- Dremio Node Pool: For DataFrame Service workloads.
Configure node selectors, tolerations, and affinities for your SystemLink Enterprise deployment to control pod placement on these node pools. Refer to the SystemLink Enterprise node-selectors.yaml for detailed configuration examples.
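As a minimal sketch of pod placement control, the pod spec fragment below assumes the Dremio nodes carry a `nodepool=dremio` label and a matching `NoSchedule` taint; the actual label and taint keys used by SystemLink's node-selectors.yaml may differ.

```yaml
# Illustrative pod scheduling fragment (pod spec level).
# The label/taint key "nodepool" is an assumption.
nodeSelector:
  nodepool: dremio          # schedule only onto the Dremio node pool
tolerations:
  - key: nodepool
    operator: Equal
    value: dremio
    effect: NoSchedule      # tolerate the taint applied to Dremio nodes
```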
Consider the following recommendations when sizing your SystemLink Enterprise node pools. Refer to Sizing examples for resource requirements of different services.
- Services Node Pool: Use general-purpose instances with sufficient CPU and memory for web services and infrastructure components. Size based on concurrent user load and API request volume.
- Notebook Execution Node Pool: Use memory-optimized or general-purpose instances depending on notebook workload characteristics. Size based on the number of concurrent notebook executions and the resource requirements of your analysis workloads.
- Jupyter Node Pool: Use general-purpose or memory-optimized instances based on user workspace requirements. Size based on the number of concurrent Jupyter users and typical notebook memory usage patterns.
- Dremio Node Pool: Use memory-optimized or general-purpose instances with high CPU and memory capacity for optimal DataFrame Service query performance. Dremio is resource intensive and requires substantial memory and CPU. Size based on data volume, query complexity, and concurrent query load.
For all node pools, configure appropriate minimum and maximum node counts to support autoscaling during peak usage periods while maintaining cost efficiency during low-demand periods.
Networking Configuration
EKS cluster networking integrates with your VPC configuration to provide connectivity for pods, services, and external access.
Deploy EKS worker nodes in private subnets across multiple availability zones for high availability. Ensure sufficient IP addresses for nodes and pods, as each requires one IP address when using the VPC CNI plugin. Refer to AWS VPC for detailed subnet architecture and IP address planning guidance.
The Amazon VPC CNI plugin is an essential component of EKS network configuration. It provides native VPC networking for Kubernetes pods by assigning IP addresses from your VPC subnets directly to each pod. The VPC CNI plugin is installed by default on all EKS clusters and enables pods to operate as first-class citizens in your AWS VPC with integrated security, monitoring, and networking capabilities.
Storage Configuration
SystemLink Enterprise requires persistent storage for application data and object storage for files and backups.
Install the Amazon EBS CSI driver as an EKS add-on to provision persistent volumes for SystemLink services. Configure an IAM role with EBS management permissions and associate it with the EBS CSI driver service account using EKS Pod Identity.
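With the EBS CSI driver installed, persistent volumes are provisioned through a StorageClass. The following is a sketch of a common gp3 configuration; the class name and parameters are illustrative, not values SystemLink requires.

```yaml
# Example StorageClass backed by the EBS CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3                           # assumed class name
provisioner: ebs.csi.aws.com              # the EBS CSI driver
volumeBindingMode: WaitForFirstConsumer   # bind the volume in the pod's AZ
parameters:
  type: gp3
  encrypted: "true"                       # encrypt volumes at rest
```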
Configure Amazon S3 for object storage. Create VPC gateway endpoints for S3 to enable private connectivity from cluster nodes. Use EKS Pod Identity to grant pods access to S3 buckets. Configure buckets with encryption, versioning, and lifecycle policies based on your data retention requirements. Refer to AWS VPC for VPC endpoint configuration and Object Storage for more details on S3 configuration.
IAM and Security
Configure IAM roles and security policies to control access to AWS resources and cluster components.
The EKS cluster requires an IAM role with permissions to manage cluster resources:
- Required policies: Attach the AmazonEKSClusterPolicy managed policy to the cluster role
- Custom policies: Add custom policies as needed for specific requirements, such as KMS encryption policies for encrypting Kubernetes secrets at rest
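For reference, the trust policy that allows the EKS service to assume the cluster role typically looks like the following sketch; verify against current AWS documentation before use.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```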
Worker nodes require an IAM role with permissions to join the cluster and access AWS services:
- Required policies: Attach the following managed policies to the node role:
- AmazonEKSWorkerNodePolicy
- AmazonEC2ContainerRegistryReadOnly
- AmazonEKS_CNI_Policy
- Additional permissions: Use EKS Pod Identity or IRSA to assign specific permissions to individual pods. This approach avoids granting broad permissions to the entire node group IAM role. Add custom policies to the node group IAM role only when required for node-level operations, such as KMS encryption policies for encrypting EBS volumes.
- Dremio node group: Dremio authenticates to S3 using EC2 instance metadata rather than EKS Pod Identity or IRSA. If you use a dedicated node group for Dremio, configure its IAM role as follows:
- Grant the Dremio node group IAM role read/write access to the S3 buckets.
- Ensure this access covers the S3 buckets used for both:
- Dremio distributed storage
- DataFrame Service data
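A policy attached to the Dremio node group IAM role might look like the following sketch. The bucket name is a placeholder; substitute the buckets you use for Dremio distributed storage and DataFrame Service data.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::example-dremio-bucket",
        "arn:aws:s3:::example-dremio-bucket/*"
      ]
    }
  ]
}
```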
NI recommends using EKS Pod Identity to grant IAM permissions to Kubernetes service accounts. EKS Pod Identity is a simpler alternative to IRSA that provides improved security and easier cluster portability:
- Simplified configuration: EKS Pod Identity does not require an OIDC provider or manual trust relationship configuration
- Service account associations: Create IAM roles and associate the roles with Kubernetes service accounts
- Cluster portability: Pod Identity associations are cluster-scoped, making it easier to replicate configurations across clusters
Configure EKS Pod Identity for SystemLink service accounts that require AWS resource access, such as the EBS CSI driver, AWS Load Balancer Controller, and SystemLink service pods requiring S3 access.
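One way to declare such an association is in an eksctl ClusterConfig, as in the sketch below. The role ARN and account ID are placeholders, and support for `podIdentityAssociations` in your eksctl version is an assumption to verify.

```yaml
# Hypothetical eksctl fragment associating an IAM role with the
# EBS CSI driver service account via EKS Pod Identity.
iam:
  podIdentityAssociations:
    - namespace: kube-system
      serviceAccountName: ebs-csi-controller-sa
      roleARN: arn:aws:iam::111122223333:role/EbsCsiDriverRole  # placeholder
```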
IRSA (IAM Roles for Service Accounts) is an alternative authentication method that grants fine-grained IAM permissions to Kubernetes service accounts. Most SystemLink Enterprise services support IRSA. NI recommends using EKS Pod Identity instead for simplified configuration and improved security.
If you choose to use IRSA, configure it as follows:
- OIDC provider: Create an IAM OIDC identity provider for your EKS cluster to enable IRSA
- Service account roles: Create IAM roles for specific service accounts:
- EBS CSI driver
- AWS Load Balancer Controller
- SystemLink service pods
- Trust relationships: Configure IAM role trust policies to allow assumption by specific service accounts
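An IRSA trust policy typically scopes role assumption to one service account through the OIDC provider, as in the sketch below. The account ID, region, OIDC provider ID, and service account name are all placeholders.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa",
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```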
Configure security groups to control network traffic to and from cluster components:
- Cluster security group: EKS automatically creates a cluster security group for control plane and node communication
- Node security group: Configure additional security groups for node groups to control inbound and outbound traffic
- Minimum required rules: Allow communication between nodes, control plane communication, and access to required AWS services
- Application access: Configure security group rules to allow load balancer traffic to reach cluster services
Load Balancer Integration
Configure load balancers to provide external access to SystemLink Enterprise services.
Install the AWS Load Balancer Controller to manage Application Load Balancers (ALB) and Network Load Balancers (NLB) from Kubernetes. Create an IAM role with load balancer management permissions and associate it with the controller service account using EKS Pod Identity.
Use ALB for Layer 7 HTTPS traffic to SystemLink web UI and API. Use NLB for Layer 4 TCP traffic to Salt Master. Configure load balancers based on your access requirements (internet-facing or private). Refer to Layer 7 (Application) Ingress and Layer 4 (TCP) Ingress for detailed configuration guidance.
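As a sketch of Layer 7 ingress through the AWS Load Balancer Controller, an Ingress resource with ALB annotations might look like the following. The hostname and backend service name are placeholders, not SystemLink's actual resource names.

```yaml
# Illustrative Ingress handled by the AWS Load Balancer Controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: systemlink-ui                                   # placeholder name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # or "internal"
    alb.ingress.kubernetes.io/target-type: ip           # route to pod IPs
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
spec:
  ingressClassName: alb
  rules:
    - host: systemlink.example.com                      # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: systemlink-ui-service             # placeholder service
                port:
                  number: 443
```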
Container Image Management
You can store SystemLink Enterprise container images in any container registry accessible from your EKS cluster. If using Amazon Elastic Container Registry (ECR), create ECR private repositories and VPC interface endpoints for private connectivity. Grant ECR pull permissions to the node group IAM role using the AmazonEC2ContainerRegistryReadOnly policy. Refer to Configuring SystemLink Repositories for more details on setting up container registries and mirrors.
Secrets Management
You can manage secrets like connection strings, access keys, certificates, and API keys using Kubernetes Secrets or external managers such as AWS Secrets Manager. You can create and refresh Kubernetes Secrets automatically by using the External Secrets Operator with values stored in AWS Secrets Manager. Enable encryption of Kubernetes Secrets at rest in etcd using customer-managed keys stored in AWS KMS. Refer to Required Secrets for more details on secret management.
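As a sketch of the External Secrets Operator pattern, the resource below pulls a value from AWS Secrets Manager into a Kubernetes Secret. The store, key, and secret names are placeholders, and a SecretStore configured for Secrets Manager is assumed to exist.

```yaml
# Illustrative ExternalSecret; assumes a ClusterSecretStore named
# "aws-secrets-manager" is already configured.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: systemlink-db-credentials       # placeholder name
spec:
  refreshInterval: 1h                   # re-sync from Secrets Manager hourly
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: db-credentials                # Kubernetes Secret to create
  data:
    - secretKey: connection-string
      remoteRef:
        key: prod/systemlink/db         # placeholder Secrets Manager name
```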
Monitoring and Logging
Configure monitoring and logging based on your organizational requirements. For AWS-native monitoring, you can enable EKS control plane logging to send audit and API server logs to CloudWatch Logs. You can also install CloudWatch Container Insights for metrics collection. Alternatively, configure integration with your existing monitoring infrastructure. Refer to Observing a SystemLink Enterprise Environment for more details on monitoring and logging configuration.
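If you manage the cluster with eksctl, control plane log types can be enabled with a fragment like the one below; the chosen log types are illustrative.

```yaml
# Hypothetical eksctl ClusterConfig fragment enabling control plane
# logging to CloudWatch Logs.
cloudWatch:
  clusterLogging:
    enableTypes: ["audit", "api", "authenticator"]  # example log types
```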
Upgrading and Maintenance
Plan for regular cluster upgrades and maintenance activities. Upgrade EKS clusters one minor version at a time, following the sequence: control plane, node groups, then add-ons. Test upgrades in non-production environments before upgrading production clusters.
Related Information
- Supported Cloud Providers
NI has validated supported cloud providers and continuously tests SystemLink Enterprise in these environments for production workloads.
- SystemLink Enterprise and External Dependencies Compatibility Matrix
- SystemLink Enterprise pilot-sizing.yaml
- SystemLink Enterprise sizing examples
- SystemLink Enterprise node-selectors.yaml
- What is Amazon EKS?
- Amazon EKS Networking
- Amazon EKS Storage
- Amazon EKS IAM
- External Secrets Operator
- SystemLink Environment Architecture
SystemLink Enterprise is an application with a service-oriented architecture composed of microservices hosted in Kubernetes. SystemLink Enterprise is scalable, fault-tolerant, and highly available. The following table summarizes the major components of the SystemLink Enterprise architecture.
- AWS VPC
A Virtual Private Cloud (VPC) is an isolated network environment within AWS.
- Layer 7 (Application) Ingress
Layer 7 ingress provides application-level HTTPS load balancing and routing for web services. SystemLink Enterprise uses Layer 7 ingress to expose HTTP-based services through two separate ingress endpoints: one endpoint for the web UI and one endpoint for API access.
- Layer 4 (TCP) Ingress
Layer 4 ingress provides TCP-level load balancing for services that require direct TCP connections. SystemLink Enterprise uses Layer 4 ingress for the Salt Master service.
- Object Storage
Several SystemLink Enterprise services require an object storage provider. SystemLink Enterprise supports the following storage providers:
- Configuring SystemLink Repositories
Configure the NI public Helm repository and mirror it on an internal server.
- Required Secrets
Secrets are Kubernetes objects that are used to store sensitive information.
- Observing a SystemLink Enterprise Environment
SystemLink Enterprise supports integration with observability tools. With these tools, you can monitor application performance, trace requests across microservices, and aggregate logs for troubleshooting.
- Connecting SystemLink Enterprise to Log Aggregation Tools
SystemLink Enterprise services generate logs that can be collected and forwarded to log aggregation tools for centralized monitoring, searching, and analysis.