Install SystemLink Enterprise using Helm commands.

When installing SystemLink Enterprise, you typically perform configuration in three primary files:

  1. systemlink-values.yaml: Configures most of the application.
  2. systemlink-admin-values.yaml: Defines global resources in the SystemLink Admin Helm chart. These resources must be installed by Helm before installing the SystemLink Helm chart.
  3. systemlink-secrets.yaml: Defines Secrets in Helm. NI recommends using a tool such as sops to encrypt the file or the secret values within it. Deploying Secrets with this file is optional. If you do not want to deploy Secrets with this file, set global.deploySecrets to false in systemlink-values.yaml.
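
For example, if you plan to manage Secrets outside of Helm, you can disable secret deployment with a minimal override in systemlink-values.yaml (a sketch using only the value named above):

```yaml
global:
  deploySecrets: false
```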

Creating Namespaces

Create namespaces to organize your cluster.

  • Create a namespace for the SystemLink Helm chart:
    kubectl create namespace <namespace>
  • Create a namespace for the SystemLink Admin Helm chart:
    kubectl create namespace <admin-namespace>

Preparing Certificates

Configure certificates for Transport Layer Security (TLS) communication and authentication with external resources.

Note SystemLink Enterprise endpoints require TLS to function. NI recommends using TLS for all communication between SystemLink Enterprise and external data storage and identity providers. Only obtain certificates from trusted sources.

If you are using a certificate to authenticate and encrypt communication with your PostgreSQL instances, refer to PostgreSQL to deploy and reference these certificates in Helm.

If you are using a certificate signed by a private authority for the SystemLink Enterprise hostnames, MongoDB, or S3, refer to Private Certificate Authorities to deploy and reference these certificates in Helm.
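
Regardless of where a certificate comes from, you can confirm its subject, issuer, and validity window with openssl before deploying it. The following is a generic sketch; the file name ca.pem is a placeholder for your actual certificate, and the first command only generates a throwaway certificate for illustration:

```shell
# Generate a throwaway self-signed certificate purely for illustration.
# In practice, ca.pem is the certificate you received from your authority.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -days 1 -subj "/CN=Example CA"

# Inspect the subject, issuer, and expiration date before deploying.
openssl x509 -in ca.pem -noout -subject -issuer -enddate
```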

Installing Cluster Prerequisites

Install prerequisite resources globally on the cluster.

One of the following users must complete these steps on the SystemLink Admin Helm chart:

  • A cluster administrator with full access rights.
  • An Argo Workflows user deploying only CustomResourceDefinitions.
  • A Flink Operator user deploying ClusterRoles and ClusterRoleBindings. The Flink Operator may require permissions to deploy across namespaces.

For more details on Kubernetes permissions required for installation, refer to Required Kubernetes Permissions.

Download the SystemLink Admin Values File

Download a copy of systemlink-admin-values.yaml.

If you already have Argo Workflows CRDs installed as part of another deployment, set argoworkflowscrds.crds.install to false:
argoworkflowscrds:
  crds:
    install: false
If you already have Flink Operator deployed to your Kubernetes cluster, set flinkoperator.enabled to false:
flinkoperator:
  enabled: false
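
If both conditions apply to your cluster, the two overrides can live together in a single systemlink-admin-values.yaml (a sketch combining only the keys shown above):

```yaml
argoworkflowscrds:
  crds:
    install: false
flinkoperator:
  enabled: false
```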

Install Prerequisites

helm upgrade <admin-release> oci://downloads.artifacts.ni.com/ni-docker/ni/helm-charts/systemlinkadmin --install --version <version> --namespace <admin-namespace> --values systemlink-admin-values.yaml --values systemlink-values.yaml --values systemlink-secrets.yaml --wait --timeout 20m0s
Table 8. Cluster Install Prerequisites
Parameter Description
admin-release The release name used for installing the SystemLink Admin Helm chart.
downloads.artifacts.ni.com/ni-docker The URL of the registry. If using a local mirror, replace this URL with the URL of the mirror registry.
version The specific version of the software to install.
admin-namespace The namespace created for the SystemLink Admin Helm chart.

This command waits for up to the configured timeout for the install to complete and for all resources to enter a ready state. The default timeout is 20 minutes. The timeout is conservative but installation times might vary due to a variety of factors. Adjust the timeout if needed.

Configuring SystemLink Enterprise

Before installing SystemLink Enterprise, you must configure your SystemLink values files. Download the template configuration files from the SystemLink Enterprise GitHub repository to get started.

Configuration parameters for systemlink-values.yaml, systemlink-admin-values.yaml, and systemlink-secrets.yaml are documented throughout this manual. Each topic in this manual references the specific Helm values that apply to that configuration area.

Configuring an Elasticsearch Instance

Configure SystemLink Enterprise to access a remote Elasticsearch database to enhance scalability and performance.

Follow these steps if any of the following conditions apply.

  • You are upgrading from a SystemLink Enterprise version before 2025-07.
  • You want to improve your search performance.
Note This feature is currently available only for the FileIngestion and Asset services.

Choosing an Elasticsearch Deployment

SystemLink uses Elasticsearch to improve search performance. You can use an Elasticsearch instance in the same Kubernetes cluster as your SystemLink Enterprise installation or an external instance.

Use the following table to choose the Elasticsearch deployment that best suits your use case.
Table 9. Elasticsearch Deployment Options
Deployment When to Use Details
SystemLink Elasticsearch Helm chart
  • You need your database in the same Kubernetes cluster as your SystemLink Enterprise installation.
  • Your organization is comfortable managing an Elasticsearch instance.
  • You want user autoprovisioning and user-dedicated configurations for SystemLink Enterprise.

You can run this instance on existing Kubernetes worker nodes or dedicated worker nodes using taints and tolerations.

For more information and recommended resources, refer to Sizing Considerations when Deploying an Elasticsearch Instance.

Elastic Cloud You want to simplify database provisioning, operation, backup, and restore operations. For more information and recommended resources, refer to Sizing Considerations when Deploying an Elasticsearch Instance.

Configuring the SystemLink Elasticsearch Helm Chart with Enabled Autoprovisioning

To configure Elasticsearch for the first time, you must provision the passwords.

  1. Open the elasticsearch.yaml file.
  2. Set the sl-elasticsearch.usersProvisioning.enabled value to true.
  3. Open the elasticsearch-secrets.yaml file.
  4. Set the password for each index.
    Table 10. Indexes for Enabled Autoprovisioning
    Service User Password
    assetservicecdc assetscdc sl-elasticsearch.secrets.assetscdcPassword
    fileingestioncdc filescdc sl-elasticsearch.secrets.filescdcPassword
  5. Deploy Elasticsearch.
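
Taken together, the settings from the steps above might look like the following sketch. The password values are hypothetical placeholders; substitute your own:

```yaml
# In elasticsearch.yaml, enable user autoprovisioning:
sl-elasticsearch:
  usersProvisioning:
    enabled: true
---
# In elasticsearch-secrets.yaml, set the password for each index user.
# The placeholder values here are hypothetical; substitute your own.
sl-elasticsearch:
  secrets:
    assetscdcPassword: "<assetscdc-password>"
    filescdcPassword: "<filescdc-password>"
```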

Configuring a Remote Elasticsearch Instance or the SystemLink Elasticsearch Helm Chart with Disabled Autoprovisioning

To configure Elasticsearch for the first time, you must provision the indexes.

  1. Open the systemlink-secrets.yaml file.
  2. Set the password for each index.
    Note Some services require privileges on multiple indexes. For example, if the files,files_* parameter is specified, the service requires privileges for the following indexes:
    • The files index.
    • All indexes that match the files_* pattern (where * is a wildcard).
    Table 11. Indexes for Disabled Autoprovisioning
    Service Indexes User Password
    assetservice assets,assets_* assetscdc assetservice.secrets.elasticsearch.password
    assetservicecdc assets,assets_* assetscdc assetservicecdc.secrets.elasticsearch.password
    fileingestion files,files_* filescdc fileingestion.secrets.elasticsearch.password
    fileingestioncdc files,files_* filescdc fileingestioncdc.secrets.elasticsearch.password
  3. Deploy Elasticsearch.
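
The corresponding entries in systemlink-secrets.yaml might look like the following sketch, built from the key paths in the table above. The password values are hypothetical placeholders; substitute your own:

```yaml
assetservice:
  secrets:
    elasticsearch:
      password: "<assetservice-password>"
assetservicecdc:
  secrets:
    elasticsearch:
      password: "<assetservicecdc-password>"
fileingestion:
  secrets:
    elasticsearch:
      password: "<fileingestion-password>"
fileingestioncdc:
  secrets:
    elasticsearch:
      password: "<fileingestioncdc-password>"
```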

Sizing Considerations When Deploying an Elasticsearch Instance

Resource requirements are based on service usage. Refer to the following table for tested configurations at a specified scale when configuring resources based on your expected usage.

Configure the Elasticsearch instances to handle the scale of your data.

Note These resource requirements increase as Elasticsearch usage increases.
Table 12. Elasticsearch Instance Sizing Considerations
Usage Level | Scale | Nodes | CPU | RAM | Persistence
Baseline | 25000 assets | 2 | 1 | 4 GB | 1 GB
General | 25000 assets and 25 million files | 2 | 2 | 2 | 4 GB
High | 25000 assets and 80 million files | 4 | 2 | 4 GB | 200 GB

Based on your scale, select and apply a configuration.

  1. Open the elasticsearch.yaml file.
  2. Set the sl-elasticsearch.elasticsearch.master.replicaCount value to the listed nodes.
    Note For an optimal configuration, the number of nodes must not be smaller than the highest configured number of primary shards.
  3. Set the sl-elasticsearch.elasticsearch.master.resources.requests.cpu value to the listed CPU.
  4. Set the sl-elasticsearch.elasticsearch.master.resources.requests.memory value and the sl-elasticsearch.elasticsearch.master.resources.limits.memory value to the listed RAM.
  5. Set the sl-elasticsearch.elasticsearch.master.persistence.size value to the listed persistence storage size.
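
As an example, the High usage row from the table above maps to the following elasticsearch.yaml sketch. The Gi units are an assumption; adjust them to your storage class conventions:

```yaml
sl-elasticsearch:
  elasticsearch:
    master:
      replicaCount: 4        # Nodes
      resources:
        requests:
          cpu: 2             # CPU
          memory: 4Gi        # RAM
        limits:
          memory: 4Gi        # RAM limit matches the request
      persistence:
        size: 200Gi          # Persistence
```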

Configuring the Number of Primary Shards

Optimize your SystemLink configuration by ensuring that each service configures no more primary shards than the number of nodes in Elasticsearch.

The following table contains configurations that NI tested at specific scales for the services.

Table 13. Tested Service Configurations
Service Scale Primary shards
Asset Service 25000 assets 1
FileIngestion Service 25 million files 2
FileIngestion Service 80 million files 4
  1. Open the systemlink-values.yaml file.
  2. Set the number of shards for the following variables:
    • assetservicecdc.job.connectors.sink.elasticsearch.index.primaryShardsCount
    • fileingestioncdc.job.connectors.sink.elasticsearch.index.primaryShardsCount
  3. Save the systemlink-values.yaml file.
Note A shard configuration only works on the initial deployment. To change the configuration after the first deployment, you must manually delete the files index and assets index from Elasticsearch. Then you can redeploy the FileIngestionCDC application or AssetServiceCDC application.
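
Using the tested configurations from Table 13, the shard settings in systemlink-values.yaml at the 80-million-file scale might look like this sketch:

```yaml
assetservicecdc:
  job:
    connectors:
      sink:
        elasticsearch:
          index:
            primaryShardsCount: 1
fileingestioncdc:
  job:
    connectors:
      sink:
        elasticsearch:
          index:
            primaryShardsCount: 4
```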

Installing the Application

Install SystemLink Enterprise on the cluster.

The user who performs the installation does not need access to the full cluster. However, the user must have full access to the namespace created for the application.

For more details on Kubernetes permissions required for installation, refer to Required Kubernetes Permissions.
Note This topic assumes that you named the database certificate postgres.pem, but you can use any name. SystemLink Enterprise deploys the certificate as a ConfigMap resource.

Install SystemLink Enterprise

helm upgrade <release> oci://downloads.artifacts.ni.com/ni-docker/ni/helm-charts/systemlink --install --version <version> --namespace <namespace> --values systemlink-values.yaml --values systemlink-secrets.yaml --set-file database.postgresCertificate=postgres.pem --wait --timeout 20m0s
Table 14. SystemLink Enterprise Install Parameters
Parameter Description
release The name Helm assigns to the installed collection of software.
downloads.artifacts.ni.com/ni-docker The URL of the registry. If using a local mirror, replace this URL with the URL of the mirror registry.
version The specific version of the software to install.
namespace The namespace for the application.

This command waits for up to the configured timeout for the install to complete and for all resources to enter a ready state. The default timeout is 20 minutes. The timeout is conservative but installation times might vary due to a variety of factors. Adjust the timeout if needed.

Note You can install multiple instances of SystemLink Enterprise on the same cluster. To install multiple instances, repeat the preceding commands with a different namespace and different values for each instance. Cluster prerequisites install only once for all instances.

Validating the Install

Test that SystemLink Enterprise installed correctly.

You can validate a successful SystemLink Enterprise install by inspecting the readiness probes for the pods deployed by the SystemLink Enterprise Helm chart using either of the following methods:

  • Using an application, such as Lens.
  • Running the following command:
    kubectl describe pod <pod-name> -n <namespace>
If a pod does not enter the ready state and continuously restarts after several minutes, NI recommends debugging the pod by inspecting its logs with the following command:
kubectl logs <pod-name> -n <namespace>