Several SystemLink Enterprise services require an object storage provider. SystemLink Enterprise supports the following file storage providers:
  • Amazon S3 Storage
  • Amazon S3 Compatible Storage
  • Azure Blob Storage
Note An Amazon S3 compatible file storage provider must implement the full Amazon S3 API. For more information, refer to the Amazon S3 API Reference. The data frame service does not support the GCS Amazon S3 interoperable XML API.

The parameters referenced in the following tables for Amazon S3 storage and Azure Blob storage are typically shared across multiple configurations. Sharing occurs through YAML anchor syntax in the Helm values files. This syntax provides a convenient way to share a common configuration throughout your values files. You can override individual references to these values with custom values.
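As a sketch of this anchor pattern, a shared S3 configuration could be defined once and reused by alias within a values file. The top-level key and the anchor name (s3-config) below are illustrative, not required names, and the endpoint values are placeholders:

```yaml
# Define the shared S3 settings once and attach an anchor (&s3-config).
# The "common" key and anchor name are illustrative.
common:
  s3: &s3-config
    host: "s3.amazonaws.com"
    port: 443
    scheme: "https"
    region: "us-east-1"

# Reuse the shared settings by alias (*s3-config). Individual keys can be
# overridden after the merge key (<<:) where needed.
feedservice:
  storage:
    s3:
      <<: *s3-config
      bucket: "systemlink-feeds"

fileingestion:
  storage:
    s3:
      <<: *s3-config
      region: "us-west-2"   # example of overriding a single shared value
```

Note that YAML anchors and aliases resolve only within a single file, so shared values must live in the same values file as the references to them.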

Amazon S3 and Amazon S3 Compatible Storage Providers

Note You can encrypt objects in Amazon S3 storage using either SSE-S3 or SSE-KMS with a bucket key. For more information, refer to Protecting Amazon S3 Data with Encryption.

Set the following configuration in your aws-supplemental-values.yaml Helm configuration file or storage-values.yaml Helm configuration file.

You can configure secret references in the aws-secrets.yaml file, the storage-secrets.yaml file, or directly on the cluster.

Table 73. Configurable Parameters
Parameters before the 2025-07 release:
  • Not applicable
Parameters starting with the 2025-07 release:
  • dataframeservice.storage.type
  • fileingestion.storage.type
  • feedservice.storage.type
  • nbexecservice.storage.type
Details: This value represents the service storage type. Set the value to s3.

Parameters before the 2025-07 release:
  • dataframeservice.s3.port
  • fileingestion.s3.port
  • feedservice.s3.port
  • nbexecservice.s3.port
Parameters starting with the 2025-07 release:
  • dataframeservice.storage.s3.port
  • fileingestion.storage.s3.port
  • feedservice.storage.s3.port
  • nbexecservice.storage.s3.port
Details: This value represents the storage provider service port number.

Parameters before the 2025-07 release:
  • dataframeservice.s3.host
  • fileingestion.s3.host
  • feedservice.s3.host
  • nbexecservice.s3.host
Parameters starting with the 2025-07 release:
  • dataframeservice.storage.s3.host
  • fileingestion.storage.s3.host
  • feedservice.storage.s3.host
  • nbexecservice.storage.s3.host
Details: This value represents the storage provider service hostname.

Parameters before the 2025-07 release:
  • dataframeservice.s3.schemeName
  • fileingestion.s3.scheme
  • feedservice.s3.scheme
  • nbexecservice.s3.scheme
Parameters starting with the 2025-07 release:
  • dataframeservice.storage.s3.schemeName
  • fileingestion.storage.s3.scheme
  • feedservice.storage.s3.scheme
  • nbexecservice.storage.s3.scheme
Details: This value represents the storage provider service scheme. This value is typically https.

Parameters before the 2025-07 release:
  • dataframeservice.s3.region
  • fileingestion.s3.region
  • feedservice.s3.region
  • nbexecservice.s3.region
Parameters starting with the 2025-07 release:
  • dataframeservice.storage.s3.region
  • fileingestion.storage.s3.region
  • feedservice.storage.s3.region
  • nbexecservice.storage.s3.region
Details: This value represents the AWS region in which the S3 bucket is located.

Parameter (unchanged across releases):
  • dataframeservice.sldremio.distStorage
Details: These settings configure the distributed storage that is required for the data frame service.

Parameters (unchanged across releases):
  • dataframeservice.storage.s3.auth.secretName
  • fileingestion.storage.s3.secretName
  • feedservice.storage.s3.secretName
  • nbexecservice.storage.s3.secretName
Details: Secret name for credentials used to connect to the storage provider service.
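As an illustration of the renaming, a parameter such as fileingestion.s3.host moves under a storage block starting with the 2025-07 release. The values below are placeholders:

```yaml
# Before the 2025-07 release (illustrative values):
# fileingestion:
#   s3:
#     host: "s3.amazonaws.com"

# Starting with the 2025-07 release:
fileingestion:
  storage:
    s3:
      host: "s3.amazonaws.com"
      port: 443
      scheme: "https"
      region: "us-east-1"
```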

Beginning with the 2025-11 release, the fileingestioncdc service adds the following parameters.

Table 74. 2025-11 Release Parameters
Parameter Details
fileingestioncdc.highAvailability.storage.s3.port This value represents the port number of the storage provider service.
fileingestioncdc.highAvailability.storage.s3.scheme This value represents the scheme of the storage provider service. This value is typically https.
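In values-file form, and assuming a default AWS endpoint, these parameters might be set as follows (the values shown are placeholders):

```yaml
fileingestioncdc:
  highAvailability:
    storage:
      s3:
        port: 443
        scheme: "https"
```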

Connecting Services to S3 through IAM

Assign an IAM role to connect services to Amazon S3. Configure service accounts and IAM role annotations in your Helm values file.

  • Create a service account for each service by setting serviceAccount.create: true in your Helm values.
    Note Flink services do not require this configuration. The Flink Operator manages the service account.
  • Create an IAM policy with the following statement:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "s3:PutObject",
            "s3:ListBucket",
            "s3:GetObject",
            "s3:DeleteObject",
            "s3:AbortMultipartUpload"
          ],
          "Resource": [
            "<s3_bucket_ARN>/*",
            "<s3_bucket_ARN>"
          ]
        }
      ]
    }
    Note The <s3_bucket_ARN> placeholder represents the Amazon Resource Name for the S3 bucket of the service.
  • Create an IAM role that applies the IAM policy.
    Note Most IAM roles use the following naming convention: <release-name>-<service-name>-role. For example, systemlink-feedservice-role. Flink services share the same configuration as the Flink Operator and use: <release-name>-flink-role.
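The IAM role also needs a trust policy that allows the Kubernetes service account to assume it through the cluster's OIDC provider, which is the standard EKS web identity (IRSA) pattern implied by the AWS_WEB_IDENTITY_TOKEN auth type. The following is a sketch only; the OIDC provider ID, namespace, and service account name are placeholders specific to your cluster and release:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<oidc-id>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<region>.amazonaws.com/id/<oidc-id>:sub": "system:serviceaccount:<namespace>:<service-account-name>"
        }
      }
    }
  ]
}
```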
Table 75. Service Configurations
Service Configuration
DataFrame Service This service does not currently support IAM.
Feed Service
feedservice:
  storage:
    s3:
      authType: "AWS_WEB_IDENTITY_TOKEN"
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: "arn:aws:iam::<account-id>:role/<release-name>-feedservice-role"
File Ingestion Service
fileingestion:
  storage:
    s3:
      authType: "AWS_WEB_IDENTITY_TOKEN"
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: "arn:aws:iam::<account-id>:role/<release-name>-fileingestion-role"
File Ingestion CDC
fileingestioncdc:
  highAvailability:
    storage:
      s3:
        authType: "AWS_WEB_IDENTITY_TOKEN"
flinkoperator:
  flink-kubernetes-operator:
    jobServiceAccount:
      annotations:
        eks.amazonaws.com/role-arn: "arn:aws:iam::<account-id>:role/<release-name>-flink-role"
Notebook Execution Service
nbexecservice:
  storage:
    s3:
      authType: "AWS_WEB_IDENTITY_TOKEN"
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: "arn:aws:iam::<account-id>:role/<release-name>-executions-role"

Connecting Services to S3 using Access Keys

Access key authentication is required when IAM authentication is not available:

  • DataFrame Service: This service does not currently support IAM authentication.
  • S3-compatible storage providers: IAM authentication is only available for AWS S3.

In your systemlink-values.yaml or aws-supplemental-values.yaml file, specify the S3 connection parameters and secret reference:

feedservice:
  storage:
    s3:
      secretName: "feeds-s3-credentials"
      accessKeyIdName: "aws-access-key-id"
      accessKeyName: "aws-secret-access-key"
      authType: "ACCESS_KEY"
      bucket: "systemlink-feeds"
      scheme: "https://"
      host: "s3.amazonaws.com"
      port: 443
      region: "us-east-1"

In your systemlink-secrets.yaml or aws-secrets.yaml file, provide the access credentials:

feedservice:
  secrets:
    s3:
      accessKeyId: "<your-access-key-id>"
      accessKey: "<your-secret-access-key>"

The same pattern applies to other services when IAM authentication is not available.

Note When deploying on AWS with Amazon S3, NI recommends using IAM authentication where supported for improved security and credential management.

Azure Blob Storage Providers

Note For the Data Frame service storage account, you must disable blob soft delete and hierarchical namespace.

Set the following configuration in your azure-supplemental-values.yaml Helm configuration file or storage-values.yaml Helm configuration file.

You can configure secret references in the azure-secrets.yaml file, the storage-secrets.yaml file, or directly on the cluster.

Table 76. Configurable Parameters
Parameters Starting with the 2025-07 Release Details
  • dataframeservice.storage.type
  • fileingestion.storage.type
  • fileingestioncdc.highAvailability.storage.type
  • feedservice.storage.type
  • nbexecservice.storage.type
This value represents the storage type of the service. Set the value to azure.
  • dataframeservice.storage.azure.blobApiHost
  • fileingestion.storage.azure.blobApiHost
  • fileingestioncdc.highAvailability.storage.azure.blobApiHost
  • feedservice.storage.azure.blobApiHost
  • nbexecservice.storage.azure.blobApiHost

This value represents the host of the Azure Blob storage without the account name. For example, you can set the value to blob.core.windows.net or blob.core.usgovcloudapi.net.

If your storage does not use the default port, add the port to the end of the host. For example, blob.core.windows.net:1234.

  • dataframeservice.storage.azure.dataLakeApiHost

This value represents the host of the Azure Data Lake Storage to connect to, without the account name. For example, you can set the value to dfs.core.windows.net.

If your storage does not use the default port, add the port to the end of the host. For example: dfs.core.windows.net:1234.

  • dataframeservice.storage.azure.accountName
  • fileingestion.storage.azure.accountName
  • fileingestioncdc.highAvailability.storage.azure.accountName
  • feedservice.storage.azure.accountName
  • nbexecservice.storage.azure.accountName
This value represents the storage account for your service. NI recommends using different storage accounts for different services.

Connecting Services to Azure Blob Storage

To configure Azure Blob Storage authentication, you must configure both the values file and the secrets file.

In your azure-supplemental-values.yaml or storage-values.yaml file, specify the Azure storage parameters:

feedservice:
  storage:
    type: "azure"
    azure:
      accountName: "<your-azure-storage-account-name>"
      blobApiHost: "blob.core.windows.net"

In your azure-secrets.yaml file, provide the access credentials:

feedservice:
  secrets:
    azure:
      accessKey: "<your-azure-storage-access-key>"

The same pattern applies to other services.

Limits and Cost Considerations for File Storage

Refer to the following configurations to adjust limits and manage costs for file storage services.

Table 77. File Storage Considerations
Consideration Configuration
Reduce storage costs
Configure your storage provider to clean up incomplete multipart uploads. If you are using Amazon S3, configure the AbortIncompleteMultipartUpload lifecycle rule on your S3 buckets.
Note Azure storage automatically deletes uncommitted blocks after seven days. For other S3 compatible providers, refer to the provider documentation.
Adjust the number of files a single user can upload per second

Configure the fileingestion.rateLimits.upload value.

By default, the value is 3 files per second per user. Because uploads are load balanced across replicas, the effective rate can be higher than the specified rate.

Adjust the maximum file size that users can upload

Configure the fileingestion.uploadLimitGB value.

By default, the value is 2 GB.

Adjust the number of concurrent requests that a single replica can serve for ingesting data

Configure the dataframeservice.rateLimits.ingestion.requestLimit value.
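As a sketch, the limits above could be overridden together in a values file. The numbers here are illustrative examples, not recommendations:

```yaml
fileingestion:
  rateLimits:
    upload: 5          # files per second per user; default is 3
  uploadLimitGB: 10    # maximum upload size in GB; default is 2

dataframeservice:
  rateLimits:
    ingestion:
      requestLimit: 20 # concurrent ingestion requests per replica; example value
```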