Configuring Dremio
- Updated 2025-04-25
To avoid node resource contention between Dremio deployments and other deployments in the cluster, complete the following steps.
- Apply a taint named dremio with a value of true and a NoSchedule effect.
kubectl: kubectl taint nodes <your-node-name> dremio=true:NoSchedule
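A NoSchedule taint keeps new pods off the node unless they carry a matching toleration. The SystemLink Helm chart is expected to apply the required toleration to the Dremio pods for you; for reference only, a toleration that matches the taint above would look like this (illustrative Kubernetes pod-spec fragment, not a value you need to set):

```yaml
# Pod-spec fragment: tolerates the dremio=true:NoSchedule taint
tolerations:
  - key: "dremio"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
```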
- Apply a label named dremio with a value of true.
kubectl: kubectl label nodes <your-node-name> dremio=true
- To clear pods that Kubernetes already scheduled to this node, manually drain the node.
kubectl: kubectl drain --ignore-daemonsets <your-node-name>
- Open systemlink-values.yaml.
- Set dataframeservice.sldremio.zookeeper.count to the number of nodes that have the dremio label.
- Set dataframeservice.sldremio.nodeSelector to dremio: "true".
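Assuming three labeled nodes, the two settings above would appear in systemlink-values.yaml roughly as follows; the node count here is an example, so match it to your cluster:

```yaml
dataframeservice:
  sldremio:
    zookeeper:
      count: 3          # example: number of nodes labeled dremio=true
    nodeSelector:
      dremio: "true"    # schedule Dremio pods only on labeled nodes
```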
- Adjust the following parameters as needed so that the tainted nodes can accommodate the pods.
- dataframeservice.sldremio.coordinator.cpu
- dataframeservice.sldremio.coordinator.memory
- dataframeservice.sldremio.executor.cpu
- dataframeservice.sldremio.executor.memory
- dataframeservice.sldremio.executor.count
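As a sketch, these resource parameters sit alongside the settings above in systemlink-values.yaml. The values shown are placeholders, not recommendations; size them to the capacity of your tainted nodes and use the units your chart expects:

```yaml
dataframeservice:
  sldremio:
    coordinator:
      cpu: 4            # placeholder value
      memory: 16384     # placeholder value
    executor:
      cpu: 4            # placeholder value
      memory: 16384     # placeholder value
      count: 2          # placeholder number of executor pods
```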
Note: Substantially reducing resource requests and executor counts below the defaults might diminish DataFrame Service query performance.
Related Information
- Taints and Tolerations
- Assign Pods to Nodes
- Safely Drain a Node
- Configuring File Storage: Configure how SystemLink Enterprise stores files.