Abstract
Nautobot Helm Chart#
Repository Contents#
Usage#
You’ll need to provide either a MySQL or PostgreSQL server along with a Redis server to deploy successfully.
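For example, a values override file along the lines of the sketch below can point the chart at existing PostgreSQL and Redis services; the hostnames and credentials are placeholders, the keys mirror the database and config values documented further down this page, and the file can be passed to helm install with the -f flag.
# values.override.yaml - illustrative only; adjust hosts and credentials for your environment
database:
  engine: django.db.backends.postgresql
  host: my-postgresql.databases.svc.cluster.local
  username: nautobot
  password: "change-me"
config:
  celery:
    broker: 'redis://my-redis.databases.svc.cluster.local:6379/0'
    results: 'redis://my-redis.databases.svc.cluster.local:6379/0'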
Install#
To install this chart, follow these steps.
Create a namespace.
kubectl create ns nautobot
Install the unittest Helm plugin.
helm plugin install https://github.com/helm-unittest/helm-unittest
Run the unit tests.
helm unittest -f 'tests/*.yaml' .
You should see output similar to this.
### Chart [ nautobot ] .

 PASS  nautobot Service Test Suite          tests/service_test.yaml
 PASS  nautobot ServiceAccount Test Suite   tests/serviceaccount_test.yaml
 PASS  nautobot StatefulSet Test Suite      tests/statefulset_test.yaml

Charts:      1 passed, 1 total
Test Suites: 3 passed, 3 total
Tests:       9 passed, 9 total
Snapshot:    0 passed, 0 total
Time:        92.722398ms
Install the chart with Helm.
helm -n nautobot install nautobot .
Run the tests included with Helm.
helm -n nautobot test nautobot
Uninstall#
This can be done in the usual way.
helm -n nautobot uninstall nautobot
Chart#
- apiVersion
API Version
The Helm API version to use for this chart.
apiVersion: v2
- appVersion
App Version
This is the version number of the application being deployed. This version number should be incremented each time you make changes to the application. Versions are not expected to follow Semantic Versioning.
They should reflect the version the application is using.
It is recommended to use it with quotes.
appVersion: "2.3.0"
- description
Description
A brief description of the Chart.
description: A Helm chart that will deploy Nautobot to a Kubernetes cluster.
- icon
Icon
A URL or file path to an icon for the Chart’s application.
icon: file://./_static/img/nautobot.png
- name
Name
The name of the application or library provided by the chart.
name: nautobot
- type
Type
A chart can be either an ‘application’ or a ‘library’ chart.
Application charts are a collection of templates that can be packaged into versioned archives to be deployed.
Library charts provide useful utilities or functions for the chart developer. They’re included as a dependency of application charts to inject those utilities and functions into the rendering pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
- version
Version
This is the chart version. This version number should be incremented each time you make changes to the chart and its templates, including the app version. Versions are expected to follow Semantic Versioning (https://semver.org/).
version: 0.0.3
Values#
- affinity
Affinity
Default values for the Nautobot chart are declared in a YAML-formatted file and passed into the templates. The affinity value below restricts workloads to Linux nodes.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
                - linux
- autoscaling
Autoscaling
Configure horizontal pod autoscaling for the workload.
autoscaling:
  enabled: false
  maxReplicas: 0
  minReplicas: 0
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80
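To turn autoscaling on, an override along these lines should work; the replica bounds are illustrative rather than chart defaults, and this assumes the chart renders a HorizontalPodAutoscaler when enabled.
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80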
- config
Nautobot Config
Configure the Nautobot deployment.
config:
  allowed_hosts: "*"
  caches_backend: django_redis.cache.RedisCache
  celery:
    broker: 'redis://redis.redis.svc.cluster.local:6379/0'
    livenessProbe:
      exec:
        command:
          - /opt/nautobot/bin/nautobot-server
          - celery
          - status
      # When to give up and restart the container
      failureThreshold: 3
      # Delay before the first probe is initiated
      initialDelaySeconds: 30
      # How often to perform the probe
      periodSeconds: 10
      # Minimum consecutive successes for the probe to
      # be considered successful after having failed
      successThreshold: 1
      # When the probe times out
      timeoutSeconds: 5
    img:
      repository: ghcr.io/edwardtheharris/helm-nautobot/celery
      tag: '0.0.2'
    name: celery-config
    results: 'redis://redis.redis.svc.cluster.local:6379/0'
    root: '/opt/celery'
  name: nautobot-config
  root: /opt/nautobot
- database
Database
Configure the relational database connection; both MySQL and PostgreSQL are supported.
database:
  database: nautobot
  engine: django.db.backends.postgresql
  host: postgresql.postgresql.svc.cluster.local
  name: postgres
  password: ""
  port: '5432'
  secretfile: secrets/secrets.yaml
  timeout: '300'
  username: ""
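To use MySQL instead of PostgreSQL, overriding the engine, host, and port should be sufficient; the hostname below is a placeholder and the backend name assumes Django's standard MySQL backend.
database:
  engine: django.db.backends.mysql
  host: mysql.mysql.svc.cluster.local
  port: '3306'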
- fullnameOverride
Full Name Override
Override the full name used for the release’s resources.
fullnameOverride: "nautobot"
- image
Container image settings
Define the repository, tag, and pull policy for the image to be deployed.
image:
  pullPolicy: Always
  repository: ghcr.io/edwardtheharris/helm-nautobot/nautobot
  # Overrides the image tag whose default is the chart appVersion.
  tag: '0.0.2'
  secret:
    name: ghcr
    data: ''
- imagePullSecrets
Image Pull Secrets
Secrets required to pull the deployment image.
imagePullSecrets:
  - name: ghcr
- ingress
Ingress
Configure Ingress for the service.
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  className: "nginx"
  enabled: true
  hosts:
    - host: nautobot.svc.cluster.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  # Example TLS configuration:
  # tls:
  #   - secretName: chart-example-tls
  #     hosts:
  #       - chart-example.local
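For example, to terminate TLS at the ingress, populate the tls list with a secret holding the certificate for the host; the secret name below is a placeholder and must exist in the release namespace (or be provisioned by a tool such as cert-manager).
ingress:
  tls:
    - secretName: nautobot-tls
      hosts:
        - nautobot.svc.cluster.local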
- livenessProbe
Liveness Probe
Set a command to test for liveness.
livenessProbe:
  exec:
    command:
      - /opt/nautobot/bin/nautobot-server
      - celery
      - status
  # When to give up and restart the container
  failureThreshold: 3
  # Delay before the first probe is initiated
  initialDelaySeconds: 30
  # How often to perform the probe
  periodSeconds: 10
  # Minimum consecutive successes for the probe to
  # be considered successful after having failed
  successThreshold: 1
  # When the probe times out
  timeoutSeconds: 5
- nameOverride
Name Override
Override the chart name portion of generated resource names (unlike fullnameOverride, which replaces the full name).
nameOverride: "nautobot"
- nodeSelector
Node Selector
Select nodes for workloads to run on.
nodeSelector:
  kubernetes.io/os: linux
- persistence
Persistence Configuration
Provision persistent storage if required.
persistence:
  name: nautobot-pvc
  size: 10Gi
  storageClass: csi-driver-lvm-linear
- podAnnotations
Pod Annotations
Apply these annotations to all pods.
podAnnotations: {}
- podLabels
Pod Labels
Apply these labels to all pods.
podLabels:
  app: nautobot
- podSecurityContext
Pod Security Context
Define security context for pods.
podSecurityContext: {}
# podSecurityContext:
#   fsGroup: 1000
- replicaCount
Replica Count
Deploy this many replicas by default.
replicaCount: 1
- readinessProbe
Readiness Probe
Set a command to test for readiness.
readinessProbe:
  exec:
    command:
      - pg_isready
      - -U
      - postgres
  failureThreshold: 3     # When to give up, marking the Pod as Unready
  initialDelaySeconds: 5  # Delay before the first probe is initiated, can be shorter than the liveness probe
  periodSeconds: 5        # How often to perform the probe
  successThreshold: 1     # Minimum consecutive successes for the probe to be considered successful
  timeoutSeconds: 1       # When the probe times out
- resources
Resource Requests and Limits
Set requests and limits for workload resources.
resources:
  limits:
    cpu: 2
    memory: 4096Mi
We usually recommend not to specify default resources and to leave this as a conscious choice for the user. This also increases chances charts run on environments with little resources, such as Minikube. If you do want to specify resources, uncomment the following lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi
- securityContext
Security Context
Set a security context on the workloads.
securityContext: {}
# capabilities:
#   drop:
#     - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
- service
Service
Define the service used to access the application.
service:
  enabled: true
  port: 8000
  targetPort: 8000
  type: ClusterIP
- serviceAccount
Service Account
When enabled, a Kubernetes ServiceAccount will be created during deployment.
serviceAccount:
  # Annotations to add to the service account
  annotations:
    sa.kubernetes.io/name: nautobot
  # Automatically mount a ServiceAccount's API credentials?
  automount: true
  # Specifies whether a service account should be created
  create: true
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: "nautobot"
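To reuse an existing ServiceAccount rather than have the chart create one, the usual pattern for charts with these values is the sketch below; this assumes the chart follows that convention, and the account name is a placeholder.
serviceAccount:
  create: false
  name: "my-existing-serviceaccount"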
- sso
Single Sign On
Configuration for Single Sign-On (SSO).
sso:
  social_auth_github_key: ''
  social_auth_github_secret: ''
- superUser
Super User Config
When enabled, a Nautobot superuser account will be created during deployment.
superUser:
  create: true
  email: nautobot@nautobot.svc.cluster.local
  password: ""
  secret:
    name: secret.superuser
  secretKey: '57evlrs^0pmu5#ys=9t6==lf6hdz&$1)qq-(%f1noo_b+nsy@f'
  token: ""
  username: ""
- tolerations
Tolerations
Define tolerations that allow workloads to run on tainted nodes.
tolerations: []
- type
Type
May be set to Deployment or StatefulSet.
type: Deployment
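For example, to run Nautobot as a StatefulSet (which the chart's unit tests also exercise), override the value as below; this sketch assumes no other changes are required.
type: StatefulSet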
- volumeMounts
Volume Mounts
Additional volumeMounts on the output Deployment definition.
volumeMounts: []
# - mountPath: "/mnt/k8s/psql"
#   name: data
#   readOnly: false
# - name: foo
#   mountPath: "/etc/foo"
#   readOnly: true
- volumes
Volumes
Additional volumes on the output Deployment definition.
volumes: []
# - name: foo
#   secret:
#     secretName: mysecret
#     optional: false