Runecast Analyzer

This chart deploys Runecast Analyzer with an nginx frontend proxy and a PostgreSQL backend. For more information about Runecast Analyzer, please visit the Runecast website.

Requirements

Installation and upgrade

To install Runecast Analyzer with the default settings, follow these steps:

  1. Add the Runecast Helm repository to your Helm repository list:

    helm repo add runecast https://helm.runecast.com/charts
  2. Install Runecast Analyzer:

    helm upgrade --install runecast-analyzer runecast/runecast-analyzer

To modify the deployment, use a values.yaml file with the required changes or set individual values directly on the command line. The list of values can be found in the List of values section or displayed by running the following command:

helm show values runecast/runecast-analyzer

You can find several deployment examples in the Examples section.
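
For example, to set the hostname used to reach Runecast from outside of the cluster directly on the command line (rca.example.com is only a placeholder):

helm upgrade --install runecast-analyzer runecast/runecast-analyzer \
             --set global.hostname="rca.example.com"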

OS analysis

OS analysis in Runecast Analyzer requires additional components to be installed. They are not installed by default, to avoid wasting resources if you don't use this capability.

Therefore, in a K8s deployment, additional steps are needed on top of the standard OS analysis configuration process.

The OS analysis service is contacted by OS agents and needs to be accessible from outside of the cluster. There are multiple ways to achieve this, depending on your infrastructure. The main thing to keep in mind is that the endpoint where the OS agents connect needs to be trusted by the agents. This can be achieved either by letting the chart generate a self-signed certificate that is included in the OS installation package (and explicitly trusted by the agents), or by using a certificate issued by a certificate authority that the agents already trust. Both approaches are shown in the examples below.

To deploy the application with OS analysis enabled:

  1. Prepare a hostname (FQDN or IP address) that will be used by the OS agents to reach the OS analysis service. The hostname should resolve to the OS analysis service, which can be exposed on the nodes as a service of type NodePort or LoadBalancer (if an external load balancer is used), or via an Ingress.

  2. Deploy the application using the Helm chart with the value global.osanalysis.enabled set to true and the hostname specified in global.osanalysis.hostname.

  3. To configure the OS analysis, continue in the application. Use the hostname from the previous step in the Runecast Address field.

Please see the examples below for more information.
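
For reference, a minimal values file enabling OS analysis could look like the following sketch; the hostname is a placeholder and the NodePort exposure is only one of the options mentioned above:

global:
  osanalysis:
    enabled: true
    # placeholder - use the hostname the OS agents will connect to
    hostname: example.cluster.k8s
fleet:
  service:
    type: NodePort
    nodePort: 31443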

Upgrade

The upgrade can be performed with the same command as the installation described above. Please see the version-specific upgrade instructions below.

To 0.12.0

This version removes the PostgreSQL migration init container and introduces the default database management account. Before upgrading to this version, please make sure you are currently on chart version 0.11.0 or above.

To 0.11.0

This version includes an upgrade of the PostgreSQL database. Before upgrading to this version, make sure you are currently on chart version 0.9 or above.
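
To check which chart version is currently deployed before upgrading, you can list the release (assuming the release name runecast-analyzer used in the commands above):

helm list --filter runecast-analyzer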

Examples

To find more information about changing deployment values, please see the Values Files section of the Helm User Guide.

Using ingress with SSL termination

The following example shows how to deploy the application and expose it via an Nginx ingress with SSL termination. In production, you will typically use a certificate issued by a trusted certificate authority. If a background process (such as cert-manager) issues the certificates for you, you can skip step 1.

  1. Create the secret runecast-analyzer-ingress-tls from the key and the certificate:

    kubectl create secret tls runecast-analyzer-ingress-tls --key </path/to/key_file> --cert </path/to/cert_file>

    To create a self-signed certificate for testing, you can use the following command:

    openssl req -x509 -nodes -newkey rsa:4096 -days 365 -keyout </path/to/key_file> -out </path/to/cert_file> -subj "/CN=localhost"
  2. Create the my-values.yaml file with the following content:

    global:
      hostname: rca.local.domain
    nginx:
      ingress:
        enabled: true
        tls:
          - secretName: runecast-analyzer-ingress-tls
            hosts:
              - rca.local.domain
  3. Install the application providing the custom my-values.yaml file:

    helm upgrade --install runecast-analyzer runecast/runecast-analyzer -f my-values.yaml
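
To verify the result, you can check that the ingress object was created and that the application answers on the configured hostname (rca.local.domain is the example hostname used above, so adjust it to your environment):

kubectl get ingress
curl -kI https://rca.local.domain/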

Running without persistent data

By default, the application is installed with Persistent Volumes. If you would like to test the application without persisting data, you can disable persistent storage by setting the respective values:

helm upgrade --install runecast-analyzer runecast/runecast-analyzer \
             --set persistence.enabled=false \
             --set postgresql.persistence.enabled=false \
             --set imagescanning.persistence.enabled=false
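
The same settings can be kept in a custom values file instead of being passed on the command line, for example:

# disable data persistence for the app, the database, and image scanning
persistence:
  enabled: false
postgresql:
  persistence:
    enabled: false
imagescanning:
  persistence:
    enabled: false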

Exposing via LoadBalancer on a secure port

  1. Create the secret runecast-analyzer-certificate from the key and the certificate:

    kubectl create secret tls runecast-analyzer-certificate --key </path/to/key_file> --cert </path/to/cert_file>
  2. Install the application providing the custom values on the command line:

    helm upgrade --install runecast-analyzer runecast/runecast-analyzer \
                 --set nginx.service.type="LoadBalancer" \
                 --set nginx.service.tls.enabled=true \
                 --set nginx.service.tls.existingSecretName="runecast-analyzer-certificate"
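
Once the load balancer has been provisioned, the externally reachable address appears in the EXTERNAL-IP column of the service listing:

kubectl get services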

Installing with OS analysis enabled

Exposing the service via a NodePort with a self-signed certificate

The service endpoint certificate will be generated during installation and included when generating the OS installation package. The deployed OS agents will explicitly trust the certificate.

  1. Set up a hostname that will point to the K8s nodes where the OS analysis service will be exposed.

  2. Deploy the application with OS analysis enabled:

    helm upgrade --install runecast-analyzer runecast/runecast-analyzer \
                 --set global.osanalysis.enabled=true \
                 --set global.osanalysis.hostname="example.cluster.k8s" \
                 --set fleet.service.type=NodePort \
                 --set fleet.service.nodePort=31443
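
After the deployment, you can check that the OS analysis endpoint presents its certificate on the chosen node port before configuring the OS agents (example.cluster.k8s and 31443 are the example values used above):

openssl s_client -connect example.cluster.k8s:31443 </dev/null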

Exposing the service via Ingress with a custom certificate

The service is accessed via an ingress, and the ingress certificate is issued by a CA. The ingress secret is either pre-created or created automatically by tools such as cert-manager.

  1. Set up a hostname that will point to the K8s nodes where the ingress controller is running.

  2. Deploy the application with OS analysis enabled:

    helm upgrade --install runecast-analyzer runecast/runecast-analyzer \
                 --set global.osanalysis.enabled=true \
                 --set global.osanalysis.hostname="example.cluster.k8s" \
                 --set fleet.ingress.enabled=true \
                 --set fleet.ingress.tlsSecretName=runecast-analyzer-fleet-ingress-tls
  3. Enable the OS analysis in the application and download the OS installation packages.

  4. Replace the osquery-ca.crt file in the installation packages with the CA chain of your CA.

List of values

Global values and Runecast

Value Default Description
global.imageRepository public.ecr.aws/runecast The repository to pull the images from, applicable to all subcharts.
global.imagePullSecrets [] imagePullSecrets can be used when the repository requires authentication. The global value is used in all subcharts and can be overridden at the chart values level.
global.runecastAnalyzerServiceName runecast-analyzer Runecast Analyzer app service name, which cannot be overridden by the nameOverride and fullnameOverride values. The value is required in the nginx subchart, and due to a Helm limitation, a global value is used.
global.hostname Specify a hostname or IP address that can be used by clients outside of the cluster (users, the vSphere plugin, admission control of other K8s clusters, etc.) to reach Runecast. This value will be used in e-mail reports and other places that show a link to this instance.
global.osanalysis.enabled false Deploy the OS analysis components.
global.osanalysis.hostname Specify a hostname that can be used to reach the K8s nodes where the OS analysis service is running.
global.imagescanning.enabled true Enable the image scanning/validation webhook functionality.
global.podLabels {} Set additional labels to attach to all pods.
global.persistenceLabels {} Set additional labels to attach to all persistent volumes.
image.repository "" The repository to pull the application and psql images from, overrides the global value.
image.tag "" The application image tag whose default is the chart appVersion.
image.psqlTag "14" Sets the psql init container image tag.
imagePullPolicy Always Kubelet image pull policy.
imagePullSecrets [] List of secrets to use when pulling the image.
nameOverride "" Override the objects’ names, using release name as a prefix.
fullnameOverride "" Override the whole objects’ names (release name not used as a prefix).
serviceAccount.annotations {} Annotations to add to the service account.
serviceAccount.name "" The name of the service account to use. If not set and create is true, a name is generated using the name template.
podAnnotations {} Annotations to add to the pod.
podLabels {} Specifies additional labels to attach to the pod, overrides the global.podLabels value.
service.type ClusterIP Kubernetes service type.
service.port 8080 Kubernetes service port.
env [] Additional environment variables to set in the app container.
volumes [] Additional volumes to add to the app container.
volumeMounts [] Additional volumeMounts to add to the app container.
existingSecrets {} Existing secrets to use in the deployment.
existingSecrets.fleet.name Secret name.
existingSecrets.fleet.key Key name in the secret.
existingSecrets.postgresRcTablesOwner.name Secret name.
existingSecrets.postgresRcTablesOwner.key Key name in the secret.
resources.requests.cpu "1" CPU requests.
resources.requests.memory 2Gi Memory requests.
resources.limits.cpu "2" CPU limits.
resources.limits.memory 3Gi Memory limits.
persistence.enabled true Specifies whether to enable the data persistence.
persistence.annotations {} Specifies the persistent volume objects’ annotations.
persistence.labels {} Specifies the persistent volume objects’ labels.
persistence.size 10Gi The size of the persistent volume.
persistence.storageClass "" Use a specific storage class. If not specified, the default is used.
persistence.accessModes [ReadWriteOnce] Specifies the storage access modes, storage provider dependent.
nodeSelector {} Allows scheduling the pod on specific nodes.
affinity {} Allows more control of scheduling the pod on specific nodes.
tolerations [] Another way of controlling where the pod will be scheduled.

Database backend (PostgreSQL)

Value Default Description
postgresql.image.repository "" The repository to pull the postgres and postgres-migrate images from, overrides the global value.
postgresql.image.tag "" Overrides the postgresql image tag whose default is the subchart appVersion.
postgresql.imagePullPolicy Always Kubelet image pull policy.
postgresql.imagePullSecrets [] List of secrets to use when pulling the image.
postgresql.nameOverride "" Override the objects’ names, using release name as a prefix.
postgresql.fullnameOverride "" Override the whole objects’ names (release name not used as a prefix).
postgresql.serviceAccount.create true Specifies whether a service account should be created (default namespace account used if set to false).
postgresql.serviceAccount.annotations {} Annotations to add to the service account.
postgresql.serviceAccount.name "" The name of the service account to use. If not set and create is true, a name is generated using the fullname template.
postgresql.podAnnotations {} Annotations to add to the pod.
postgresql.podLabels {} Specifies additional labels to attach to the pod, overrides the global.podLabels value.
postgresql.service.type ClusterIP Kubernetes service type.
postgresql.service.port 5432 Kubernetes service port.
postgresql.env [] Additional environment variables.
postgresql.volumes [] Additional volumes.
postgresql.volumeMounts [] Additional volumeMounts.
postgresql.existingSecret {} Existing secret to use in the deployment.
postgresql.existingSecret.name Secret name.
postgresql.existingSecret.key Key name in the secret.
postgresql.resources.requests.cpu 100m CPU requests.
postgresql.resources.requests.memory 150M Memory requests.
postgresql.resources.limits.cpu "1" CPU limits.
postgresql.resources.limits.memory 500M Memory limits.
postgresql.persistence.enabled true Specifies whether to enable the data persistence.
postgresql.persistence.annotations {} Specifies the persistent volume objects’ annotations.
postgresql.persistence.labels {} Specifies the persistent volume objects’ labels.
postgresql.persistence.size 10G The size of the persistent volume.
postgresql.persistence.storageClass "" Use a specific storage class. If not specified, the default is used.
postgresql.persistence.accessModes [ReadWriteOnce] Specifies the storage access modes, storage provider dependent.
postgresql.nodeSelector {} Allows scheduling the pod on specific nodes.
postgresql.affinity {} Allows more control of scheduling the pod on specific nodes.
postgresql.tolerations [] Another way of controlling where the pod will be scheduled.

Reverse proxy frontend (nginx)

Value Default Description
nginx.image.repository "" The repository to pull the nginx image from, overrides the global value.
nginx.image.tag "" Overrides the image tag whose default is the subchart appVersion.
nginx.imagePullPolicy Always Kubelet image pull policy.
nginx.imagePullSecrets [] List of secrets to use when pulling the image.
nginx.nameOverride "" Override the objects’ names, using release name as a prefix.
nginx.fullnameOverride "" Override the whole objects’ names (release name not used as a prefix).
nginx.serviceAccount.create true Specifies whether a service account should be created (default namespace account used if set to false).
nginx.serviceAccount.annotations {} Annotations to add to the service account.
nginx.serviceAccount.name "" The name of the service account to use. If not set and create is true, a name is generated using the fullname template.
nginx.podAnnotations {} Annotations to add to the pod.
nginx.podLabels {} Specifies additional labels to attach to the pod, overrides the global.podLabels value.
nginx.service.type ClusterIP Kubernetes service type.
nginx.service.port 9080 Kubernetes service port.
nginx.service.tls.enabled false Enables secure connection to the service.
nginx.service.tls.existingSecretName The name of the existing secret where the certificate and key are stored. If not defined, a certificate will be generated automatically.
nginx.service.tls.ssl_protocols TLSv1.3 TLSv1.2; Nginx SSL protocols setting.
nginx.service.tls.ssl_ciphers HIGH:!aNULL:!MD5:!SHA1:!SHA256:!SHA384; Nginx SSL ciphers setting.
nginx.ingress.enabled false Specifies whether to enable ingress for accessing the application.
nginx.ingress.annotations {} Specifies the ingress object annotations.
nginx.ingress.tls [] Specifies whether to use secure connection to the ingress (recommended). Settings depend on the underlying infrastructure.
nginx.ingress.tls.secretName "" The name of the secret where certificate and key are stored.
nginx.ingress.tls.hosts [] List of hosts names.
nginx.resources.requests.cpu 100m CPU requests.
nginx.resources.requests.memory 100M Memory requests.
nginx.resources.limits.cpu 500m CPU limits.
nginx.resources.limits.memory 500M Memory limits.
nginx.nodeSelector {} Allows scheduling the pod on specific nodes.
nginx.affinity {} Allows more control of scheduling the pod on specific nodes.
nginx.tolerations [] Another way of controlling where the pod will be scheduled.

OS analysis

Value Default Description
fleet.image.repository "" The repository to pull the fleet image from, overrides the global value.
fleet.image.tag "" Overrides the image tag whose default is the subchart appVersion.
fleet.imagePullPolicy Always Kubelet image pull policy.
fleet.imagePullSecrets [] List of secrets to use when pulling the image.
fleet.nameOverride "" Override the objects’ names, using release name as a prefix.
fleet.fullnameOverride "" Override the whole objects’ names (release name not used as a prefix).
fleet.serviceAccount.create true Specifies whether a service account should be created (default namespace account used if set to false).
fleet.serviceAccount.annotations {} Annotations to add to the service account.
fleet.serviceAccount.name "" The name of the service account to use. If not set and create is true, a name is generated using the fullname template.
fleet.podAnnotations {} Annotations to add to the pod.
fleet.podLabels {} Specifies additional labels to attach to the pod, overrides the global.podLabels value.
fleet.service.type ClusterIP Kubernetes service type. Set to NodePort or LoadBalancer to access the OS analysis service from outside of the cluster without an ingress.
fleet.service.port 8443 Kubernetes service port.
fleet.service.nodePort "" Kubernetes node port used to expose the service on the nodes.
fleet.ingress.enabled false Use ingress to expose OS analysis endpoint to OS agents. The global.osanalysis.hostname is used as the ingress hostname.
fleet.ingress.annotations {} Specifies the ingress object annotations.
fleet.ingress.tlsSecretName "" The name of the secret where the certificate and key are stored. Can be omitted if the certificate is handled by the ingress controller.
fleet.env [] Additional environment variables.
fleet.volumes [] Additional volumes.
fleet.volumeMounts [] Additional volumeMounts.
fleet.resources.requests.cpu 100m CPU requests.
fleet.resources.requests.memory 100M Memory requests.
fleet.resources.limits.cpu "1" CPU limits.
fleet.resources.limits.memory 4G Memory limits.
fleet.nodeSelector {} Allows scheduling the pod on specific nodes.
fleet.affinity {} Allows more control of scheduling the pod on specific nodes.
fleet.tolerations [] Another way of controlling where the pod will be scheduled.
fleet.mysql.image.repository "" The repository to pull the mysql and busybox images from, overrides the global value.
fleet.mysql.image.tag "" Overrides the mysql image tag whose default is the subchart appVersion.
fleet.mysql.image.busyboxTag "1.35.0" Sets the busybox image tag.
fleet.mysql.imagePullPolicy Always Kubelet image pull policy.
fleet.mysql.imagePullSecrets [] List of secrets to use when pulling the image.
fleet.mysql.nameOverride "" Override the objects’ names, using release name as a prefix.
fleet.mysql.fullnameOverride "" Override the whole objects’ names (release name not used as a prefix).
fleet.mysql.serviceAccount.create true Specifies whether a service account should be created (default namespace account used if set to false).
fleet.mysql.serviceAccount.annotations {} Annotations to add to the service account.
fleet.mysql.serviceAccount.name "" The name of the service account to use. If not set and create is true, a name is generated using the fullname template.
fleet.mysql.podAnnotations {} Annotations to add to the pod.
fleet.mysql.podLabels {} Specifies additional labels to attach to the pod, overrides the global.podLabels value.
fleet.mysql.service.type ClusterIP Kubernetes service type.
fleet.mysql.service.port 3306 Kubernetes service port.
fleet.mysql.env [] Additional environment variables.
fleet.mysql.volumes [] Additional volumes.
fleet.mysql.volumeMounts [] Additional volumeMounts.
fleet.mysql.existingSecret {} Existing secret to use in the deployment.
fleet.mysql.existingSecret.name Secret name.
fleet.mysql.existingSecret.key Key name in the secret.
fleet.mysql.existingSecret.rootKey Root key name in the secret.
fleet.mysql.resources.requests.cpu 100m CPU requests.
fleet.mysql.resources.requests.memory 256Mi Memory requests.
fleet.mysql.resources.limits.cpu 500m CPU limits.
fleet.mysql.resources.limits.memory 500M Memory limits.
fleet.mysql.persistence.enabled true Specifies whether to enable the data persistence.
fleet.mysql.persistence.annotations {} Specifies the persistent volume objects’ annotations.
fleet.mysql.persistence.labels {} Specifies the persistent volume objects’ labels.
fleet.mysql.persistence.size 10G The size of the persistent volume.
fleet.mysql.persistence.storageClass "" Use a specific storage class. If not specified, the default is used.
fleet.mysql.persistence.accessModes [ReadWriteOnce] Specifies the storage access modes, storage provider dependent.
fleet.mysql.nodeSelector {} Allows scheduling the pod on specific nodes.
fleet.mysql.affinity {} Allows more control of scheduling the pod on specific nodes.
fleet.mysql.tolerations [] Another way of controlling where the pod will be scheduled.
fleet.redis.image.repository "" The repository to pull the redis image from, overrides the global value.
fleet.redis.image.tag "" Overrides the image tag whose default is the subchart appVersion.
fleet.redis.imagePullPolicy Always Kubelet image pull policy.
fleet.redis.imagePullSecrets [] List of secrets to use when pulling the image.
fleet.redis.nameOverride "" Override the objects’ names, using release name as a prefix.
fleet.redis.fullnameOverride "" Override the whole objects’ names (release name not used as a prefix).
fleet.redis.serviceAccount.create true Specifies whether a service account should be created (default namespace account used if set to false).
fleet.redis.serviceAccount.annotations {} Annotations to add to the service account.
fleet.redis.serviceAccount.name "" The name of the service account to use. If not set and create is true, a name is generated using the fullname template.
fleet.redis.podAnnotations {} Annotations to add to the pod.
fleet.redis.podLabels {} Specifies additional labels to attach to the pod, overrides the global.podLabels value.
fleet.redis.service.type ClusterIP Kubernetes service type.
fleet.redis.service.port 6379 Kubernetes service port.
fleet.redis.resources.requests.cpu 100m CPU requests.
fleet.redis.resources.requests.memory 256Mi Memory requests.
fleet.redis.resources.limits.cpu 500m CPU limits.
fleet.redis.resources.limits.memory 500M Memory limits.
fleet.redis.nodeSelector {} Allows scheduling the pod on specific nodes.
fleet.redis.affinity {} Allows more control of scheduling the pod on specific nodes.
fleet.redis.tolerations [] Another way of controlling where the pod will be scheduled.

Image Scanning

Value Default Description
imagescanning.image.repository "" The repository to pull the imagescanning image from, overrides the global value.
imagescanning.image.tag "" Overrides the image tag whose default is the subchart appVersion.
imagescanning.imagePullPolicy Always Kubelet image pull policy.
imagescanning.imagePullSecrets [] List of secrets to use when pulling the image.
imagescanning.nameOverride "" Override the objects’ names, using release name as a prefix.
imagescanning.fullnameOverride "" Override the whole objects’ names (release name not used as a prefix).
imagescanning.serviceAccount.create true Specifies whether a service account should be created (default namespace account used if set to false).
imagescanning.serviceAccount.annotations {} Annotations to add to the service account.
imagescanning.serviceAccount.name "" The name of the service account to use. If not set and create is true, a name is generated using the fullname template.
imagescanning.podAnnotations {} Annotations to add to the pod.
imagescanning.podLabels {} Specifies additional labels to attach to the pod, overrides the global.podLabels value.
imagescanning.resources.requests.cpu 100m CPU requests.
imagescanning.resources.requests.memory 500M Memory requests.
imagescanning.resources.limits.cpu 1 CPU limits.
imagescanning.resources.limits.memory 2Gi Memory limits.
imagescanning.persistence.enabled true Specifies whether to enable the data persistence (persists imagescanning cache through pod restarts).
imagescanning.persistence.annotations {} Specifies the persistent volume objects’ annotations.
imagescanning.persistence.labels {} Specifies the persistent volume objects’ labels.
imagescanning.persistence.size 500M The size of the persistent volume.
imagescanning.persistence.storageClass "" Use a specific storage class. If not specified, the default is used.
imagescanning.persistence.accessModes [ReadWriteOnce] Specifies the storage access modes, storage provider dependent.
imagescanning.nodeSelector {} Allows scheduling the pod on specific nodes.
imagescanning.affinity {} Allows more control of scheduling the pod on specific nodes.
imagescanning.tolerations [] Another way of controlling where the pod will be scheduled.

Usage

To find out how to configure and use the product, including the required permissions for scanning a Kubernetes cluster, please follow the official documentation available at https://docs.runecast.com/.