Version: v1.8.0

Quick start

For information about the compatibility of operator features with Kubernetes and Apache NiFi versions, have a look at the version compatibility page.

Getting Started

Cluster Setup

For local testing we recommend following one of these setup guides:

Install kubectl

If you do not already have the kubectl CLI tool installed, please follow these instructions to install it.

Configure kubectl

Configure kubectl to connect to your cluster by using kubectl config use-context my-cluster-name.

  • For GKE
    • Configure gcloud with gcloud auth login.
    • On the Google Cloud Console, the cluster page will have a Connect button, which will give a command to run locally that looks like
    gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE_NAME --project PROJECT_NAME.
    • Use kubectl config get-contexts to show the contexts available.
    • Run kubectl config use-context ${gke context} to access the cluster from kubectl.
  • For EKS
    • Configure your AWS CLI to connect to your project.
    • Install eksctl
    • Run eksctl utils write-kubeconfig --cluster=${CLUSTER NAME} to make the context available to kubectl
    • Use kubectl config get-contexts to show the contexts available.
    • Run kubectl config use-context ${eks context} to access the cluster with kubectl.

Install cert-manager

The NiFiKop operator uses cert-manager for issuing certificates to users and nodes, so you'll need to have it set up if you want to deploy a secured cluster with authentication enabled. The minimum supported cert-manager version is v1.0.

# Install the CustomResourceDefinitions and cert-manager itself
kubectl apply -f \
https://github.com/jetstack/cert-manager/releases/download/v1.7.2/cert-manager.yaml

Deploy NiFiKop

You can deploy the operator using a Helm chart:

To install another version of the operator, use helm install --name=nifikop --namespace=nifi --set operator.image.tag=x.y.z konpyutaika-incubator/nifikop

If you don't want to deploy the CRDs using Helm (--skip-crds), you have to deploy them manually:

kubectl apply -f https://raw.githubusercontent.com/konpyutaika/nifikop/master/config/crd/bases/nifi.konpyutaika.com_nificlusters.yaml
kubectl apply -f https://raw.githubusercontent.com/konpyutaika/nifikop/master/config/crd/bases/nifi.konpyutaika.com_nifiusers.yaml
kubectl apply -f https://raw.githubusercontent.com/konpyutaika/nifikop/master/config/crd/bases/nifi.konpyutaika.com_nifiusergroups.yaml
kubectl apply -f https://raw.githubusercontent.com/konpyutaika/nifikop/master/config/crd/bases/nifi.konpyutaika.com_nifidataflows.yaml
kubectl apply -f https://raw.githubusercontent.com/konpyutaika/nifikop/master/config/crd/bases/nifi.konpyutaika.com_nifiparametercontexts.yaml
kubectl apply -f https://raw.githubusercontent.com/konpyutaika/nifikop/master/config/crd/bases/nifi.konpyutaika.com_nifiregistryclients.yaml
kubectl apply -f https://raw.githubusercontent.com/konpyutaika/nifikop/master/config/crd/bases/nifi.konpyutaika.com_nifinodegroupautoscalers.yaml
kubectl apply -f https://raw.githubusercontent.com/konpyutaika/nifikop/master/config/crd/bases/nifi.konpyutaika.com_nificonnections.yaml
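The eight kubectl apply commands above differ only in the CRD name, so they can be collapsed into a loop. This sketch only prints the manifest URLs; replace echo with kubectl apply -f to actually install them:

```shell
# Base path of the CRD manifests on the master branch
base=https://raw.githubusercontent.com/konpyutaika/nifikop/master/config/crd/bases
for crd in nificlusters nifiusers nifiusergroups nifidataflows \
           nifiparametercontexts nifiregistryclients \
           nifinodegroupautoscalers nificonnections; do
  # Swap 'echo' for 'kubectl apply -f' to install each CRD
  echo "$base/nifi.konpyutaika.com_${crd}.yaml"
done
```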

Conversion webhook

If you keep the conversion webhook enabled (to handle the conversion of resources from v1alpha1 to v1), you will need to add the following settings to the YAML definition of your CRDs:

...
annotations:
  cert-manager.io/inject-ca-from: ${namespace}/${certificate_name}
...
spec:
  ...
  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        service:
          namespace: ${namespace}
          name: ${webhook_service_name}
          path: /convert
      conversionReviewVersions:
        - v1
        - v1alpha1
...

Where:

  • namespace: the namespace in which you will deploy your Helm chart.
  • certificate_name: ${helm release name}-webhook-cert
  • webhook_service_name: ${helm release name}-webhook-cert
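As a concrete illustration, with the hypothetical values of a release named nifikop deployed in the nifi namespace, and the names derived from the substitutions listed above, the patch on each CRD would look like:

```yaml
# Hypothetical values: release name "nifikop", namespace "nifi"
metadata:
  annotations:
    cert-manager.io/inject-ca-from: nifi/nifikop-webhook-cert
spec:
  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        service:
          namespace: nifi
          name: nifikop-webhook-cert
          path: /convert
      conversionReviewVersions:
        - v1
        - v1alpha1
```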

Now deploy the helm chart:

# You have to create the namespace before executing following command
helm install nifikop \
oci://ghcr.io/konpyutaika/helm-charts/nifikop \
--namespace=nifi \
--version 1.8.0 \
--set image.tag=v1.8.0-release \
--set resources.requests.memory=256Mi \
--set resources.requests.cpu=250m \
--set resources.limits.memory=256Mi \
--set resources.limits.cpu=250m \
--set namespaces={"nifi"}
note

Add the following parameter if you are using this instance to deploy only unsecured clusters: --set certManager.enabled=false
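The namespaces value passed to --set is a Helm list: braces enclose comma-separated entries. Since many shells give braces special meaning, it is safest to quote the whole value. A small bash sketch (the extra namespace name is a made-up example) just prints the flag to show the quoting:

```shell
# Single quotes keep the braces away from shell brace expansion;
# comma-separate multiple namespaces inside the braces.
echo --set 'namespaces={nifi,nifi-dev}'
```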

On OpenShift

According to the OpenShift documentation, the restricted SCC "denies access to all host features and requires pods to be run with a UID, and SELinux context that are allocated to the namespace." So in order to deploy NiFiKop on OpenShift, we need to get the openshift.io/sa.scc.uid-range annotation of the namespace that we will deploy NiFiKop into.

Get the uid for the nifi namespace:

uid=$(kubectl get namespace nifi -o=jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}' | sed 's/\/10000$//' | tr -d '[:space:]')
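The pipeline above simply strips the fixed /10000 range suffix and any whitespace from the annotation value, leaving the starting UID. Here it is applied to a sample value (the annotation string is a made-up example of the usual "<start-uid>/<range-size>" form):

```shell
# Hypothetical annotation value as found on an OpenShift namespace
annotation='1000620000/10000'
# Drop the "/10000" suffix and any stray whitespace, leaving the start UID
uid=$(echo "$annotation" | sed 's/\/10000$//' | tr -d '[:space:]')
echo "$uid"   # → 1000620000
```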

Set RunAsUser on install with helm:

helm install nifikop \
oci://ghcr.io/konpyutaika/helm-charts/nifikop \
--namespace=nifi \
--version 1.8.0 \
--set image.tag=v1.8.0-release \
--set resources.requests.memory=256Mi \
--set resources.requests.cpu=250m \
--set resources.limits.memory=256Mi \
--set resources.limits.cpu=250m \
--set namespaces={"nifi"} \
--set runAsUser=$uid