I have a NiFi 2.0.0 cluster (3 nodes) running in Kubernetes, deployed with NiFiKop (https://github.com/konpyutaika/nifikop). So far it is working very nicely, and I love the new dark mode in the UI!  :)

Here is the configuration YAML I used, in case it helps anyone; comments are appreciated:

apiVersion: nifi.konpyutaika.com/v1
kind: NifiCluster
metadata:
  name: testcluster
spec:
  service:
    headlessEnabled: true
    annotations:
      tyty: ytyt
    labels:
      cluster-name: testcluster
      tete: titi
  externalServices:
    - metadata:
        annotations:
          toto: tata
        labels:
          cluster-name: driver-testcluster
          titi: tutu
      name: driver-ip
      spec:
        portConfigs:
          - internalListenerName: http
            port: 8080
        type: LoadBalancer
  clusterImage: "apache/nifi:2.0.0"
  initContainerImage: "bash:5.2.2"
  oneNifiNodePerNode: true
  readOnlyConfig:
    nifiProperties:
      overrideConfigs: |
        nifi.sensitive.props.key=changeMechangeMe
        nifi.sensitive.props.algorithm=NIFI_PBKDF2_AES_GCM_256
        nifi.flowfile.repository.checkpoint.interval=2 mins
        nifi.flowfile.repository.always.sync=false
        nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
        nifi.queue.swap.threshold=200000
        nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
        nifi.content.claim.max.appendable.size=10 MB
        nifi.content.repository.directory.default=../content_repository
        nifi.content.repository.archive.max.retention.period=1 days
        nifi.content.repository.archive.max.usage.percentage=50%
        nifi.content.repository.archive.backpressure.percentage=60%
        nifi.content.repository.archive.enabled=false
        nifi.content.repository.always.sync=false
        nifi.content.viewer.url=/nifi-content-viewer/
        nifi.content.repository.archive.cleanup.frequency=300 sec
        nifi.provenance.repository.directory.default=../provenance_repository
        nifi.provenance.repository.max.storage.time=5 days
        nifi.provenance.repository.rollover.time=300 secs
        nifi.provenance.repository.rollover.size=256 MB
        nifi.provenance.repository.query.threads=2
        nifi.provenance.repository.index.threads=3
        nifi.provenance.repository.compress.on.rollover=false
        nifi.provenance.repository.always.sync=false
        nifi.provenance.repository.journal.count=16
        nifi.provenance.repository.index.shard.size=1024 MB
        nifi.provenance.repository.max.attribute.length=128
        nifi.ui.banner.text=Fireside_NiFi
        nifi.nar.library.autoload.directory=/opt/nifi/nifi-current/nar_extensions

    bootstrapProperties:
      nifiJvmMemory: "38g"
  pod:
    annotations:
      toto: tata
    labels:
      cluster-name: testcluster
      titi: tutu
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 600
      timeoutSeconds: 60
      periodSeconds: 120
      successThreshold: 1
      failureThreshold: 5
    readinessProbe:
      exec:
        command:
            - bash
            - '-c'
            - curl -fkv http://$(hostname -f):8080/nifi-api
      initialDelaySeconds: 600
      timeoutSeconds: 60
      periodSeconds: 120
      successThreshold: 1
      failureThreshold: 5
  nodeConfigGroups:
    default_group:
      imagePullPolicy: IfNotPresent
      isNode: true
      serviceAccountName: default
      provenanceStorage: "8000 GB"
      externalVolumeConfigs:
        - name: shared-storage
          nfs:
            server: hdfs.querymasters.com
            path: /ifs/data/nifi
            readOnly: false
          mountPath: /mnt
      storageConfigs:
        - mountPath: "/opt/nifi/data"
          name: data
          reclaimPolicy: Delete
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "netapp-ssd"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/content_repository"
          name: content
          reclaimPolicy: Delete
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "netapp-ssd"
            resources:
              requests:
                storage: 100Gi
        - mountPath: "/opt/nifi/flowfile_repository"
          name: flowfile
          reclaimPolicy: Delete
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "netapp-ssd"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/provenance_repository"
          name: provenance
          reclaimPolicy: Delete
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "netapp-ssd"
            resources:
              requests:
                storage: 8192Gi
        - mountPath: "/opt/nifi/nifi-current/nar_extensions"
          name: extensions
          reclaimPolicy: Delete
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "netapp-hdd"
            resources:
              requests:
                storage: 1Gi
      resourcesRequirements:
        limits:
          cpu: "12"
          memory: 52Gi
        requests:
          cpu: "3"
          memory: 49Gi
  nodes:
    - id: 1
      nodeConfigGroup: "default_group"
    - id: 2
      nodeConfigGroup: "default_group"
    - id: 3
      nodeConfigGroup: "default_group"
  propagateLabels: true
  nifiClusterTaskSpec:
    retryDurationMinutes: 10
  listenersConfig:
    internalListeners:
      - containerPort: 8080
        type: http
        name: http
      - containerPort: 6007
        type: cluster
        name: cluster
      - containerPort: 10000
        type: s2s
        name: s2s
      - containerPort: 9090
        type: prometheus
        name: prometheus
      - containerPort: 6342
        type: load-balance
        name: load-balance

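For anyone trying this out, deploying is just a matter of applying the custom resource. A minimal sketch, assuming the manifest above is saved as testcluster.yaml and the NiFiKop operator is already installed in the target namespace (both assumptions, adjust to your setup):

```shell
# Apply the NifiCluster custom resource (assumes NiFiKop is already running)
kubectl apply -f testcluster.yaml

# Check the cluster resource status as the operator reconciles it
kubectl get nificlusters testcluster

# Watch the per-node pods come up; with oneNifiNodePerNode: true,
# each of the 3 node IDs lands on a separate Kubernetes node
kubectl get pods -w | grep testcluster
```

Note that the pods can take a while to go ready, which is why the probes above use a 600-second initial delay.
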
-Joe

On 11/6/2024 2:51 AM, Dirk Olmes wrote:
On 11/5/24 22:24, Joe Obernberger wrote:
Looking forward to working with 2.0!  Are there instructions / helm chart / a nice way to deploy a NiFi cluster to Kubernetes? We're using 1.25 with the cetic helm chart (https://github.com/cetic/helm-nifi), which also deploys ZooKeeper.  It's my understanding that it's no longer necessary, as NiFi will now use Kubernetes for the same purpose.

Joe, you're touching an interesting topic here. We have just introduced Nifi at work using the cetic helm chart but had to learn along the way that it is no longer maintained. We had to fork it to add some settings required for our LDAP authentication so I already got my feet wet with it a bit.

However, building a new helm chart for Nifi 2.0 looks like quite an endeavour, too much for me alone. I've seen others mention helm/kubernetes deployments here before so maybe there's interest in working together on a Nifi 2.x helm chart?

-dirk

