Nathan

Not sure what read/write rates you'll get out of those RAID-10 configs, but
generally that should be fine (hundreds of MB/sec per node, at least).
Whereas right now you're seeing about 20 MB/sec per node, which is
definitely very low.
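
If you want a quick sanity check that the disks themselves can do far more
than 20 MB/sec, a rough sketch like the one below (the path and sizes are
placeholders; point it at whichever mount holds the content repo) will give
you a ballpark sequential write number to compare against what NiFi is
achieving.  A proper tool like fio or dd will give more realistic figures;
this is just a quick comparison point.

import os
import time

# Placeholder path: point this at the content repository mount (e.g. /data/2).
TEST_FILE = "/data/2/throughput_test.bin"
BLOCK = b"\0" * (8 * 1024 * 1024)   # 8 MiB per write
TOTAL_BLOCKS = 256                  # ~2 GiB written in total

start = time.time()
fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
for _ in range(TOTAL_BLOCKS):
    os.write(fd, BLOCK)
os.fsync(fd)                        # make sure the data actually hit the disk
os.close(fd)
elapsed = time.time() - start

mb_per_sec = len(BLOCK) * TOTAL_BLOCKS / 1e6 / elapsed
print("sequential write: %.1f MB/sec" % mb_per_sec)
os.remove(TEST_FILE)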

If you review
http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-2-6-nar/1.12.0/org.apache.nifi.processors.kafka.pubsub.ConsumeKafkaRecord_2_6/index.html
you'll see that we do capture attributes such as kafka.topic and so on, and
flowfiles are properly grouped by topic.  What I'm not positive about is
whether it can handle reading from multiple topics at the same time while
also determining and honoring each topic's distinct schema.  That scenario
would need to be tested/verified to be sure.  If you do have a bunch of
topics and they could grow/change, then keeping the single-processor
approach makes sense.  If you can go the route of one ConsumeKafkaRecord
per topic then obviously that would work well.
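
To illustrate the per-topic schema selection you'd need (the same 1:1
topic-to-schema lookup your AvroSchemaRegistry holds), here is a rough
sketch outside NiFi using kafka-python and fastavro.  The topic names,
schema file paths and broker address are made up, and it assumes the
messages are raw/schemaless Avro as in your current setup:

import io
import json
from fastavro import parse_schema, schemaless_reader
from kafka import KafkaConsumer

# Hypothetical 1:1 mapping of topic name -> Avro schema file.
SCHEMA_FILES = {
    "topic-a": "schemas/topic-a.avsc",
    "topic-b": "schemas/topic-b.avsc",
}
schemas = {}
for topic, path in SCHEMA_FILES.items():
    with open(path) as f:
        schemas[topic] = parse_schema(json.load(f))

consumer = KafkaConsumer(*SCHEMA_FILES,
                         bootstrap_servers="localhost:9092",
                         group_id="schema-check")

for msg in consumer:
    # Pick the schema based on the topic the record came from.
    record = schemaless_reader(io.BytesIO(msg.value), schemas[msg.topic])
    print(msg.topic, record)

Inside NiFi, the analogous thing to test would be an Avro reader whose
schema name is driven by ${kafka.topic} against your AvroSchemaRegistry,
but whether that resolves correctly inside a single multi-topic
ConsumeKafkaRecord is exactly the scenario I'd want verified first.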

Without seeing your flow, I can't be certain where the bottleneck(s) exist
or give specific guidance.  But finding them is without a doubt a vital
skill for achieving maximum performance.

You'd have to show/share a lot more detail, or explain the end-to-end flow,
for folks here to be able to help walk through the full design.

As additional food for thought: if the flows are indeed 'from kafka -> do
stuff -> back to kafka', this is likely a great use case for stateless NiFi.

Thanks

On Wed, Sep 23, 2020 at 10:43 AM <nathan.engl...@bt.com> wrote:

> Hi Joe,
>
>
>
> Thanks for getting back to me so quickly.
>
>
>
> Our disk setup is as follows:
>
> Path       Storage Type                         Format   Capacity   Content
> /          100GB OS SSD                         ext4     89.9GB     OS, NiFi install, logs
> /data/1/   2 x 4TB SAS hard drives in RAID 1    ext4     3.7TB      Database and FlowFile repos
> /data/2/   8 x 4TB SAS hard drives in RAID 10   ext4     14.6TB     Content repo
> /data/3/   2 x 4TB SAS hard drives in RAID 1    ext4     3.7TB      Provenance repo
> /ssd       1 x 4TB PCIe NVMe SSD                ext4     3.7TB      Content repo (used instead of /data/2/
>                                                                     as a test, to see if disk operations
>                                                                     were the bottleneck rather than CPU)
>
>
>
> I will certainly take a look at those. One question with the consume
> record processor: how would I consume from multiple topics and ensure the
> correct Avro schema is used to deserialise each message? We have a 1:1
> mapping of schemas to topics. At the moment the ConsumeKafka processor is
> reading from all topics in one consumer. I'm assuming the kafka.topic
> attribute doesn't exist at that stage? We use the Avro Schema Registry
> controller service as we don't have a schema registry in place yet.
>
>
>
> Kind Regards,
>
>
>
> Nathan
>
>
>
> *From:* Joe Witt [mailto:joe.w...@gmail.com]
> *Sent:* 23 September 2020 17:33
> *To:* users@nifi.apache.org
> *Subject:* Re: NiFi V1.9.2 Performance
>
>
>
> Nathan
>
>
>
> You have plenty of horsepower in those machines to hit super high speeds,
> but what I cannot tell is how the disks are set up, what they're capable
> of, and how they're laid out relative to our three repos of importance.
> You'll need to share those details.
>
>
>
> That said, the design of the flow matters.  The Kafka processors that
> aren't record oriented will perform poorly unless they're acquiring data
> in its natural batches as it arrives from Kafka.  In short, use the record
> oriented Kafka processors.  With them you can even handle the fact that
> you want to go from Avro to JSON and so on.  These processors have a
> tougher learning curve, but they perform extremely well and we have
> powerful processors to go along with them for common patterns.
>
>
>
> You absolutely should be able to get to the big numbers you have seen.  It
> requires great flow design (powerful machines are secondary).
>
>
>
> Thanks
>
>
>
> On Wed, Sep 23, 2020 at 9:26 AM <nathan.engl...@bt.com> wrote:
>
> Hi All,
>
>
>
> We've got a 3-node NiFi cluster running on 3 x 40 CPU, 256GB RAM (32GB
> Java heap) servers. However, we have only been able to achieve a
> consumption rate of ~9.48GB compressed (38.53GB uncompressed) over 5
> minutes, with a production rate of ~16.84GB out of the cluster over 5
> minutes. This is much lower than we were expecting based on what we have
> read. At this throughput we see a CPU load of ~32 on all nodes, so we know
> there isn't much more we can get out of the CPU.
>
>
>
> We have also tried SSDs, and both RAIDed and un-RAIDed HDDs, for the
> content repo storage, but they haven't made a difference to the amount we
> can process.
>
>
>
> The process is as follows:
>
> 1. Our flow reads from Kafka compressed (a maximum of 2000 records per
> file), then converts them from Avro to JSON (ConsumeKafka_0_10 ->
> UpdateAttribute -> ConvertRecord).
>
> 2. Depending on which topic the flow file was consumed from, we then send
> the message to one of 10 potential process groups, each containing between
> 3 and 5 processors (RouteOnAttribute -> relevant process group containing
> JoltTransformJSON and several custom processors we have made).
>
> 3. Finally, we produce the flow file content back to one of several Kafka
> topics, based on the input topic name, in Avro format with Snappy
> compression on the Kafka topic.
>
>
>
> Inspecting the queued message counts indicates that the Jolt transforms
> are taking the most time to process (large queues before the
> JoltTransformJSON processors, small or no queues afterwards). But I'm not
> sure why these should be any worse than the rest of the processors, as the
> event duration is less than a second when inspected in provenance. We have
> tuned the number of concurrent tasks, run durations and schedules to get
> the performance we have so far.
>
>
>
> Is there anything anyone could recommend or suggest we try to make
> improvements? We need to achieve a rate around 5x what the cluster is
> currently processing with the same number of nodes. We are running out of
> ideas on how to accomplish this and may have to consider alternatives.
>
>
>
> Kind Regards,
>
>
>
> Nathan
>
>
