Hi Henrique
Did you increase the OS UDP receive buffer size? By default it is far too
small on Linux:
sysctl -w net.core.rmem_max=16777216
With netstat -su (the "Udp:" section, in particular the "packet receive
errors" counter) you can check whether you have UDP buffer issues…
Cheers Josef
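To see from the JVM side what buffer the OS actually grants, a standalone sketch like the one below can help (this is not part of the NiFi flow; the 16 MB figure is an assumption matching the sysctl above). Linux silently caps the effective SO_RCVBUF at net.core.rmem_max, so reading the value back shows what was really allocated:

```java
import java.net.DatagramSocket;
import java.net.SocketException;

public class UdpBufferCheck {
    public static void main(String[] args) throws SocketException {
        try (DatagramSocket socket = new DatagramSocket()) {
            // Request a 16 MB receive buffer. Linux caps the effective
            // SO_RCVBUF at net.core.rmem_max, so reading the value back
            // shows what the OS really granted.
            socket.setReceiveBufferSize(16 * 1024 * 1024);
            System.out.println("Effective SO_RCVBUF: "
                    + socket.getReceiveBufferSize() + " bytes");
        }
    }
}
```

If the printed value is far below what was requested, the sysctl change has not taken effect.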
From: Henrique Nascimento
Hi all,
Thanks in advance for any help.
I have a ListenUDP processor receiving Syslog messages, but when I
compare the processor's output with a Linux tcpdump capture, I notice that
the processor drops a lot of the data, with no warning/error. (Well, it's
UDP...)
I already tried a lot of
https://calcite.apache.org/docs/reference.html
On Wed, Sep 23, 2020 at 3:24 PM Mike Thomsen wrote:
Asmath,
I would check the Apache Calcite docs to see what syntax is supported.
I ran into a minor head-scratcher there as well a few months ago when
some date function I was expecting turned out not to be implemented
yet.
Mike
On Wed, Sep 23, 2020 at 3:04 PM KhajaAsmath Mohammed wrote:
Hi,
I am looking for information on how to check the datatypes of the data and
load/transform it accordingly. I am okay with using any other processor too.
My requirements:
- Check if the column is an Integer; if it is an integer, load it to the _INT
column, else a null value.
- Check if the column length is > 256; if more than 256
Nathan
Not sure what read/write rates you'll get in these RAID-10 configs but
generally this seems like it should be fine (100s of MB/sec per node range
at least). Whereas now you're seeing about 20MB/sec/node. This is
definitely very low.
If you review
Hi Joe,
Thanks for getting back to me so quickly.
Our disk setup is as follows:
Path     | Storage Type                      | Format | Capacity | Content
/        | 100GB OS SSD                      | ext4   | 89.9GB   | OS, NiFi install, Logs
/data/1/ | 2 x 4TB SAS Hard Drives in RAID 1 | ext4   | 3.7TB    | Database and Flowfile Repos
/data/2/ | 8 x 4TB SAS Hard Drives in RAID
Hello friends,
I'm trying to use the Java site-to-site client to receive data from a NiFi
"Output Port". It's been a while since I have done this, and I was using the
"Spark-Receiver" as a general guideline.
When I attempt to create my Transaction object it is always NULL. My code
is simply
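For reference, a minimal receive loop with the site-to-site client looks roughly like the sketch below (the URL and port name are placeholders). Note that createTransaction() returns null when the named remote port cannot be resolved or no peers are reachable, which is the usual cause of a NULL transaction:

```java
import org.apache.nifi.remote.Transaction;
import org.apache.nifi.remote.TransferDirection;
import org.apache.nifi.remote.client.SiteToSiteClient;
import org.apache.nifi.remote.protocol.DataPacket;

public class S2SReceive {
    public static void main(String[] args) throws Exception {
        try (SiteToSiteClient client = new SiteToSiteClient.Builder()
                .url("http://nifi-host:8080/nifi")  // placeholder cluster URL
                .portName("Output Port")            // must match the remote Output Port name
                .build()) {
            Transaction txn = client.createTransaction(TransferDirection.RECEIVE);
            if (txn == null) {
                // Null here usually means the port name/ID is wrong,
                // the port is not running, or no peers are available.
                System.err.println("No transaction available");
                return;
            }
            DataPacket packet;
            while ((packet = txn.receive()) != null) {
                System.out.println("Received " + packet.getSize() + " bytes");
            }
            txn.confirm();
            txn.complete();
        }
    }
}
```

Checking the port name against the remote instance (and that the port is actually started) is the first thing to verify when the transaction comes back null.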
Nathan
You have plenty of powerful machines to hit super high speeds, but what I
cannot tell is how the disks are set up, their capability and layout, and how
they map to the three repos of importance. You'll need to share those
details.
That said, the design of the flow matters. The Kafka processors that
Hi All,
We've got a 3-node NiFi cluster running on 3 x 40 CPU, 256GB RAM (32GB Java
heap) servers. However, we have only been able to achieve a consumption rate of
~9.48GB compressed (38.53GB uncompressed) over 5 minutes, with a
production rate of ~16.84GB out of the cluster over 5
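For what it's worth, the roughly 20MB/sec/node figure quoted earlier in the thread follows directly from these numbers (a quick back-of-the-envelope check, assuming the production rate is spread evenly across the 3 nodes):

```java
public class PerNodeThroughput {
    public static void main(String[] args) {
        double producedGb = 16.84;   // GB produced out of the cluster
        double windowSec = 5 * 60;   // over a 5-minute window
        int nodes = 3;
        // GB -> MB, divided across the window and the nodes
        double mbPerSecPerNode = producedGb * 1024.0 / windowSec / nodes;
        System.out.printf("~%.1f MB/sec/node%n", mbPerSecPerNode);
        // prints "~19.2 MB/sec/node"
    }
}
```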