Re: Error on InvokeHTTP

2024-01-12 Thread Joe Obernberger

0x20 is a space.  Maybe that's somewhere in your header?
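For context: 0x20 is indeed a space, and index 6 of "Socket Write Timeout" is the space right after "Socket". InvokeHTTP sends each dynamic property it is configured with as a request header, and the OkHttp client it uses rejects header names containing characters outside the RFC 7230 token range. So one likely cause is a dynamic property named "Socket Write Timeout" (a property that exists as a standard property only in newer InvokeHTTP versions) being sent as a header. A minimal sketch of that validation, not OkHttp's actual source:

```java
// Sketch of the RFC 7230-style token check an HTTP client applies to
// header names. A name containing a space, such as "Socket Write
// Timeout", fails at index 6 with char 0x20.
public class HeaderNameCheck {
    static void checkName(String name) {
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            // Control chars, space, and non-ASCII are not valid in a header name.
            if (c <= ' ' || c >= 0x7f) {
                throw new IllegalArgumentException(String.format(
                    "Unexpected char 0x%02x at %d in header name: %s", (int) c, i, name));
            }
        }
    }

    public static void main(String[] args) {
        checkName("Accept-Language");              // valid, no exception
        try {
            checkName("Socket Write Timeout");     // space at index 6
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Renaming or removing the offending dynamic property (header names may use hyphens instead of spaces, e.g. "X-Socket-Write-Timeout") should avoid the exception.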

On 1/12/2024 4:33 PM, James McMahon wrote:
I have a text flowfile that I am trying to send to a translation 
service on a remote EC2 instance from my NiFi instance on my EC2. I 
am failing with only this somewhat-cryptic error:


InvokeHTTP[id=a72e1727-3da0-1d6c-164b-e43c1426fd97] Routing to Failure 
due to exception: Unexpected char 0x20 at 6 in header name: Socket 
Write Timeout: java.lang.IllegalArgumentException: Unexpected char 
0x20 at 6 in header name: Socket Write Timeout

What does this mean? Is what I am sending from InvokeHTTP employing a header 
formatted in a way that is not expected?
I am using InvokeHTTP version 1.16.3.

Has anyone experienced a similar error?



Re: Error on InvokeHTTP

2024-01-12 Thread Juan Pablo Gardella
It seems like a charset issue. If it is JSON, add charset=utf-8.

On Fri, Jan 12, 2024, 6:33 PM James McMahon  wrote:

> I have a text flowfile that I am trying to send to a translation service
> on a remote EC2 instance from my NiFi instance on my EC2. I am failing
> with only this somewhat-cryptic error:
>
> InvokeHTTP[id=a72e1727-3da0-1d6c-164b-e43c1426fd97] Routing to Failure
> due to exception: Unexpected char 0x20 at 6 in header name: Socket Write
> Timeout: java.lang.IllegalArgumentException: Unexpected char 0x20 at 6 in
> header name: Socket Write Timeout
>
>
> What does this mean? Is what I am sending from InvokeHTTP employing a header 
> formatted in a way that is not expected?
>
>
> I am using InvokeHTTP version 1.16.3.
>
> Has anyone experienced a similar error?
>
>
>
>
>


Error on InvokeHTTP

2024-01-12 Thread James McMahon
I have a text flowfile that I am trying to send to a translation service on
a remote EC2 instance from my NiFi instance on my EC2. I am failing with
only this somewhat-cryptic error:

InvokeHTTP[id=a72e1727-3da0-1d6c-164b-e43c1426fd97] Routing to Failure due
to exception: Unexpected char 0x20 at 6 in header name: Socket Write
Timeout: java.lang.IllegalArgumentException: Unexpected char 0x20 at 6 in
header name: Socket Write Timeout


What does this mean? Is what I am sending from InvokeHTTP employing a
header formatted in a way that is not expected?


I am using InvokeHTTP version 1.16.3.

Has anyone experienced a similar error?


Re: Finding slow down in processing

2024-01-12 Thread Phillip Lord
Ditto...

@Aaron... so outside of the GenerateFlowFile -> PutFile, were there
additional components/dataflows handling data at the same time as the
"stress-test"? These all share the same thread pool. So depending
upon your dataflow footprint and any variability in data volumes,
20 timer-driven threads could be exhausted pretty quickly. That might
slow down not only your "stress-test" but your other flows as well,
as components wait for available threads to do their jobs.

Thanks,
Phil
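The shared-pool point above can be illustrated with a plain Java fixed thread pool (this is an analogy, not NiFi's scheduler code): once every thread is busy, the remaining tasks simply sit and wait, just as components wait when the timer-driven pool is exhausted.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Four "components" share a pool of only two threads. Tasks 2 and 3
// cannot start until tasks 0 and 1 release their threads, so the whole
// batch takes two scheduling rounds instead of one.
public class SharedPoolDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2); // tiny "Max Timer Driven Thread Count"
        CountDownLatch done = new CountDownLatch(4);
        for (int i = 0; i < 4; i++) {
            final int id = i;
            pool.submit(() -> {
                try {
                    Thread.sleep(200); // simulate a component doing work
                } catch (InterruptedException ignored) {
                }
                System.out.println("task " + id + " finished");
                done.countDown();
            });
        }
        done.await(); // total wall time is roughly two 200 ms rounds
        pool.shutdown();
    }
}
```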

On Thu, Jan 11, 2024 at 3:44 PM Mark Payne  wrote:

> Aaron,
>
> Interestingly, up to version 1.21 of NiFi, if you increased the size of the
> thread pool, it increased immediately. But if you decreased the size of the
> thread pool, the decrease didn’t take effect until you restarted NiFi. So
> that’s probably why you’re seeing the behavior you are. Even though you
> reset it to 10 or 20, it’s still running at 40.
>
> This was due to issues with Java many years ago, where decreasing the
> thread pool size caused problems. So just recently we updated NiFi to
> immediately scale down the thread pools as well.
>
> Thanks
> -Mark
>
>
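Mark's description can be sketched with a plain ScheduledThreadPoolExecutor. This is an illustration of the pre-1.21 behavior only, not NiFi's actual scheduler code: increases apply immediately, while decreases are silently deferred, which matches exactly what Aaron observed below.

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;

// Illustration (not NiFi source) of the pre-1.21 behavior Mark
// describes: pool-size increases applied immediately, while decreases
// were deferred until the next restart.
class TimerDrivenPool {
    private final ScheduledThreadPoolExecutor executor;

    TimerDrivenPool(int initialSize) {
        executor = new ScheduledThreadPoolExecutor(initialSize);
    }

    void setMaxThreadCount(int requested) {
        if (requested > executor.getCorePoolSize()) {
            executor.setCorePoolSize(requested); // grows right away
        }
        // Decreases were ignored here, so the smaller size only took
        // effect after a restart. NiFi 1.21+ shrinks the pool live.
    }

    int currentSize() {
        return executor.getCorePoolSize();
    }

    public static void main(String[] args) {
        TimerDrivenPool pool = new TimerDrivenPool(20);
        pool.setMaxThreadCount(40);
        System.out.println(pool.currentSize()); // grew to 40 immediately
        pool.setMaxThreadCount(10);             // ignored, as pre-1.21
        System.out.println(pool.currentSize()); // still 40
    }
}
```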
> On Jan 11, 2024, at 1:35 PM, Aaron Rich  wrote:
>
> So the good news is it's working now. I know what I did but I don't know
> why it worked so I'm hoping others can enlighten me based on what I did.
>
> TL;DR - "turn it off/turn it on" for Max Timer Driven Thread Count fixed
> performance. Max Timer Driven Thread Count was set to 20. I changed it to
> 30 - performance increased. I changed it to 40 - it increased. I moved
> it back to 20 - performance was still up, at what it originally was before
> ever slowing down.
>
> (this is long to give background and details)
> NiFi version: 1.19.1
>
> NiFi was deployed into a Kubernetes cluster as a single instance - no NiFi
> clustering. We would set a CPU request of 4, and limit of 8, memory request
> of 8, limit of 12. The repos are all volumed mounted out to ssd.
>
> The original deployment was as described above and Max Timer Driven Thread
> Count was set to 20. We ran a very simple data flow
> (GenerateFlowFile->PutFile) as fast as possible to try to stress things as
> much as possible before starting our other data flows. That ran for a week
> with no issue doing 20K/5m.
> We turned on the other data flows and everything was processing as
> expected, good throughput rates and things were happy.
> Then the throughput dropped DRAMATICALLY after 3 days (instead of 11K/5m in
> an UpdateAttribute, it went to 350/5m). The data being processed did not
> change in volume/cadence/velocity/etc.
> Rancher Cluster explorer dashboards didn't show resources standing out as
> limiting or constraining.
> Tried restarting workload in Kubernetes, and data flows were slow right
> from start - so there wasn't a ramp up or any degradation over time - it
> was just slow to begin.
> Tried removing all the repos/state so NiFi came up clean in case it was the
> historical data that was the issue - still slow from start.
> Tried changing the node in the Kube cluster in case the node was bad - still
> slow from start.
> Removed CPU limit (allowing NiFi to potentially use all 16 cores on node)
> from deployment to see if there was CPU throttling happening that I wasn't
> able to see on the Grafana dashboards - still slow from start.
> While NiFi was running, I changed the Max Timer Driven Thread Count from
> 20->30, and performance picked up. Changed it again from 30->40, performance
> picked up. I changed it from 40->10, performance stayed up. I changed from
> 10->20, performance stayed up and was at the original level before the slow
> down ever happened.
>
> So end of the day, the Max Timer Driven Thread Count is at exactly what it
> was before but the performance changed. It's like something was "stuck".
> It's very, very odd to me to see things be fine, degrade for days and
> through multiple environment changes/debugging, and then return to fine
> when I change a parameter to a different value->back to original value.
> Effectively, I "turned it off/turned it on" with the Max Timer Driven
> Thread Count value.
>
> My question is - what is happening under the hood when the Max Timer
> Driven Thread Count is changed? What does that affect? Is there something I
> could look at from Kubernetes' side potentially that would relate to that
> value?
>
> Could an internal NiFi thread have gotten stuck, and changing that value
> rebuilt the thread pool? If that is even possible, is there any way to know
> what caused the thread to "get stuck" in the first place?
>
> Any insight would be greatly appreciated!
>
> Thanks so much for all the suggestions and help on this.
>
> -Aaron
>
>
>
> On Wed, Jan 10, 2024 at 1:54 PM Aaron Rich  wrote:
>
>> Hi Joe,
>>
>> Nothing is load balanced- it's all basic queues.
>>
>> Mark,
>> I'm using NiFi 1.19.1.
>>
>> nifi.performance.tracking.percentage sounds exactly what I might need.
>> I'll give that a shot.
>>
>> Richard,
>> I hadn't l