[ 
https://issues.apache.org/jira/browse/NIFI-5522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582648#comment-16582648
 ] 

Diego Queiroz commented on NIFI-5522:
-------------------------------------

[~ottobackwards], thanks for your answer.

I made a small change to your flow; here it is:
 * [^HandleHttpRequest_Error_Template.xml]
 * I feel this problem is somewhat related to exceptions in the flow, so I 
used an ExecuteScript processor to force an exception. An ExecuteScript with 
an uncaught exception is not a requirement (I have flows without any 
restricted component that show the same behavior), but the described problem 
appears to happen more often when I use this processor.

If you run it once, you won't experience any problem, but if you stress this 
flow with my script, it will act as described (I was able to reproduce it on 
the first try).
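As background, the "Caused by: java.net.BindException: Address already in use" in the quoted stack trace is the textbook symptom of a server socket that was never closed. A minimal Python sketch of that mechanism (illustrative only, not NiFi code; the OS-chosen port stands in for 64080):

```python
import socket

# Sketch (not NiFi code): restarting a listener fails with
# "Address already in use" while the old server socket is still held open.
old = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
old.bind(("127.0.0.1", 0))   # let the OS pick a free port
old.listen(5)
port = old.getsockname()[1]

new = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    new.bind(("127.0.0.1", port))  # rebinding the still-held port fails
except OSError as err:
    print("rebind failed:", err.strerror)
finally:
    new.close()
    old.close()
```

This matches the observed behavior: until the leaked Jetty connector's socket is released (i.e., the JVM is restarted), every restart of the processor on the same port hits the same BindException.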

> HandleHttpRequest enters in fault state and does not recover
> ------------------------------------------------------------
>
>                 Key: NIFI-5522
>                 URL: https://issues.apache.org/jira/browse/NIFI-5522
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.7.0, 1.7.1
>            Reporter: Diego Queiroz
>            Priority: Critical
>              Labels: security
>         Attachments: HandleHttpRequest_Error_Template.xml, 
> image-2018-08-15-21-10-27-926.png, image-2018-08-15-21-10-33-515.png, 
> image-2018-08-15-21-11-57-818.png, image-2018-08-15-21-15-35-364.png, 
> image-2018-08-15-21-19-34-431.png, image-2018-08-15-21-20-31-819.png, 
> test_http_req_resp.xml
>
>
> HandleHttpRequest randomly enters a fault state and does not recover until 
> I restart the node. I feel the problem is triggered when some exception 
> occurs (e.g. a broken request, connection issues, etc.), and I am usually 
> able to reproduce this behavior by stressing the node with tons of 
> simultaneous requests:
> {code:bash}
> # example script to stress server
> for i in `seq 1 10000`; do
>    wget -T10 -t10 -qO- 'http://127.0.0.1:64080/' >/dev/null &
> done
> {code}
> When this happens, HandleHttpRequest starts to return "HTTP ERROR 503 - 
> Service Unavailable" and does not recover from this state:
> !image-2018-08-15-21-10-33-515.png!
> If I try to stop the HandleHttpRequest processor, the running threads do 
> not terminate:
> !image-2018-08-15-21-11-57-818.png!
> If I force them to terminate, the listen port remains bound by NiFi:
> !image-2018-08-15-21-15-35-364.png!
> If I try to connect again, I get an HTTP ERROR 500:
> !image-2018-08-15-21-19-34-431.png!
>  
> If I try to start the HandleHttpRequest processor again, it fails to start 
> and logs the following error:
> {code}
> ERROR [Timer-Driven Process Thread-11] o.a.n.p.standard.HandleHttpRequest HandleHttpRequest[id=9bae326b-5ac3-3e9f-2dac-c0399d8f2ddb] Failed to process session due to org.apache.nifi.processor.exception.ProcessException: Failed to initialize the server: org.apache.nifi.processor.exception.ProcessException: Failed to initialize the server
> org.apache.nifi.processor.exception.ProcessException: Failed to initialize the server
>         at org.apache.nifi.processors.standard.HandleHttpRequest.onTrigger(HandleHttpRequest.java:501)
>         at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>         at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>         at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>         at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.BindException: Address already in use
>         at sun.nio.ch.Net.bind0(Native Method)
>         at sun.nio.ch.Net.bind(Net.java:433)
>         at sun.nio.ch.Net.bind(Net.java:425)
>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>         at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:298)
>         at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
>         at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
>         at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>         at org.eclipse.jetty.server.Server.doStart(Server.java:431)
>         at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>         at org.apache.nifi.processors.standard.HandleHttpRequest.initializeServer(HandleHttpRequest.java:430)
>         at org.apache.nifi.processors.standard.HandleHttpRequest.onTrigger(HandleHttpRequest.java:489)
>         ... 11 common frames omitted
> {code}
> !image-2018-08-15-21-20-31-819.png!
>  
> The only way to work around this when it happens is changing the port it 
> listens on or restarting the NiFi service. I flagged this as a security 
> issue because it allows someone to cause a DoS of the service.
> I found several similar issues, but most of them relate to old versions; I 
> can confirm this affects versions 1.7.0 and 1.7.1.
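A quick way to verify the symptom above, i.e. whether anything is still listening on the HandleHttpRequest port after the processor has been stopped, is a small probe like this sketch (assumes port 64080 from the example; adjust as needed):

```python
import socket

# Probe whether something is still listening on the given port
# (64080 is the HandleHttpRequest port used in the example above).
def port_in_use(host="127.0.0.1", port=64080, timeout=1.0):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno otherwise
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    print("still bound:", port_in_use())
```

If this still reports the port as bound after the processor's threads were terminated, the socket is held by the NiFi JVM itself, which matches the behavior described in the report.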



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
