bandwidth with Site2site

2024-10-10 Thread e-sociaux
 

Hello,

We have a lot of flowfiles stuck in the connection before the Site-to-Site port (RAW mode).

We are not sure whether it is a network issue or a NiFi issue.
 

Does S2S use a raw TCP connection, or is it wrapped in HTTPS?

Do you know how we could test the bandwidth without NiFi?
Between servers A and B, only the NiFi port is open.

Any tips would help.
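In case it helps others reading the archive: the usual tool for this is iperf3 (run `iperf3 -s -p <port>` on server B and `iperf3 -c <serverB> -p <port>` on server A); since only the NiFi port is open between the two hosts, NiFi would have to be stopped briefly so iperf3 can bind that port. Where installing iperf3 is not possible, the same measurement can be sketched with a plain TCP socket pair. Below is a self-contained loopback version of that sketch (the port number is arbitrary; in a real test the receiver would run on server B, bound to the open NiFi port):

```python
import socket
import threading
import time

CHUNK = b"x" * 65536        # 64 KiB send buffer
TOTAL = 16 * 1024 * 1024    # transfer 16 MiB in total

def receiver(ready, port, received):
    """Accept one connection and drain it, counting bytes (the 'server B' side)."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))   # in a real test, bind the open NiFi port
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    total = 0
    while True:
        data = conn.recv(1 << 20)
        if not data:                # b"" means the sender shut down its side
            break
        total += len(data)
    received.append(total)
    conn.close()
    srv.close()

def measure_throughput(port):
    """Push TOTAL bytes through one TCP connection; return (sent, received, MB/s)."""
    ready, received = threading.Event(), []
    t = threading.Thread(target=receiver, args=(ready, port, received))
    t.start()
    ready.wait()
    cli = socket.create_connection(("127.0.0.1", port))
    start = time.monotonic()
    sent = 0
    while sent < TOTAL:
        cli.sendall(CHUNK)
        sent += len(CHUNK)
    cli.shutdown(socket.SHUT_WR)    # signal EOF so the receiver's recv() returns b""
    t.join()                        # stop the clock only after the receiver drained
    elapsed = time.monotonic() - start
    cli.close()
    return sent, received[0], sent / elapsed / 1e6

if __name__ == "__main__":
    sent, got, mbps = measure_throughput(19091)
    print(f"sent {sent} bytes, received {got} bytes, ~{mbps:.0f} MB/s")
```

On loopback this mostly measures the machine itself; run the two halves on the two servers to measure the actual link through the open port.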

 

Thanks

Minh
 


Re: cluster nifi hosted in kubernetes

2024-10-02 Thread e-sociaux
Hi David,

 

Thanks for the response. Yes, I've already read that article from exceptionFactory.


I will check with my kubernetes team.

 

Thanks

 

Sent: Monday, September 30, 2024 at 20:39
From: "David Handermann" 
To: users@nifi.apache.org
Subject: Re: cluster nifi hosted in kubernetes

Hi Minh,

There are various approaches from vendors and open source projects
around running NiFi on Kubernetes.

The implementation work for NIFI-10975 improved framework support for
clustering on Kubernetes, specifically enabling leader election using
Kubernetes Leases instead of Apache ZooKeeper.

The following post provides some of the technical details behind that
implementation:

https://exceptionfactory.com/posts/2024/08/10/bringing-kubernetes-clustering-to-apache-nifi/
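For readers wondering what that change looks like in practice, here is a rough configuration sketch. The property value and provider class below come from the NiFi 2.x administration guide; verify them against your release before relying on them:

```properties
# nifi.properties — sketch only: use Kubernetes Leases instead of ZooKeeper
# for leader election (names per the NiFi 2.x admin guide, to be verified)
nifi.cluster.is.node=true
nifi.cluster.leader.election.implementation=KubernetesLeaderElectionManager
```

Cluster state then moves from ZooKeeper to a ConfigMap-based provider (org.apache.nifi.kubernetes.state.provider.KubernetesConfigMapStateProvider) configured in state-management.xml, as described in the linked post.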

Deploying on Kubernetes requires a number of additional infrastructure
considerations when it comes to security and scalability.

Regards,
David Handermann

On Mon, Sep 30, 2024 at 6:38 AM  wrote:
>
>
> Hello all,
>
> Until now we have hosted our NiFi clusters on VMs.
>
> We now want to create NiFi clusters on Kubernetes so we can scale up and down more easily than with VMs.
>
> Could you tell me whether this is possible with Kubernetes, and is there any documentation?
>
> I saw some work on "Improve cluster configuration for dynamic scaling" in Jira (https://issues.apache.org/jira/browse/NIFI-5443) and a release note about Kubernetes:
>
> [NIFI-10975] - Add Kubernetes Leader Election Manager and State Provider
>
>
> Thanks for any information you can share.
>
> Regards
>
> Minh




 

 


cluster nifi hosted in kubernetes

2024-09-30 Thread e-sociaux
 

Hello all,

 

Until now we have hosted our NiFi clusters on VMs.

 

We now want to create NiFi clusters on Kubernetes so we can scale up and down more easily than with VMs.

 

Could you tell me whether this is possible with Kubernetes, and is there any documentation?

 

I saw some work on "Improve cluster configuration for dynamic scaling" in Jira (https://issues.apache.org/jira/browse/NIFI-5443) and a release note about Kubernetes:

	[NIFI-10975] - Add Kubernetes Leader Election Manager and State Provider



 

Thanks for any information you can share.

 

Regards 

 

Minh


Kubernetes + NIFI

2024-09-12 Thread e-sociaux
 

Hello NiFi team,

 

Has anybody already deployed a NiFi cluster on Kubernetes?
Did you need to develop custom operators to manage the cluster?

Do you have any tutorials on this?
 

 

Thanks all

Regards 

 

Minh


Re: Received SHUTDOWN request from Bootstrap

2024-07-09 Thread e-sociaux
 


Thanks for the reply, Mark.

I checked nifi-app.log and we got zero errors :(

Only "INFO" messages, and the last messages are the same.

 

Below are the messages from another node of the cluster:

2024-07-09 06:42:52,175 INFO [pool-2-thread-1] org.apache.nifi.BootstrapListener Received SHUTDOWN request from Bootstrap
2024-07-09 06:42:52,175 INFO [pool-2-thread-1] org.apache.nifi.NiFi Application Server shutdown started
2024-07-09 06:42:52,184 INFO [pool-2-thread-1] o.eclipse.jetty.server.AbstractConnector Stopped ServerConnector@352e5a82{SSL, (ssl, http/1.1)}{}
2024-07-09 06:42:52,184 INFO [pool-2-thread-1] org.eclipse.jetty.server.session node0 Stopped scavenging


 

It is very hard to find the root cause...


 

Sent: Monday, July 8, 2024 at 16:56
From: "Mark Payne" 
To: "users@nifi.apache.org" 
Subject: Re: Received SHUTDOWN request from Bootstrap


Hey Minh,
 

Try looking in the nifi-app.log, not the nifi-bootstrap.log

 

Thanks

-Mark
 

On Jul 8, 2024, at 10:41 AM, e-soci...@gmx.fr wrote:
 




 


Filtering out INFO, these are the only lines in nifi-bootstrap.log:

 

[root@ope-nifi-dfp-vm-19 nifi]# grep -v INFO nifi-bootstrap.log
2024-07-08 07:13:39,883 WARN [main] org.apache.nifi.bootstrap.Command NiFi PID [71123] shutdown not completed after 20 seconds: Killing process
2024-07-08 07:14:08,425 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: An illegal reflective access operation has occurred
2024-07-08 07:14:08,425 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Illegal reflective access by jetbrains.exodus.io.FileDataWriter$Companion$setUninterruptibleMethod$1 (file:/appl/nifi/nifi-current/work/nar/framework/nifi-framework-nar-1.25.0.nar-unpacked/NAR-INF/bundled-dependencies/xodus-environment-2.0.1.jar) to method sun.nio.ch.FileChannelImpl.setUninterruptible()
2024-07-08 07:14:08,425 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Please consider reporting this to the maintainers of jetbrains.exodus.io.FileDataWriter$Companion$setUninterruptibleMethod$1
2024-07-08 07:14:08,425 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
2024-07-08 07:14:08,425 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: All illegal access operations will be denied in a future release
2024-07-08 07:14:10,725 ERROR [NiFi logging handler] org.apache.nifi.StdErr Log configuration loaded from default internal file.
2024-07-08 08:14:38,813 WARN [main] org.apache.nifi.bootstrap.Command NiFi PID [2630870] shutdown not completed after 20 seconds: Killing process
2024-07-08 08:15:05,321 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: An illegal reflective access operation has occurred
2024-07-08 08:15:05,321 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Illegal reflective access by jetbrains.exodus.io.FileDataWriter$Companion$setUninterruptibleMethod$1 (file:/appl/nifi/nifi-current/work/nar/framework/nifi-framework-nar-1.25.0.nar-unpacked/NAR-INF/bundled-dependencies/xodus-environment-2.0.1.jar) to method sun.nio.ch.FileChannelImpl.setUninterruptible()
2024-07-08 08:15:05,321 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Please consider reporting this to the maintainers of jetbrains.exodus.io.FileDataWriter$Companion$setUninterruptibleMethod$1
2024-07-08 08:15:05,321 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
2024-07-08 08:15:05,321 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: All illegal access operations will be denied in a future release
2024-07-08 08:15:07,451 ERROR [NiFi logging handler] org.apache.nifi.StdErr Log configuration loaded from default internal file.
2024-07-08 11:35:15,954 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: An illegal reflective access operation has occurred
2024-07-08 11:35:15,954 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Illegal reflective access by jetbrains.exodus.io.FileDataWriter$Companion$setUninterruptibleMethod$1 (file:/appl/nifi/nifi-current/work/nar/framework/nifi-framework-nar-1.25.0.nar-unpacked/NAR-INF/bundled-dependencies/xodus-environment-2.0.1.jar) to method sun.nio.ch.FileChannelImpl.setUninterruptible()
2024-07-08 11:35:15,954 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Please consider reporting this to the maintainers of jetbrains.exodus.io.FileDataWriter$Companion$setUninterruptibleMethod$1
2024-07-08 11:35:15,954 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
2024-07-08 11:35:15,954 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: All illegal access operations will be denied in a future release
2024-07-08 11:35:17,916 ERROR [NiFi logging handler] org.apache.nifi.StdErr Log configuration loaded from default internal file.

Re: Received SHUTDOWN request from Bootstrap

2024-07-08 Thread e-sociaux
 


Filtering out INFO, these are the only lines in nifi-bootstrap.log:

 

[root@ope-nifi-dfp-vm-19 nifi]# grep -v INFO nifi-bootstrap.log
2024-07-08 07:13:39,883 WARN [main] org.apache.nifi.bootstrap.Command NiFi PID [71123] shutdown not completed after 20 seconds: Killing process
2024-07-08 07:14:08,425 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: An illegal reflective access operation has occurred
2024-07-08 07:14:08,425 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Illegal reflective access by jetbrains.exodus.io.FileDataWriter$Companion$setUninterruptibleMethod$1 (file:/appl/nifi/nifi-current/work/nar/framework/nifi-framework-nar-1.25.0.nar-unpacked/NAR-INF/bundled-dependencies/xodus-environment-2.0.1.jar) to method sun.nio.ch.FileChannelImpl.setUninterruptible()
2024-07-08 07:14:08,425 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Please consider reporting this to the maintainers of jetbrains.exodus.io.FileDataWriter$Companion$setUninterruptibleMethod$1
2024-07-08 07:14:08,425 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
2024-07-08 07:14:08,425 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: All illegal access operations will be denied in a future release
2024-07-08 07:14:10,725 ERROR [NiFi logging handler] org.apache.nifi.StdErr Log configuration loaded from default internal file.
2024-07-08 08:14:38,813 WARN [main] org.apache.nifi.bootstrap.Command NiFi PID [2630870] shutdown not completed after 20 seconds: Killing process
2024-07-08 08:15:05,321 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: An illegal reflective access operation has occurred
2024-07-08 08:15:05,321 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Illegal reflective access by jetbrains.exodus.io.FileDataWriter$Companion$setUninterruptibleMethod$1 (file:/appl/nifi/nifi-current/work/nar/framework/nifi-framework-nar-1.25.0.nar-unpacked/NAR-INF/bundled-dependencies/xodus-environment-2.0.1.jar) to method sun.nio.ch.FileChannelImpl.setUninterruptible()
2024-07-08 08:15:05,321 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Please consider reporting this to the maintainers of jetbrains.exodus.io.FileDataWriter$Companion$setUninterruptibleMethod$1
2024-07-08 08:15:05,321 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
2024-07-08 08:15:05,321 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: All illegal access operations will be denied in a future release
2024-07-08 08:15:07,451 ERROR [NiFi logging handler] org.apache.nifi.StdErr Log configuration loaded from default internal file.
2024-07-08 11:35:15,954 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: An illegal reflective access operation has occurred
2024-07-08 11:35:15,954 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Illegal reflective access by jetbrains.exodus.io.FileDataWriter$Companion$setUninterruptibleMethod$1 (file:/appl/nifi/nifi-current/work/nar/framework/nifi-framework-nar-1.25.0.nar-unpacked/NAR-INF/bundled-dependencies/xodus-environment-2.0.1.jar) to method sun.nio.ch.FileChannelImpl.setUninterruptible()
2024-07-08 11:35:15,954 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Please consider reporting this to the maintainers of jetbrains.exodus.io.FileDataWriter$Companion$setUninterruptibleMethod$1
2024-07-08 11:35:15,954 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
2024-07-08 11:35:15,954 ERROR [NiFi logging handler] org.apache.nifi.StdErr WARNING: All illegal access operations will be denied in a future release
2024-07-08 11:35:17,916 ERROR [NiFi logging handler] org.apache.nifi.StdErr Log configuration loaded from default internal file.

 

Sent: Monday, July 8, 2024 at 15:18
From: "David Handermann" 
To: users@nifi.apache.org
Subject: Re: Received SHUTDOWN request from Bootstrap

Hi Minh,

The log snippet shows standard behavior for NiFi when receiving the
stop command from the nifi.sh control script. There may be some
additional details in nifi-bootstrap.log, but those few lines from
nifi-app.log describe expected behavior.

Regards,
David Handermann

On Mon, Jul 8, 2024 at 6:34 AM  wrote:
>
> Hello all,
>
> How is it possible to get more details about this error?
>
> Here are the last logs we got before the NiFi shutdown, with no other error in nifi-app.log:
>
> 2024-07-08 10:01:57,988 INFO [pool-2-thread-1] org.apache.nifi.BootstrapListener Received SHUTDOWN request from Bootstrap
> 2024-07-08 10:01:57,988 INFO [pool-2-thread-1] org.apache.nifi.NiFi Application Server shutdown started
> 2024-07-08 10:01:57,994 INFO [pool-2-thread-1] o.eclipse.jetty.server.AbstractConnector Stopped ServerConnector@315b5913{SSL,

Received SHUTDOWN request from Bootstrap

2024-07-08 Thread e-sociaux
Hello all,

 

How is it possible to get more details about this error?

 

Here are the last logs we got before the NiFi shutdown, with no other error in nifi-app.log:

 

2024-07-08 10:01:57,988 INFO [pool-2-thread-1] org.apache.nifi.BootstrapListener Received SHUTDOWN request from Bootstrap
2024-07-08 10:01:57,988 INFO [pool-2-thread-1] org.apache.nifi.NiFi Application Server shutdown started
2024-07-08 10:01:57,994 INFO [pool-2-thread-1] o.eclipse.jetty.server.AbstractConnector Stopped ServerConnector@315b5913{SSL, (ssl, http/1.1)}{ope-nifi-dfp-vm-19.dfp.ope.euwest1-1.gcp.renault.fr:9091}
2024-07-08 10:01:57,994 INFO [pool-2-thread-1] org.eclipse.jetty.server.session node0 Stopped scavenging

Thanks for help

Minh 


Re: Nifi on RockyLinux

2024-06-17 Thread e-sociaux
 


Hello Deepak,

 

We have migrated all our NiFi instances to Rocky Linux 8 and so far have not had any issues.

 

Regards

 

Minh

 

Sent: Friday, June 14, 2024 at 19:59
From: "Joe Witt" 
To: users@nifi.apache.org
Subject: Re: Nifi on RockyLinux


Deepak
 

In Apache we don't have mechanisms to verify/validate that but indications are that it should work just fine.

 

Thanks

 


On Fri, Jun 14, 2024 at 10:47 AM Chirthani, Deepak Reddy  wrote:





Hi,

 

I am writing this email to get confirmation of whether Apache NiFi is supported on the Rocky Linux distribution.

 

Thanks

Deepak









 

 


Performance Impact about "Concurrent Tasks" in input Port

2024-06-11 Thread e-sociaux
Hello all,

Could somebody tell me what is impacted when we increase the number of "Concurrent Tasks" on an Input Port?

 

Regards

 

Minh 


Processor for Firestore GCP

2024-05-02 Thread e-sociaux
Hello all,

I'm looking to connect to and fetch data from GCP Firestore.

Do we have something in NiFi to do this?

Thanks

 

Minh 

 

 


Re: java.lang.OutOfMemoryError: Java heap space : NIFI 1.23.2

2024-03-19 Thread e-sociaux
Hello Joe,

 

Thanks for the response.

But I found the solution. 

 

Since the table size is around 7.6 GB (Avro) for 161,000 rows,

and I had set Fetch Size to 20, the ExecuteSQL processor tried to fetch all the data.

 

Since the heap size is set to 8 GB,

I reduced the "Fetch Size" to 1 and it worked.
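For readers hitting the same OutOfMemoryError: the mechanism is that the JDBC fetch size bounds how many rows the driver buffers per round trip, so a bounded fetch keeps peak memory flat no matter how large the table is. A stand-alone sketch of the idea (illustrative names only, not NiFi's internals):

```python
def rows(n):
    """Generator standing in for a server-side cursor: yields rows lazily."""
    for i in range(n):
        yield (i, f"row-{i}")

def fetch_in_batches(cursor, fetch_size):
    """Buffer at most fetch_size rows at a time, like a bounded JDBC fetch size."""
    batch = []
    for row in cursor:
        batch.append(row)
        if len(batch) == fetch_size:
            yield batch
            batch = []
    if batch:          # final partial batch, if any
        yield batch

# Peak in-memory rows equals fetch_size regardless of table size.
peak = max(len(b) for b in fetch_in_batches(rows(161_000), 1_000))
print(peak)  # 1000
```

With an unbounded fetch, by contrast, the whole result set sits in memory at once, which is what exhausts an 8 GB heap on a multi-gigabyte table.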

 

Regards

 
 

Sent: Tuesday, March 19, 2024 at 14:53
From: "Joe Witt" 
To: users@nifi.apache.org
Subject: Re: java.lang.OutOfMemoryError: Java heap space : NIFI 1.23.2


Hello

 

The key output is

 


java.lang.OutOfMemoryError: Java heap space


Review batch property options to limit response sizes in the database calls.

 

Thanks

 

 

On Tue, Mar 19, 2024 at 6:15 AM  wrote:




Hello 

 

I have an ExecuteSQL processor that runs the SQL command "select * from public.table1".

It is a PostgreSQL database.

 

Here are the last properties of the processor:

 



Max Wait Time: 0 seconds
Normalize Table/Column Names: false
Use Avro Logical Types: false
Compression Format: NONE
Default Decimal Precision: 10
Default Decimal Scale: 0
Max Rows Per Flow File: 0
Output Batch Size: 0
Fetch Size: 20
Set Auto Commit: false



 

 

The same SQL command works in NiFi 1.16.3 with the same configuration.

 

I don't know why it fails. Thanks.

 

I need your help... there is a strange error:



2024-03-19 12:58:37,683 ERROR [Load-Balanced Client Thread-6] org.apache.nifi.engine.FlowEngine Uncaught Exception in Runnable task
java.lang.OutOfMemoryError: Java heap space
2024-03-19 12:58:37,684 ERROR [Load-Balanced Client Thread-2] org.apache.nifi.engine.FlowEngine Uncaught Exception in Runnable task
java.lang.OutOfMemoryError: Java heap space
        at java.base/java.util.concurrent.CopyOnWriteArrayList.iterator(CopyOnWriteArrayList.java:1024)
        at java.base/java.util.concurrent.CopyOnWriteArraySet.iterator(CopyOnWriteArraySet.java:389)
        at org.apache.nifi.controller.queue.clustered.client.async.nio.NioAsyncLoadBalanceClientTask.run(NioAsyncLoadBalanceClientTask.java:54)
        at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
2024-03-19 12:58:48,940 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: SUSPENDED
2024-03-19 12:58:49,347 ERROR [Timer-Driven Process Thread-9] org.apache.nifi.engine.FlowEngine Uncaught Exception in Runnable task
java.lang.OutOfMemoryError: Java heap space
2024-03-19 12:58:49,351 INFO [NiFi Web Server-5835] org.apache.nifi.web.server.RequestLog 138.21.169.37 - CN=admin.plants, OU=NIFI [19/Mar/2024:12:58:48 +] "GET /nifi-api/flow/cluster/summary HTTP/1.1" 200 104 "https://nifi-01:9091/nifi/?processGroupId=0f426c92-018e-1000--3fca1e11&componentIds=" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36"
2024-03-19 12:58:48,559 ERROR [Load-Balanced Client Thread-3] org.apache.nifi.engine.FlowEngine Uncaught Exception in Runnable task
java.lang.OutOfMemoryError: Java heap space
2024-03-19 12:58:49,745 INFO [NiFi Web Server-5419] org.apache.nifi.web.server.RequestLog 138.21.169.37 - - [19/Mar/2024:12:58:49 +] "POST /rb_bf28073qyu?type=js3&sn=v_4_srv_8_sn_B1E58A3741D2949DD454A88FF8A4BAF3_perc_10_ol_0_mul_1_app-3A4e195de4d0714591_1_app-3A44074a8878754fd3_1_app-3Ad1603d0792f56d4b_1_app-3A8092cfc902bb1761_1_rcs-3Acss_1&svrid=8&flavor=post&vi=QKHPLOMPDHAQNPHMHHTUOHHKWJHFRNCG-0&modifiedSince=1710851904127&rf=https%3A%2F%2Fnifi-01%3A9091%2Fnifi%2F%3FprocessGroupId%3D0f426c92-018e-1000--3fca1e11%26componentIds%3D&bp=3&app=d1603d0792f56d4b&crc=1268897524&en=7xpdnw1j&end=1 HTTP/1.1" 200 109 "https://nifi-01:9091/nifi/?processGroupId=0f426c92-018e-1000--3fca1e11&componentIds=" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36"
2024-03-19 12:58:57,419 ERROR [Timer-Driven Process Thread-5] org.apache.nifi.engine.FlowEngine Uncaught Exception in Runnable task
java.lang.OutOfMemoryError: Java heap space
2024-03-19 12:58:50,209 WARN [NiFi Web Server-5689] o.a.n.c.l.e.CuratorLeaderElectionManager Unable to determine leader for role 'Cluster Coordinator'; returning null
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /nifi/clu_quality_2/leaders/Cluster Coordinator
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
        at org

java.lang.OutOfMemoryError: Java heap space : NIFI 1.23.2

2024-03-19 Thread e-sociaux
Hello 

 

I have an ExecuteSQL processor that runs the SQL command "select * from public.table1".

It is a PostgreSQL database.

 

Here are the last properties of the processor:

 



Max Wait Time: 0 seconds
Normalize Table/Column Names: false
Use Avro Logical Types: false
Compression Format: NONE
Default Decimal Precision: 10
Default Decimal Scale: 0
Max Rows Per Flow File: 0
Output Batch Size: 0
Fetch Size: 20
Set Auto Commit: false



 

 

The same SQL command works in NiFi 1.16.3 with the same configuration.

 

I don't know why it fails. Thanks.

 

I need your help... there is a strange error:



2024-03-19 12:58:37,683 ERROR [Load-Balanced Client Thread-6] org.apache.nifi.engine.FlowEngine Uncaught Exception in Runnable task
java.lang.OutOfMemoryError: Java heap space
2024-03-19 12:58:37,684 ERROR [Load-Balanced Client Thread-2] org.apache.nifi.engine.FlowEngine Uncaught Exception in Runnable task
java.lang.OutOfMemoryError: Java heap space
        at java.base/java.util.concurrent.CopyOnWriteArrayList.iterator(CopyOnWriteArrayList.java:1024)
        at java.base/java.util.concurrent.CopyOnWriteArraySet.iterator(CopyOnWriteArraySet.java:389)
        at org.apache.nifi.controller.queue.clustered.client.async.nio.NioAsyncLoadBalanceClientTask.run(NioAsyncLoadBalanceClientTask.java:54)
        at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
2024-03-19 12:58:48,940 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: SUSPENDED
2024-03-19 12:58:49,347 ERROR [Timer-Driven Process Thread-9] org.apache.nifi.engine.FlowEngine Uncaught Exception in Runnable task
java.lang.OutOfMemoryError: Java heap space
2024-03-19 12:58:49,351 INFO [NiFi Web Server-5835] org.apache.nifi.web.server.RequestLog 138.21.169.37 - CN=admin.plants, OU=NIFI [19/Mar/2024:12:58:48 +] "GET /nifi-api/flow/cluster/summary HTTP/1.1" 200 104 "https://nifi-01:9091/nifi/?processGroupId=0f426c92-018e-1000--3fca1e11&componentIds=" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36"
2024-03-19 12:58:48,559 ERROR [Load-Balanced Client Thread-3] org.apache.nifi.engine.FlowEngine Uncaught Exception in Runnable task
java.lang.OutOfMemoryError: Java heap space
2024-03-19 12:58:49,745 INFO [NiFi Web Server-5419] org.apache.nifi.web.server.RequestLog 138.21.169.37 - - [19/Mar/2024:12:58:49 +] "POST /rb_bf28073qyu?type=js3&sn=v_4_srv_8_sn_B1E58A3741D2949DD454A88FF8A4BAF3_perc_10_ol_0_mul_1_app-3A4e195de4d0714591_1_app-3A44074a8878754fd3_1_app-3Ad1603d0792f56d4b_1_app-3A8092cfc902bb1761_1_rcs-3Acss_1&svrid=8&flavor=post&vi=QKHPLOMPDHAQNPHMHHTUOHHKWJHFRNCG-0&modifiedSince=1710851904127&rf=https%3A%2F%2Fnifi-01%3A9091%2Fnifi%2F%3FprocessGroupId%3D0f426c92-018e-1000--3fca1e11%26componentIds%3D&bp=3&app=d1603d0792f56d4b&crc=1268897524&en=7xpdnw1j&end=1 HTTP/1.1" 200 109 "https://nifi-01:9091/nifi/?processGroupId=0f426c92-018e-1000--3fca1e11&componentIds=" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36"
2024-03-19 12:58:57,419 ERROR [Timer-Driven Process Thread-5] org.apache.nifi.engine.FlowEngine Uncaught Exception in Runnable task
java.lang.OutOfMemoryError: Java heap space
2024-03-19 12:58:50,209 WARN [NiFi Web Server-5689] o.a.n.c.l.e.CuratorLeaderElectionManager Unable to determine leader for role 'Cluster Coordinator'; returning null
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /nifi/clu_quality_2/leaders/Cluster Coordinator
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
        at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2480)
        at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:243)
        at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:232)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:94)
        at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:229)
        at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:220)
        at org.apache.curator.framework.imps.GetChildrenBuil

Re: Track dataflow with counter / updatecounter / statefull

2024-02-23 Thread e-sociaux
Hello Lehel,

 

Yes, I have already tried using UpdateAttribute with "Store State Locally".

The issue I saw is that when the same dataflow runs multiple times, the "token" value keeps increasing.

I need the "counter" to reset to zero when the dataflow is considered done.

 

Also, I have tested Notify and Wait, but I recall they do not work in a cluster environment.

 

Anyway, I'm trying to work with this information.

 

Thanks

 

Minh

 
 

Sent: Wednesday, February 21, 2024 at 16:49
From: "Lehel Boér" 
To: "users@nifi.apache.org" 
Subject: Re: Track dataflow with counter / updatecounter / statefull



Hi,

 

In NiFi, counters serve monitoring purposes but are not directly accessible through the Expression Language, and their values are lost upon restart. Instead, I suggest using an UpdateAttribute processor for this task. Configure it to 'Store State Locally' with an initial value of 0. Add a dynamic attribute, let's name it 'counter', using the expression ${getStateValue("token"):plus(1)}. This expression appends the incremented value to every flowfile, increasing it by 1 each time. Next, utilize the RouteOnAttribute processor to route flowfiles based on conditions such as '${token:equals(n)}'. Finally, use the ReplaceText processor to replace the flowfile content with the desired text intended for transmission to your Pub/Sub.

 

You can find more about trigger-based data processing in NiFi here.
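The reset-to-zero behaviour Minh asks about above (one signal per completed batch, then start counting again) reduces to a small piece of state. A sketch of the pattern only, not NiFi code:

```python
class BatchSignal:
    """Count arrivals and emit one 'done' signal per complete batch of n,
    then reset to zero so the next run of the dataflow starts fresh."""

    def __init__(self, n):
        self.n = n
        self.count = 0

    def arrive(self):
        """Call once per flowfile; returns True on the n-th arrival."""
        self.count += 1
        if self.count == self.n:
            self.count = 0   # reset: the next batch counts from zero again
            return True      # time to publish the success message
        return False

sig = BatchSignal(3)
print([sig.arrive() for _ in range(6)])  # [False, False, True, False, False, True]
```

In the NiFi flow, "arrive" corresponds to a flowfile reaching the stateful UpdateAttribute, and the True branch corresponds to the RouteOnAttribute condition firing; the reset is the part that needs an explicit state update.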

 

Best Regards,

Lehel

 


From: e-soci...@gmx.fr 
Sent: Wednesday, February 21, 2024 7:59
To: users@nifi.apache.org 
Subject: Track dataflow with counter / updatecounter / statefull

 




 

Hello all,

 

Has somebody already used the Counter or UpdateCounter?

 

My use case is sending a success message to a Pub/Sub topic when my NiFi dataflow is considered finished.

 

The input of the dataflow is, for example, 3 flowfiles (which undergo some processing), and the output is pushed to a GCS bucket.

I want a message in the Pub/Sub topic once I'm sure those 3 files are in the bucket.

 

I'm trying to use UpdateCounter, or stateful UpdateAttribute, but the counter is owned by the processor, so it is a bit difficult to update it.

 

Do you have any information about this?

 

Thanks all

 

Minh








 

 


Track dataflow with counter / updatecounter / statefull

2024-02-21 Thread e-sociaux
 

Hello all,

 

Has somebody already used the Counter or UpdateCounter?

 

My use case is sending a success message to a Pub/Sub topic when my NiFi dataflow is considered finished.

 

The input of the dataflow is, for example, 3 flowfiles (which undergo some processing), and the output is pushed to a GCS bucket.

I want a message in the Pub/Sub topic once I'm sure those 3 files are in the bucket.

 

I'm trying to use UpdateCounter, or stateful UpdateAttribute, but the counter is owned by the processor, so it is a bit difficult to update it.

 

Do you have any information about this?

 

Thanks all

 

Minh


Error MQSerie

2024-01-29 Thread e-sociaux
 

Hi all, 

I don't know how to explain this error.

 

We use MQ v9.2 as the producer, consumed by NiFi 1.19.1 with the ConsumeJMS processor and these drivers:

 

 ls -rtl mqm_ibm
total 11384
-rw-r--r-- 1 nifi nifi 2111220 Jun 26  2018 ojdbc-11.2.0.3-jdk16.jar
-rw-r--r-- 1 nifi nifi   88374 Jun 26  2018 com.ibm.mq.pcf-7.5.0.2.jar
-rw-r--r-- 1 nifi nifi 3078069 Jun 26  2018 com.ibm.mqjms-7.5.0.2.jar
-rw-r--r-- 1 nifi nifi 2774466 Jun 26  2018 com.ibm.mq.jmqi-7.5.0.2.jar
-rw-r--r-- 1 nifi nifi  271021 Jun 26  2018 com.ibm.mq.headers-7.5.0.2.jar
-rw-r--r-- 1 nifi nifi  783096 Jun 26  2018 com.ibm.mq.commonservices-7.5.0.2.jar
-rw-r--r-- 1 nifi nifi  430028 Jun 26  2018 com.ibm.mq-7.5.0.2.jar
-rw-r--r-- 1 nifi nifi 2101638 Jun 26  2018 com.ibm.dhbcore-7.5.0.2.jar
[root@s11282tos jars]# 

We got this error; for information, we have already added more log space but we still get it:

Message=EXTERNAL UPDATE Application RSE Alert Receiving Time 261223 134521 Alert Log path logicielmqmvarmqmqmgrsFRRORSE1errorsAMQERR01.LOG

 

Original Message Text Time(2023-12-26T124521.753Z) ArithInsert1(74592) ArithInsert2(9117) CommentInsert1(org.apache.nifi.NiFi) CommentInsert2(mqm) CommentInsert3( CHANNEL(CH1) CONNAME(IP NIFI) APPLDESC(IBM MQ Channel)) AMQ7487I Application org.apache.nifi.NiFi was preventing log space from being released.

 

EXPLANATION A long running transaction was detected this message is intended to help identify the application associated with this long running transaction. Message AMQ7469 or AMQ7485 has been issued indicating if the transaction was rolled back or rolled forward in the log to allow the log space to be released. Message AMQ7486 has been issued identifying the transaction context of the transaction that was rolled back or rolled forwards. The application associated with this transaction was running with Pid 74592 Tid 9117 under application name org.apache.nifi.NiFi and under user identifier mqm. The following application context may also be useful in identifying the application causing this behaviour CHANNEL(CH1) CONNAME(IP NIFI) APPLDESC(IBM MQ Channel).

 

This message can be correlated with the previous AMQ7486 message in the queue manager error logs. ACTION Identify the application responsible for the long running unit of work and ensure this application is creating and completing transactions in a timely manner. If the application is working as expected it may be appropriate to increase the size of the queue manager recovery log. - amqatmpa.c 927 logicielmqmvarmqmqmgrsQUEUEerrorsAMQERR01.LOG


We suppose there are some issues with "COMMIT", so I checked the property "Acknowledgement Mode = DUPS_OK_ACKNOWLEDGE (3)".

What is the difference compared with "CLIENT_ACKNOWLEDGE (2)"?

Could we know how many messages NiFi consumes before it commits to the MQ server?
Could we tell NiFi to commit every 100 or 1,000 messages, for example?

Thanks for the help

Regards 

Minh


Content repository error

2024-01-09 Thread e-sociaux
 

Hello all,

 

Has somebody already seen this error, and how can we correct it?

 





Unable to write flowfile content to content repository container repo0 due to archive file size constraints; waiting for archive cleanup. Total number of files currently archived = 1




 

My filesystem is only at 57% usage:

 

/dev/mapper/vgdata-lv_nifi_ctrepo 288213508 163978400 124235108  57% /data/nifi/content_repository

And there is a 117 GB FlowFile making its way through NiFi before being written out by PutGCSObject.
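For context, this behaviour is governed by the content repository archive settings in nifi.properties; the df output above shows 57% usage, above the usual 50% archive threshold, which is what makes writes wait for archive cleanup. The values below are the common defaults, shown for reference only; verify them against your own install:

```properties
# nifi.properties -- content repository archive settings (typical defaults)
nifi.content.repository.archive.enabled=true
nifi.content.repository.archive.max.retention.period=12 hours
# When repository usage exceeds this percentage, archived content is
# cleaned up and new writes wait until usage drops back below it.
nifi.content.repository.archive.max.usage.percentage=50%
```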

 

Thanks for help

 

Minh 



Re: Hardware requirement for NIFI instance

2024-01-08 Thread e-sociaux
Hello Mark,

 

I shared the screenshot about ExecuteSQL properties.

 

These 3 properties together work very well for PostgreSQL on the NiFi instance with 2 CPU / 8 GB RAM.

 

Without the property "Set Auto Commit" set to FALSE, NiFi gets stuck and crashes.

 

Thanks a lot for help

 

Minh 

 

 
 

Sent: Friday 5 January 2024 at 15:52
From: "Mark Payne" 
To: "users@nifi.apache.org" 
Subject: Re: Hardware requirement for NIFI instance


Thanks for following up. That actually makes sense. I don’t think Output Batch Size will play a very big role here. But Fetch Size, if I understand correctly, is essentially telling the JDBC Driver “Here’s how many rows you should pull back at once.” And so it’s going to buffer all of those rows into memory until it has written out all of them.
 

So if you set Fetch Size = 0, it’s going to pull back all rows in your database into memory. To be honest, I cannot imagine a single scenario where that’s desirable. We should probably set the default to something reasonable like 1,000 or 10,000 at most. And in 2.0, where we have the ability to migrate old configurations we should automatically change any config that has Fetch Size of 0 to the default value.
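Mark's description can be illustrated with a toy model of cursor fetching (plain Python, illustrative only, not actual JDBC driver code): a bounded fetch size caps how many rows sit in memory per driver round trip, while a fetch size of 0 (PostgreSQL's "fetch everything" behaviour discussed in this thread) buffers the whole result set at once.

```python
def peak_buffered_rows(total_rows: int, fetch_size: int) -> int:
    """Toy model: the largest number of rows buffered in memory at once.

    fetch_size == 0 models "fetch all rows in one go", as described above.
    """
    step = fetch_size if fetch_size > 0 else total_rows
    peak, fetched = 0, 0
    while fetched < total_rows:
        batch = min(step, total_rows - fetched)  # one driver round trip
        peak = max(peak, batch)
        fetched += batch
    return peak


# With the ~15M-row table from this thread:
peak_bounded = peak_buffered_rows(14_961_077, 1000)  # bounded buffer
peak_all = peak_buffered_rows(14_961_077, 0)         # entire result set
```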

 

@Matt Burgess, et al., any concerns with that?

 

Thanks

-Mark

 
 

On Jan 5, 2024, at 9:45 AM, e-soci...@gmx.fr wrote:
 




So after some tests, here are the results; perhaps they can help someone.

 

With NiFi (2 CPU / 8 GB RAM)


I have tested these combinations of properties:

 

> 1 executeSQL with "select * from table"

Output Batch Size : 1

Fetch Size : 10 

 


> 2 executeSQL with "select * from table"


Output Batch Size : 1

Fetch Size : 20 

 


> 2 executeSQL with "select * from table"


Output Batch Size : 1

Fetch Size : 40 





and started 5 ExecuteSQL processors at the same time

 

The 5 processors work perfectly and produce 5 Avro files of the same size.

And during the test, the memory stayed stable and the Web UI worked perfectly.

 

 

The test FAILED with "OUT OF MEMORY" when the properties are:

 


> 1 executeSQL with "select * from table"

Output Batch Size : 0

Fetch Size : 0




Regards 

 

 


Sent: Friday 5 January 2024 at 08:12
From: "Matt Burgess" 
To: users@nifi.apache.org
Subject: Re: Hardware requirement for NIFI instance

You may not need to merge if your Fetch Size is set appropriately. For
your case I don't recommend setting Max Rows Per Flow File because you
still have to wait for all the results to be processed before the
FlowFile(s) get sent "downstream". Also if you set Output Batch Size
you can't use Merge downstream as ExecuteSQL will send FlowFiles
downstream before it knows the total count.

If you have a NiFi cluster and not a standalone instance you MIGHT be
able to represent your complex query using GenerateTableFetch and use
a load-balanced connection to grab different "pages" of the table in
parallel with ExecuteSQL. Those can be merged later as long as you get
all the FlowFiles back to a single node. Depending on how complex your
query is then it's a long shot but I thought I'd mention it just in
case.
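The "pages" idea behind GenerateTableFetch can be sketched as emitting one query per page of the table, which downstream ExecuteSQL processors on different nodes can then run in parallel. This is a simplified LIMIT/OFFSET illustration only (the real processor keeps state and supports column-based paging strategies; the table and column names here are placeholders):

```python
def page_queries(table: str, order_col: str, row_count: int, page_size: int) -> list:
    """Emit one SELECT per page so pages can be fetched in parallel.

    Simplified sketch of the GenerateTableFetch idea described above.
    """
    return [
        f"SELECT * FROM {table} ORDER BY {order_col} "
        f"LIMIT {page_size} OFFSET {offset}"
        for offset in range(0, row_count, page_size)
    ]


# 2.5M rows in pages of 1M -> 3 queries (offsets 0, 1000000, 2000000)
queries = page_queries("my_table", "id", 2_500_000, 1_000_000)
```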

Regards,
Matt


On Thu, Jan 4, 2024 at 1:41 PM Pierre Villard
 wrote:
>
> You can merge multiple Avro flow files with MergeRecord with an Avro Reader and an Avro Writer
>
> On Thu 4 Jan 2024 at 22:05,  wrote:
>>
>> And the important thing for us is to have only one Avro file per table.
>>
>> So is it possible to merge several Avro files into one?
>>
>> Regards
>>
>>
>> Sent: Thursday 4 January 2024 at 19:01
>> From: e-soci...@gmx.fr
>> To: users@nifi.apache.org
>> Cc: users@nifi.apache.org
>> Subject: Re: Hardware requirement for NIFI instance
>>
>> Hello all,
>>
>> Thanks a lot for the reply.
>>
>> So for more details.
>>
>> All the properties for the ExecuteSQL are set by default, except "Set Auto Commit: false".
>>
>> The sql command could not be more simple than "select * from ${db.table.fullname}"
>>
>> The nifi version is 1.16.3 and 1.23.2
>>
>> I have also tested the same SQL command on another NiFi instance (8 cores / 16 GB RAM) and it works.
>> The result is an Avro file of 1.6 GB.
>>
>> The detail about the output flowfile :
>>
>> executesql.query.duration
>> 245118
>> executesql.query.executiontime
>> 64122
>> executesql.query.fetchtime
>> 180996
>> executesql.resultset.index
>> 0
>> executesql.row.count
>> 14961077
>>
>> File Size
>> 1.62 GB
>>
>> Regards
>>
>> Minh
>>
>>
>> Sent: Thursday 4 January 2024 at 17:18
>> From: "Matt Burgess" 
>> To: users@nifi.apache.org
>> Subject: Re: Hardware requirement for NIFI instance
>> If I remember correctly, the default Fetch Size for Postgresql is to
>> get all the rows at once, which can certainly cause the problem.
>> Perhaps try setting Fetch Size to something like 1000 or so and see if
>> that alleviates the problem.
>>
>> Regards,
>> Matt
>>
>> On Thu, Jan 4, 2024 at 8:48 AM Etienne Jouvin  wrote:
>> >
>> > Hello.
>> >
>> > I also think the problem is more about the processor, I guess ExecuteSQL.
>> >
>> > Should play with batch configuration and commit flag to commit interm

Re: Hardware requirement for NIFI instance

2024-01-05 Thread e-sociaux
So after some tests, here are the results; perhaps they can help someone.

 

With NiFi (2 CPU / 8 GB RAM)


I have tested these combinations of properties:

 

> 1 executeSQL with "select * from table"

Output Batch Size : 1

Fetch Size : 10 

 


> 2 executeSQL with "select * from table"


Output Batch Size : 1

Fetch Size : 20 

 


> 2 executeSQL with "select * from table"


Output Batch Size : 1

Fetch Size : 40 





and started 5 ExecuteSQL processors at the same time

 

The 5 processors work perfectly and produce 5 Avro files of the same size.

And during the test, the memory stayed stable and the Web UI worked perfectly.

 

 

The test FAILED with "OUT OF MEMORY" when the properties are:

 


> 1 executeSQL with "select * from table"

Output Batch Size : 0

Fetch Size : 0




Regards 

 

 


Sent: Friday 5 January 2024 at 08:12
From: "Matt Burgess" 
To: users@nifi.apache.org
Subject: Re: Hardware requirement for NIFI instance

You may not need to merge if your Fetch Size is set appropriately. For
your case I don't recommend setting Max Rows Per Flow File because you
still have to wait for all the results to be processed before the
FlowFile(s) get sent "downstream". Also if you set Output Batch Size
you can't use Merge downstream as ExecuteSQL will send FlowFiles
downstream before it knows the total count.

If you have a NiFi cluster and not a standalone instance you MIGHT be
able to represent your complex query using GenerateTableFetch and use
a load-balanced connection to grab different "pages" of the table in
parallel with ExecuteSQL. Those can be merged later as long as you get
all the FlowFiles back to a single node. Depending on how complex your
query is then it's a long shot but I thought I'd mention it just in
case.

Regards,
Matt


On Thu, Jan 4, 2024 at 1:41 PM Pierre Villard
 wrote:
>
> You can merge multiple Avro flow files with MergeRecord with an Avro Reader and an Avro Writer
>
> On Thu 4 Jan 2024 at 22:05,  wrote:
>>
>> And the important thing for us is to have only one Avro file per table.
>>
>> So is it possible to merge several Avro files into one?
>>
>> Regards
>>
>>
>> Sent: Thursday 4 January 2024 at 19:01
>> From: e-soci...@gmx.fr
>> To: users@nifi.apache.org
>> Cc: users@nifi.apache.org
>> Subject: Re: Hardware requirement for NIFI instance
>>
>> Hello all,
>>
>> Thanks a lot for the reply.
>>
>> So for more details.
>>
>> All the properties for the ExecuteSQL are set by default, except "Set Auto Commit: false".
>>
>> The sql command could not be more simple than "select * from ${db.table.fullname}"
>>
>> The nifi version is 1.16.3 and 1.23.2
>>
>> I have also tested the same SQL command on another NiFi instance (8 cores / 16 GB RAM) and it works.
>> The result is an Avro file of 1.6 GB.
>>
>> The detail about the output flowfile :
>>
>> executesql.query.duration
>> 245118
>> executesql.query.executiontime
>> 64122
>> executesql.query.fetchtime
>> 180996
>> executesql.resultset.index
>> 0
>> executesql.row.count
>> 14961077
>>
>> File Size
>> 1.62 GB
>>
>> Regards
>>
>> Minh
>>
>>
>> Sent: Thursday 4 January 2024 at 17:18
>> From: "Matt Burgess" 
>> To: users@nifi.apache.org
>> Subject: Re: Hardware requirement for NIFI instance
>> If I remember correctly, the default Fetch Size for Postgresql is to
>> get all the rows at once, which can certainly cause the problem.
>> Perhaps try setting Fetch Size to something like 1000 or so and see if
>> that alleviates the problem.
>>
>> Regards,
>> Matt
>>
>> On Thu, Jan 4, 2024 at 8:48 AM Etienne Jouvin  wrote:
>> >
>> > Hello.
>> >
>> > I also think the problem is more about the processor, I guess ExecuteSQL.
>> >
>> > Should play with batch configuration and commit flag to commit intermediate FlowFile.
>> >
>> > The out of memory exception makes me believe the full table is retrieved, and if it is huge the FlowFile content is very large.
>> >
>> >
>> >
>> >
> On Thu 4 Jan 2024 at 14:37, Pierre Villard  wrote:
>> >>
>> >> It should be memory efficient so I think this is likely a configuration aspect of your processor. Can you share the configuration for all properties?
>> >> As a side note: if NiFi ran out of memory, you'd always want to restart it because you are never sure what's the state of the JVM after an OOME.
>> >>
>> On Thu 4 Jan 2024 at 17:26,  wrote:
>> >>>
>> >>>
>> >>> Hello all,
>> >>>
>>> Who could help me determine the CPU/memory needed for a NiFi instance to fetch data from PostgreSQL hosted in Google Cloud?
>> >>>
>> >>> We got this error :
>> >>> ==> Error : executesql.error.message
>> >>> Ran out of memory retrieving query results.
>> >>>
>>> The processor ExecuteSQL has this config : Set Auto Commit ==> false
>> >>> driver Jar to use : postgresql-42.7.1.jar
>> >>> Java version : jdk-11.0.19
>> >>>
>> >>> Table information :
>> >>> rows number : 14958836
>> >>> fields number : 20
>> >>>
>> >>> Linux Rocky8
>> >>>
>> >>> Architecture: x86_64
>> >>> CPU op-mode(s): 32-bit, 64-bit
>> >>> Byte Order: Little Endian
>> >>> CPU(s): 2
>> >>> On-li

Re: Hardware requirement for NIFI instance

2024-01-04 Thread e-sociaux
And the important thing for us is to have only one Avro file per table.

 

So is it possible to merge several Avro files into one?

 

Regards 

 
 

Sent: Thursday 4 January 2024 at 19:01
From: e-soci...@gmx.fr
To: users@nifi.apache.org
Cc: users@nifi.apache.org
Subject: Re: Hardware requirement for NIFI instance



 


Hello all,

 

Thanks a lot for the reply.

 

So for more details.

 

All the properties for the ExecuteSQL are set by default, except "Set Auto Commit:  false".

 

The sql command could not be more simple than "select * from ${db.table.fullname}"

 

The nifi version is 1.16.3 and 1.23.2

 

I have also tested the same SQL command on another NiFi instance (8 cores / 16 GB RAM) and it works.

The result is an Avro file of 1.6 GB.

 

The detail about the output flowfile :

 

executesql.query.duration
245118
executesql.query.executiontime
64122
executesql.query.fetchtime
180996
executesql.resultset.index
0
executesql.row.count
14961077


 

File Size

1.62 GB


 


Regards 

 

Minh

 


 

Sent: Thursday 4 January 2024 at 17:18
From: "Matt Burgess" 
To: users@nifi.apache.org
Subject: Re: Hardware requirement for NIFI instance

If I remember correctly, the default Fetch Size for Postgresql is to
get all the rows at once, which can certainly cause the problem.
Perhaps try setting Fetch Size to something like 1000 or so and see if
that alleviates the problem.

Regards,
Matt

On Thu, Jan 4, 2024 at 8:48 AM Etienne Jouvin  wrote:
>
> Hello.
>
> I also think the problem is more about the processor, I guess ExecuteSQL.
>
> Should play with batch configuration and commit flag to commit intermediate FlowFile.
>
> The out of memory exception makes me believe the full table is retrieved, and if it is huge the FlowFile content is very large.
>
>
>
>
> On Thu 4 Jan 2024 at 14:37, Pierre Villard  wrote:
>>
>> It should be memory efficient so I think this is likely a configuration aspect of your processor. Can you share the configuration for all properties?
>> As a side note: if NiFi ran out of memory, you'd always want to restart it because you are never sure what's the state of the JVM after an OOME.
>>
>> On Thu 4 Jan 2024 at 17:26,  wrote:
>>>
>>>
>>> Hello all,
>>>
>>> Who could help me determine the CPU/memory needed for a NiFi instance to fetch data from PostgreSQL hosted in Google Cloud?
>>>
>>> We got this error :
>>> ==> Error : executesql.error.message
>>> Ran out of memory retrieving query results.
>>>
>>> The processor ExecuteSQL has this config : Set Auto Commit ==> false
>>> driver Jar to use : postgresql-42.7.1.jar
>>> Java version : jdk-11.0.19
>>>
>>> Table information :
>>> rows number : 14958836
>>> fields number : 20
>>>
>>> Linux Rocky8
>>>
>>> Architecture: x86_64
>>> CPU op-mode(s): 32-bit, 64-bit
>>> Byte Order: Little Endian
>>> CPU(s): 2
>>> On-line CPU(s) list: 0,1
>>> Thread(s) per core: 2
>>> Core(s) per socket: 1
>>> Socket(s): 1
>>> NUMA node(s): 1
>>> Vendor ID: GenuineIntel
>>> BIOS Vendor ID: Google
>>> CPU family: 6
>>> Model: 85
>>> Model name: Intel(R) Xeon(R) CPU @ 2.80GHz
>>> Stepping: 7
>>> CPU MHz: 2800.286
>>> BogoMIPS: 5600.57
>>> Hypervisor vendor: KVM
>>> Virtualization type: full
>>> L1d cache: 32K
>>> L1i cache: 32K
>>> L2 cache: 1024K
>>> L3 cache: 33792K
>>> NUMA node0 CPU(s): 0,1
>>>
>>> Memory : 8GB
>>>
>>> Thanks for your help
>>>
>>> Minh




 

 







Re: Hardware requirement for NIFI instance

2024-01-04 Thread e-sociaux
 


Hello all,

 

Thanks a lot for the reply.

 

So for more details.

 

All the properties for the ExecuteSQL are set by default, except "Set Auto Commit:  false".

 

The sql command could not be more simple than "select * from ${db.table.fullname}"

 

The nifi version is 1.16.3 and 1.23.2

 

I have also tested the same SQL command on another NiFi instance (8 cores / 16 GB RAM) and it works.

The result is an Avro file of 1.6 GB.

 

The detail about the output flowfile :

 

executesql.query.duration
245118
executesql.query.executiontime
64122
executesql.query.fetchtime
180996
executesql.resultset.index
0
executesql.row.count
14961077


 

File Size

1.62 GB


 


Regards 

 

Minh

 


 

Sent: Thursday 4 January 2024 at 17:18
From: "Matt Burgess" 
To: users@nifi.apache.org
Subject: Re: Hardware requirement for NIFI instance

If I remember correctly, the default Fetch Size for Postgresql is to
get all the rows at once, which can certainly cause the problem.
Perhaps try setting Fetch Size to something like 1000 or so and see if
that alleviates the problem.

Regards,
Matt

On Thu, Jan 4, 2024 at 8:48 AM Etienne Jouvin  wrote:
>
> Hello.
>
> I also think the problem is more about the processor, I guess ExecuteSQL.
>
> Should play with batch configuration and commit flag to commit intermediate FlowFile.
>
> The out of memory exception makes me believe the full table is retrieved, and if it is huge the FlowFile content is very large.
>
>
>
>
> On Thu 4 Jan 2024 at 14:37, Pierre Villard  wrote:
>>
>> It should be memory efficient so I think this is likely a configuration aspect of your processor. Can you share the configuration for all properties?
>> As a side note: if NiFi ran out of memory, you'd always want to restart it because you are never sure what's the state of the JVM after an OOME.
>>
>> On Thu 4 Jan 2024 at 17:26,  wrote:
>>>
>>>
>>> Hello all,
>>>
>>> Who could help me determine the CPU/memory needed for a NiFi instance to fetch data from PostgreSQL hosted in Google Cloud?
>>>
>>> We got this error :
>>> ==> Error : executesql.error.message
>>> Ran out of memory retrieving query results.
>>>
>>> The processor ExecuteSQL has this config : Set Auto Commit ==> false
>>> driver Jar to use : postgresql-42.7.1.jar
>>> Java version : jdk-11.0.19
>>>
>>> Table information :
>>> rows number : 14958836
>>> fields number : 20
>>>
>>> Linux Rocky8
>>>
>>> Architecture: x86_64
>>> CPU op-mode(s): 32-bit, 64-bit
>>> Byte Order: Little Endian
>>> CPU(s): 2
>>> On-line CPU(s) list: 0,1
>>> Thread(s) per core: 2
>>> Core(s) per socket: 1
>>> Socket(s): 1
>>> NUMA node(s): 1
>>> Vendor ID: GenuineIntel
>>> BIOS Vendor ID: Google
>>> CPU family: 6
>>> Model: 85
>>> Model name: Intel(R) Xeon(R) CPU @ 2.80GHz
>>> Stepping: 7
>>> CPU MHz: 2800.286
>>> BogoMIPS: 5600.57
>>> Hypervisor vendor: KVM
>>> Virtualization type: full
>>> L1d cache: 32K
>>> L1i cache: 32K
>>> L2 cache: 1024K
>>> L3 cache: 33792K
>>> NUMA node0 CPU(s): 0,1
>>>
>>> Memory : 8GB
>>>
>>> Thanks for your help
>>>
>>> Minh




 

 


Hardware requirement for NIFI instance

2024-01-04 Thread e-sociaux
 

Hello all,

 

Who could help me determine the CPU/memory needed for a NiFi instance to fetch data from PostgreSQL hosted in Google Cloud?

 

We got this error :


==> Error : executesql.error.message
Ran out of memory retrieving query results.

 

The processor ExecuteSQL has this config : Set Auto Commit ==> false

driver Jar to use : postgresql-42.7.1.jar
Java version : jdk-11.0.19


 

Table information :

    rows number : 14958836
    fields number : 20

 

Linux Rocky8

 

Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              2
On-line CPU(s) list: 0,1
Thread(s) per core:  2
Core(s) per socket:  1
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
BIOS Vendor ID:      Google
CPU family:          6
Model:               85
Model name:          Intel(R) Xeon(R) CPU @ 2.80GHz
Stepping:            7
CPU MHz:             2800.286
BogoMIPS:            5600.57
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            33792K
NUMA node0 CPU(s):   0,1

 

Memory : 8GB
 

Thanks for your help

Minh



Listenhttp 1.23.2 / JDK jdk-11.0.17

2023-12-01 Thread e-sociaux
Hello all,

 

It was working with NIFI 1.18.0 and JDK 1.8.0_311

 

Have you seen issues with ListenHTTP related to the length of files received by NiFi?


Error : "Form is larger than max length 20"

 

Do you know how to correct it?

 

Regards

 

Minh 
 

java.lang.IllegalStateException: Form is larger than max length 20
java.lang.IllegalStateException: Form is larger than max length 20
        at org.eclipse.jetty.server.Request.getParts(Request.java:2490)
        at org.eclipse.jetty.server.Request.getParts(Request.java:2420)
        at org.apache.nifi.processors.standard.servlets.ListenHTTPServlet.handleMultipartRequest(ListenHTTPServlet.java:279)
        at org.apache.nifi.processors.standard.servlets.ListenHTTPServlet.doPost(ListenHTTPServlet.java:249)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:554)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:600)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:505)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
        at org.eclipse.jetty.server.Server.handle(Server.java:516)
        at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)
        at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
        at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:555)
        at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:410)
        at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:164)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
        at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
        at java.base/java.lang.Thread.run(Thread.java:834)

 

 

 


Re: RE: invokeHTTP SSL error NIFI : 1.23.2

2023-11-30 Thread e-sociaux
Hi Steve,

I found the error :)
It seems block by the security team...

"This category is not allowed by the company IS security policies."
 


Regards

 

Minh 

 

Sent: Monday 27 November 2023 at 16:47
From: e-soci...@gmx.fr
To: users@nifi.apache.org, stephen.hindma...@bt.com
Cc: users@nifi.apache.org
Subject: Re: RE: invokeHTTP SSL error NIFI : 1.23.2



Thanks for the information, I will recheck on my side.

 

Minh

 
 

Sent: Monday 27 November 2023 at 16:38
From: "stephen.hindmarch.bt.com via users" 
To: users@nifi.apache.org
Subject: RE: invokeHTTP SSL error NIFI : 1.23.2




Minh,

 

I would be reluctant to simply trust an external web site’s certificate directly unless I personally knew the operators and could vouch for their reputation, and knew the reason they did not have a signed certificate. The point about Root CAs is that they are supposed to perform the required diligence to ensure the site is trustworthy, at least ensuring the site is operated by the legal owners of the domain they are using.

 

The easiest tool to use to check the certificate and its trust chain is your own browser. Using Edge for example I can click to check that the site is secure and there is a simple tool for viewing the trust chain. I can see this web site uses a certificate that was issued on Tuesday, 3 October 2023 at 12:46:54 BST. Did your problems with connecting start about then?

 

The certificate has a wildcard subject (CN = *.reputation.com) and signed by a Let’s Encrypt signing certificate:

 

Subject: CN = R3, O = Let's Encrypt, C = US

Serial: 00:91:2B:08:4A:CF:0C:18:A7:53:F6:D6:2E:25:A7:5F:5A

Validity: 04/09/2020, 01:00:00 BST to 15/09/2025, 17:00:00 BST

 

This in turn is signed by a root certificate

 

Subject: CN = ISRG Root X1, O = Internet Security Research Group, C = US

Serial: 00:82:10:CF:B0:D2:40:E3:59:44:63:E0:BB:63:82:8B:00

Validity: 04/06/2015, 12:04:38 BST to 04/06/2035, 12:04:38 BST

 

It is this last certificate that you probably have in your CA certs file as “letsencryptisrgx1” (Let’s Encrypt ISRG X1 !?!). If the subject, serial and validity dates for your copy match these details here (or better yet, the details you discover yourself through your own investigation – why trust me eh?) then you indeed have the right root cert. The question then is why the HTTP processor cannot form the trust chain from the web site to the root cert just like my Edge browser has managed to do. 

 

Are there any security patches for the JVM which might update the cacerts? Are you sure you are referencing the right cacerts file from NiFi? Does NiFi have read permission on the file? What happens if you GET a page from a different HTTPS site, like  https://www.microsoft.com/ do you have any trust issues then? Can you run some commands with openssl on the command line of your NiFi server host to check the trust of the site and output the process verbosely?
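Steve's suggested checks can be run from the NiFi host along these lines. The cacerts path and alias are taken from earlier in this thread, and the standard JDK truststore password "changeit" is an assumption; adjust both for your install:

```shell
# 1) Confirm the ISRG Root X1 entry really is in the cacerts file NiFi uses
keytool -list -v -keystore 11.0.17/lib/security/cacerts -storepass changeit \
    | grep -i -A 3 letsencryptisrgx1

# 2) Show the chain the server presents and whether it verifies;
#    look for "Verify return code: 0 (ok)" in the output
true | openssl s_client -showcerts -connect api-eu.reputation.com:443
```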

 

Sorry if I cannot be more help, I just had to step in before you went away with the belief that the solution to external certificate issues is to blindly trust individual certificates.

 

Steve Hindmarch

 


From: Etienne Jouvin 
Sent: Monday, November 27, 2023 2:28 PM
To: users@nifi.apache.org
Subject: Re: invokeHTTP SSL error NIFI : 1.23.2


 



What do you mean by "the Let's Encrypt CA is already set up"?



If this is only the root certificate for Let Encrypt, then you do not have the certificate for api-eu.reputation.com in your truststore.



 



I may be wrong, but if I had to do it, I would create a new truststore rather than use the one from the JDK, and use the SSL Context Service to configure the truststore location and password.



 



 


 



On Mon 27 Nov 2023 at 13:58,  wrote:






Hello, 



 



It is not working 



 



I have used : 



 



# true | openssl s_client -showcerts -connect api-eu.reputation.com:443



  



I saw it is managed by Let's Encrypt



 



And the Let's Encrypt CA is already set up in the file 11.0.17/lib/security/cacerts => alias name: letsencryptisrgx1 [jdk]



 



so I configured the SSL controller with the file "11.0.17/lib/security/cacerts" as the truststore



 



But it always fails.



 



 




Sent: Monday 27 November 2023 at 11:00
From: "Etienne Jouvin" 
To: users@nifi.apache.org
Subject: Re: invokeHTTP SSL error NIFI : 1.23.2




Oh, I did not get that this is an external API.


 



Yes because it is https, you should import the certificate.



There was an update of OkHttpClient, which is more restrictive regarding certificates.



  



On Mon 27 Nov 2023 at 10:52,  wrote:






Hello



 



Thanks for the reply. The weird thing is that until now I didn't use an SSL context and it was working.



 



Good; anyway, I will get the server certificate, add it to the truststore, and configure InvokeHTTP to use the SSL context as well.



 



Thanks



 



Minh



  


  



Sent: Monday 27 November 2023 at 10:48
From: "Etienne Jouvin" 
To

Re: RE: invokeHTTP SSL error NIFI : 1.23.2

2023-11-27 Thread e-sociaux
Thanks for the information, I will recheck on my side.

 

Minh

 
 

Sent: Monday 27 November 2023 at 16:38
From: "stephen.hindmarch.bt.com via users" 
To: users@nifi.apache.org
Subject: RE: invokeHTTP SSL error NIFI : 1.23.2




Minh,

 

I would be reluctant to simply trust an external web site’s certificate directly unless I personally knew the operators and could vouch for their reputation, and knew the reason they did not have a signed certificate. The point about Root CAs is that they are supposed to perform the required diligence to ensure the site is trustworthy, at least ensuring the site is operated by the legal owners of the domain they are using.

 

The easiest tool to use to check the certificate and its trust chain is your own browser. Using Edge for example I can click to check that the site is secure and there is a simple tool for viewing the trust chain. I can see this web site uses a certificate that was issued on Tuesday, 3 October 2023 at 12:46:54 BST. Did your problems with connecting start about then?

 

The certificate has a wildcard subject (CN = *.reputation.com) and signed by a Let’s Encrypt signing certificate:

 

Subject: CN = R3, O = Let's Encrypt, C = US

Serial: 00:91:2B:08:4A:CF:0C:18:A7:53:F6:D6:2E:25:A7:5F:5A

Validity: 04/09/2020, 01:00:00 BST to 15/09/2025, 17:00:00 BST

 

This in turn is signed by a root certificate

 

Subject: CN = ISRG Root X1, O = Internet Security Research Group, C = US

Serial: 00:82:10:CF:B0:D2:40:E3:59:44:63:E0:BB:63:82:8B:00

Validity: 04/06/2015, 12:04:38 BST to 04/06/2035, 12:04:38 BST

 

It is this last certificate that you probably have in your CA certs file as “letsencryptisrgx1” (Let’s Encrypt ISRG X1 !?!). If the subject, serial and validity dates for your copy match these details here (or better yet, the details you discover yourself through your own investigation – why trust me eh?) then you indeed have the right root cert. The question then is why the HTTP processor cannot form the trust chain from the web site to the root cert just like my Edge browser has managed to do. 

 

Are there any security patches for the JVM which might update the cacerts? Are you sure you are referencing the right cacerts file from NiFi? Does NiFi have read permission on the file? What happens if you GET a page from a different HTTPS site, like  https://www.microsoft.com/ do you have any trust issues then? Can you run some commands with openssl on the command line of your NiFi server host to check the trust of the site and output the process verbosely?

 

Sorry if I cannot be more help, I just had to step in before you went away with the belief that the solution to external certificate issues is to blindly trust individual certificates.

 

Steve Hindmarch

 


From: Etienne Jouvin 
Sent: Monday, November 27, 2023 2:28 PM
To: users@nifi.apache.org
Subject: Re: invokeHTTP SSL error NIFI : 1.23.2


 



What do you mean by "the Let's Encrypt CA is already set up"?



If this is only the root certificate for Let Encrypt, then you do not have the certificate for api-eu.reputation.com in your truststore.



 



I may be wrong, but if I had to do it, I would create a new truststore rather than use the one from the JDK, and use the SSL Context Service to configure the truststore location and password.



 



 


 



On Mon 27 Nov 2023 at 13:58,  wrote:






Hello, 



 



It is not working 



 



I have used : 



 



# true | openssl s_client -showcerts -connect api-eu.reputation.com:443



  



I saw it is managed by Let's Encrypt



 



And the Let's Encrypt CA is already set up in the file 11.0.17/lib/security/cacerts => alias name: letsencryptisrgx1 [jdk]



 



so I configured the SSL controller with the file "11.0.17/lib/security/cacerts" as the truststore



 



But it always fails.



 



 




Sent: Monday 27 November 2023 at 11:00
From: "Etienne Jouvin" 
To: users@nifi.apache.org
Subject: Re: invokeHTTP SSL error NIFI : 1.23.2




Oh, I did not get that this is an external API.


 



Yes because it is https, you should import the certificate.



There was an update of OkHttpClient, which is more restrictive regarding certificates.



  



On Mon 27 Nov 2023 at 10:52,  wrote:






Hello



 



Thanks for the reply. The weird thing is that until now I didn't use an SSL context and it was working.



 



Good; anyway, I will get the server certificate, add it to the truststore, and configure InvokeHTTP to use the SSL context as well.



 



Thanks



 



Minh



  


  



Sent: Monday 27 November 2023 at 10:48
From: "Etienne Jouvin" 
To: users@nifi.apache.org
Subject: Re: invokeHTTP SSL error NIFI : 1.23.2




Hello,

For sure, the certificate for the target server is not valid.

We had this issue too, because the alias was missing in the certificate.

Check your certificate; I guess you will have to generate it again and import it into the truststore.

Regards



  



Re: invokeHTTP SSL error NIFI : 1.23.2

2023-11-27 Thread e-sociaux
Hello,

It is not working.

I have used:

# true | openssl s_client -showcerts -connect api-eu.reputation.com:443

I saw it is managed by Let's Encrypt.

And the Let's Encrypt CA is already set up in the file 11.0.17/lib/security/cacerts => alias name: letsencryptisrgx1 [jdk]

So I configured the SSL controller with the file "11.0.17/lib/security/cacerts" as the truststore.

But it still fails.

 

 


Sent: Monday, November 27, 2023 at 11:00
From: "Etienne Jouvin" 
To: users@nifi.apache.org
Subject: Re: invokeHTTP SSL error NIFI : 1.23.2


Oh, I did not get that this is an external API.

Yes, because it is HTTPS, you should import the certificate.

There was an update of OkHttpClient, which is more restrictive regarding certificates.

 


On Mon, Nov 27, 2023 at 10:52, wrote:




Hello,

Thanks for the reply. The weird thing is that until now I did not use an SSL context and it was working.

Good; anyway, I will get the server certificate, add it to the truststore, and configure InvokeHTTP to use the SSL context as well.

Thanks

Minh

 
 

Sent: Monday, November 27, 2023 at 10:48
From: "Etienne Jouvin" 
To: users@nifi.apache.org
Subject: Re: invokeHTTP SSL error NIFI : 1.23.2


Hello,

For sure, the certificate for the target server is not valid.

We had this issue too, because the alias was missing in the certificate.

Check your certificate; I guess you will have to generate it again and import it into the truststore.

Regards

 


On Mon, Nov 27, 2023 at 10:28, wrote:




 

Hello all,

Since I upgraded the NiFi version from 1.18 to 1.23.2 (Java version 11.0.17),
I get the error below from InvokeHTTP (GET https://api-eu.reputation.com/v3/ ..) whether I set up an SSL Context or not.

Do you have information about what changed between the two NiFi versions?

In 1.18.0 this URL (GET https://api-eu.reputation.com/v3/ ..) worked with no issue.

Thanks for the help

Minh

 

2023-11-27 09:21:09,710 ERROR [Timer-Driven Process Thread-6] o.a.nifi.processors.standard.InvokeHTTP InvokeHTTP[id=da03ad8a-5a88-344c-a9b6-b88efb2e871b] Request Processing failed: StandardFlowFileRecord[uuid=2d75e8bc-1d2c-4d7d-938f-23c10bd5128d,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1701076362668-643397, container=repo0, section=325], offset=9405, length=165],offset=120,name=b8de3009-45e3-48e7-855d-8b252275f259,size=45]
javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:369)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:312)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:307)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)
        at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:478)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:456)
        at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:199)
        at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
        at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1382)
        at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1295)
        at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:416)
        at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:388)
        at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.kt:379)
        at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.kt:337)
        at okhttp3.internal.connection.RealConnection.connect(RealConnection.kt:209)
        at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.kt:226)
        at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.kt:106)
        at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.kt:74)
        at okhttp3.internal.connection.RealCall.initExchange$okhttp(RealCall.kt:255)
        at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:32)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.cache.CacheInterceptor.intercep

Re: invokeHTTP SSL error NIFI : 1.23.2

2023-11-27 Thread e-sociaux
Hello,

Thanks for the reply. The weird thing is that until now I did not use an SSL context and it was working.

Good; anyway, I will get the server certificate, add it to the truststore, and configure InvokeHTTP to use the SSL context as well.

Thanks

Minh

 
 

Sent: Monday, November 27, 2023 at 10:48
From: "Etienne Jouvin" 
To: users@nifi.apache.org
Subject: Re: invokeHTTP SSL error NIFI : 1.23.2


Hello,

For sure, the certificate for the target server is not valid.

We had this issue too, because the alias was missing in the certificate.

Check your certificate; I guess you will have to generate it again and import it into the truststore.

Regards

 


On Mon, Nov 27, 2023 at 10:28, wrote:




 

Hello all,

Since I upgraded the NiFi version from 1.18 to 1.23.2 (Java version 11.0.17),
I get the error below from InvokeHTTP (GET https://api-eu.reputation.com/v3/ ..) whether I set up an SSL Context or not.

Do you have information about what changed between the two NiFi versions?

In 1.18.0 this URL (GET https://api-eu.reputation.com/v3/ ..) worked with no issue.

Thanks for the help

Minh

 

2023-11-27 09:21:09,710 ERROR [Timer-Driven Process Thread-6] o.a.nifi.processors.standard.InvokeHTTP InvokeHTTP[id=da03ad8a-5a88-344c-a9b6-b88efb2e871b] Request Processing failed: StandardFlowFileRecord[uuid=2d75e8bc-1d2c-4d7d-938f-23c10bd5128d,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1701076362668-643397, container=repo0, section=325], offset=9405, length=165],offset=120,name=b8de3009-45e3-48e7-855d-8b252275f259,size=45]
javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:369)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:312)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:307)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)
        at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:478)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:456)
        at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:199)
        at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
        at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1382)
        at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1295)
        at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:416)
        at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:388)
        at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.kt:379)
        at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.kt:337)
        at okhttp3.internal.connection.RealConnection.connect(RealConnection.kt:209)
        at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.kt:226)
        at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.kt:106)
        at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.kt:74)
        at okhttp3.internal.connection.RealCall.initExchange$okhttp(RealCall.kt:255)
        at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:32)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201)
        at okhttp3.internal.connection.RealCall.execute(RealCall.kt:154)
        at org.apache.nifi.processors.standard.InvokeHTTP.onTrigger(InvokeHTTP.java:951)
        at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.

invokeHTTP SSL error NIFI : 1.23.2

2023-11-27 Thread e-sociaux
 

Hello all,

Since I upgraded the NiFi version from 1.18 to 1.23.2 (Java version 11.0.17),
I get the error below from InvokeHTTP (GET https://api-eu.reputation.com/v3/ ..) whether I set up an SSL Context or not.

Do you have information about what changed between the two NiFi versions?

In 1.18.0 this URL (GET https://api-eu.reputation.com/v3/ ..) worked with no issue.

Thanks for the help

Minh

 

2023-11-27 09:21:09,710 ERROR [Timer-Driven Process Thread-6] o.a.nifi.processors.standard.InvokeHTTP InvokeHTTP[id=da03ad8a-5a88-344c-a9b6-b88efb2e871b] Request Processing failed: StandardFlowFileRecord[uuid=2d75e8bc-1d2c-4d7d-938f-23c10bd5128d,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1701076362668-643397, container=repo0, section=325], offset=9405, length=165],offset=120,name=b8de3009-45e3-48e7-855d-8b252275f259,size=45]
javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:369)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:312)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:307)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)
        at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:478)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:456)
        at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:199)
        at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
        at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1382)
        at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1295)
        at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:416)
        at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:388)
        at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.kt:379)
        at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.kt:337)
        at okhttp3.internal.connection.RealConnection.connect(RealConnection.kt:209)
        at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.kt:226)
        at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.kt:106)
        at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.kt:74)
        at okhttp3.internal.connection.RealCall.initExchange$okhttp(RealCall.kt:255)
        at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:32)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201)
        at okhttp3.internal.connection.RealCall.execute(RealCall.kt:154)
        at org.apache.nifi.processors.standard.InvokeHTTP.onTrigger(InvokeHTTP.java:951)
        at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
        at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1361)
        at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:247)
        at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
        at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
        at java

Re: ListSFTP Processor CRON doesn't start

2023-11-16 Thread e-sociaux
Hello Quentin,

Normally, if the processor starts correctly, you see logs like this:

My schedule ==> 0 0/2 * * * ?

==> 2023-11-16 10:28:00,002 DEBUG [Timer-Driven Process Thread-8] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=7edea667-2e42-39c5-7252-bda580ba683a] Returning CLUSTER State: StandardStateMap[version=400220, values={}]

Minh


 

Sent: Thursday, November 16, 2023 at 10:46
From: e-soci...@gmx.fr
To: users@nifi.apache.org
Cc: users@nifi.apache.org
Subject: Re: ListSFTP Processor CRON doesn't start



Hello Quentin,

It seems strange to have a log at 6:00 PM; it does not match your CRON strategy.

What happens if you use this CRON: «0 15 1/1 * * ? *»?

Anyway, the logs seem to say it ran once.

Have you tried "clear state" on ListSFTP to be sure?

Regards

Minh


Sent: Thursday, November 16, 2023 at 10:32
From: "Quentin HORNEMAN GUTTON" 
To: users@nifi.apache.org
Subject: Re: ListSFTP Processor CRON doesn't start



Hello Mark,

I’m using a NiFi 1.13.2 cluster with a CRON strategy «0 15 1 1/1 * ? *».

I will add the QuartzSchedulingAgent logger to logback.

I recently added specific logging for the ListSFTP processor using this logback configuration:

 


 

    

<appender name="LISTSFTP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/var/log/nifi/listsftp-events.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <fileNamePattern>/var/log/nifi/listsftp-events_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
        <maxFileSize>5MB</maxFileSize>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>

<logger name="org.apache.nifi.processors.standard.ListSFTP" level="DEBUG" additivity="false">
    <appender-ref ref="LISTSFTP_FILE"/>
</logger>

 

Actually I have this log about a ListSFTP processor which should have run at 6:00pm yesterday : 

 

./listsftp-events_2023-11-15_18.0.log:1:2023-11-15 18:00:00,684 DEBUG [Timer-Driven Process Thread-7] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=488f2a4a-b9a8-3505-905e-db8366542b25] has chosen to yield its resources; will not be scheduled to run again for 1000 milliseconds.

 

What does this message mean? If I understand correctly, it didn't start. How can I find out the reason?

 

Best regards,

 

Quentin HORNEMAN GUTTON


 

 

 

On Tue, Nov 14, 2023 at 15:27, Mark Payne wrote:

Hi Quentin,

What is the CRON schedule that you configured? What version of NiFi are you running?

You’ll not see any debug related logs for that Processor by changing its log level, as the Processor is not responsible for scheduling itself. But you can enable DEBUG level logs for org.apache.nifi.controller.scheduling.QuartzSchedulingAgent and that will provide a debug log each time the Processor runs, indicating when it’s expected to run again.

Thanks
-Mark
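Concretely, Mark's suggestion corresponds to an entry like this in conf/logback.xml (a sketch; place it alongside the existing logger elements):

```xml
<!-- Log each time the CRON scheduler triggers a component and when it will run next -->
<logger name="org.apache.nifi.controller.scheduling.QuartzSchedulingAgent" level="DEBUG"/>
```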


> On Nov 14, 2023, at 2:28 AM, Quentin HORNEMAN GUTTON  wrote:
>
> Hello,
>
> I am facing an issue that I am having difficulty resolving. I have a ListSFTP processor that is supposed to run every day at 12:15 AM, but it is not launching. I added TRACE logs to this type of processor, but since it is not launching, I cannot determine what is happening. If I change the launch time of the processor (for example, to 04:00 PM), it launches successfully. This is a NiFi cluster running on Redhat. Does anyone have an idea of how I can identify the root cause of this processor not launching ?
>
> Best regards,
 







 

 







Re: ListSFTP Processor CRON doesn't start

2023-11-16 Thread e-sociaux
Hello Quentin,

It seems strange to have a log at 6:00 PM; it does not match your CRON strategy.

What happens if you use this CRON: «0 15 1/1 * * ? *»?

Anyway, the logs seem to say it ran once.

Have you tried "clear state" on ListSFTP to be sure?

Regards

Minh


Sent: Thursday, November 16, 2023 at 10:32
From: "Quentin HORNEMAN GUTTON" 
To: users@nifi.apache.org
Subject: Re: ListSFTP Processor CRON doesn't start



Hello Mark,

I’m using a NiFi 1.13.2 cluster with a CRON strategy «0 15 1 1/1 * ? *».

I will add the QuartzSchedulingAgent logger to logback.

I recently added specific logging for the ListSFTP processor using this logback configuration:

 


 

    

<appender name="LISTSFTP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/var/log/nifi/listsftp-events.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <fileNamePattern>/var/log/nifi/listsftp-events_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
        <maxFileSize>5MB</maxFileSize>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>

<logger name="org.apache.nifi.processors.standard.ListSFTP" level="DEBUG" additivity="false">
    <appender-ref ref="LISTSFTP_FILE"/>
</logger>


 

Actually I have this log about a ListSFTP processor which should have run at 6:00pm yesterday : 

 

./listsftp-events_2023-11-15_18.0.log:1:2023-11-15 18:00:00,684 DEBUG [Timer-Driven Process Thread-7] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=488f2a4a-b9a8-3505-905e-db8366542b25] has chosen to yield its resources; will not be scheduled to run again for 1000 milliseconds.

 

What does this message mean? If I understand correctly, it didn't start. How can I find out the reason?

 

Best regards,

 

Quentin HORNEMAN GUTTON


 

 

 

On Tue, Nov 14, 2023 at 15:27, Mark Payne wrote:

Hi Quentin,

What is the CRON schedule that you configured? What version of NiFi are you running?

You’ll not see any debug related logs for that Processor by changing its log level, as the Processor is not responsible for scheduling itself. But you can enable DEBUG level logs for org.apache.nifi.controller.scheduling.QuartzSchedulingAgent and that will provide a debug log each time the Processor runs, indicating when it’s expected to run again.

Thanks
-Mark


> On Nov 14, 2023, at 2:28 AM, Quentin HORNEMAN GUTTON  wrote:
>
> Hello,
>
> I am facing an issue that I am having difficulty resolving. I have a ListSFTP processor that is supposed to run every day at 12:15 AM, but it is not launching. I added TRACE logs to this type of processor, but since it is not launching, I cannot determine what is happening. If I change the launch time of the processor (for example, to 04:00 PM), it launches successfully. This is a NiFi cluster running on Redhat. Does anyone have an idea of how I can identify the root cause of this processor not launching ?
>
> Best regards,
 







 

 


Re: ListSFTP Processor CRON doesn't start

2023-11-14 Thread e-sociaux
Hello,

Thanks Mark.

What could be a good way to debug the "google processor"?

[logger entries stripped by the mailing-list archive — marked "(not working)"]

Thanks

 

 

Sent: Tuesday, November 14, 2023 at 16:11
From: "Mark Payne" 
To: "users@nifi.apache.org" 
Subject: Re: ListSFTP Processor CRON doesn't start


Hi Minh,
 

No - you can configure logging for any Java class, more or less. So that would equate to probably tens of thousands of possible loggers that you could configure.

Of course, they are hierarchical, though, so you could configure, for example, “org.apache.nifi.processors” and that should affect all processors. You could also go another level down, and configure perhaps for “org.apache.nifi.processors.aws” or “org.apache.nifi.processors.aws.s3”.

 

Thanks

-Mark

 
 

On Nov 14, 2023, at 9:37 AM,  e-soci...@gmx.fr wrote:
 




 


Hello Mark,

Do we have documentation with an exhaustive list of the loggers available in NiFi?

Regards

Minh


Sent: Tuesday, November 14, 2023 at 15:25
From: "Mark Payne" 
To: "users" 
Subject: Re: ListSFTP Processor CRON doesn't start

Hi Quentin,

What is the CRON schedule that you configured? What version of NiFi are you running?

You’ll not see any debug related logs for that Processor by changing its log level, as the Processor is not responsible for scheduling itself. But you can enable DEBUG level logs for org.apache.nifi.controller.scheduling.QuartzSchedulingAgent and that will provide a debug log each time the Processor runs, indicating when it’s expected to run again.

Thanks
-Mark


> On Nov 14, 2023, at 2:28 AM, Quentin HORNEMAN GUTTON  wrote:
>
> Hello,
>
> I am facing an issue that I am having difficulty resolving. I have a ListSFTP processor that is supposed to run every day at 12:15 AM, but it is not launching. I added TRACE logs to this type of processor, but since it is not launching, I cannot determine what is happening. If I change the launch time of the processor (for example, to 04:00 PM), it launches successfully. This is a NiFi cluster running on Redhat. Does anyone have an idea of how I can identify the root cause of this processor not launching ?
>
> Best regards,
 




 

 













Re: ListSFTP Processor CRON doesn't start

2023-11-14 Thread e-sociaux
 


Hello Mark,

Do we have documentation with an exhaustive list of the loggers available in NiFi?

Regards

Minh


Sent: Tuesday, November 14, 2023 at 15:25
From: "Mark Payne" 
To: "users" 
Subject: Re: ListSFTP Processor CRON doesn't start

Hi Quentin,

What is the CRON schedule that you configured? What version of NiFi are you running?

You’ll not see any debug related logs for that Processor by changing its log level, as the Processor is not responsible for scheduling itself. But you can enable DEBUG level logs for org.apache.nifi.controller.scheduling.QuartzSchedulingAgent and that will provide a debug log each time the Processor runs, indicating when it’s expected to run again.

Thanks
-Mark


> On Nov 14, 2023, at 2:28 AM, Quentin HORNEMAN GUTTON  wrote:
>
> Hello,
>
> I am facing an issue that I am having difficulty resolving. I have a ListSFTP processor that is supposed to run every day at 12:15 AM, but it is not launching. I added TRACE logs to this type of processor, but since it is not launching, I cannot determine what is happening. If I change the launch time of the processor (for example, to 04:00 PM), it launches successfully. This is a NiFi cluster running on Redhat. Does anyone have an idea of how I can identify the root cause of this processor not launching ?
>
> Best regards,
 




 

 


Re: Bug : PutGCSObject : Version NIFI 1.23.2

2023-11-13 Thread e-sociaux
Hello,

It seems not to be working; I don't see any logs concerning Google/PutGCSObject in nifi-app.log.

Am I missing something?

Minh


 

Sent: Friday, November 10, 2023 at 15:36
From: "Pierre Villard" 
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2


You'd just need to add a logger entry for com.google.cloud.storage (at DEBUG level) in the file, for example next to the existing logger entries.

And you should see the debug logs in nifi-app.log.
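Concretely, the entry would look like this in conf/logback.xml (a sketch based on Pierre's description; the XML itself was stripped by the mailing-list archive):

```xml
<!-- Temporary DEBUG logging for the GCP storage client library -->
<logger name="com.google.cloud.storage" level="DEBUG"/>
```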

 

 


On Fri, Nov 10, 2023 at 15:31, wrote:




Hello Pierre,

I'm not a developer, so it is a bit confusing for me.

I tried temporarily adding this config to logback.xml, but it is not working.

 

 

[logback configuration stripped by the mailing-list archive]


Is it missing something?

Thanks a lot for your help.

Regards

Minh


Sent: Wednesday, November 8, 2023 at 16:52
From: "Pierre Villard" 
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2


I believe that if you temporarily enable debug logs on the GCP client library (via logback.xml on com.google.cloud.storage) you'd be able to confirm this.
 


On Wed, Nov 8, 2023 at 16:43, wrote:




Hi Pierre,

I confirm it works with the proxy.

But how can we be sure the data is transferred over the "private connection" and not through the proxy?

Regards

 
 

Sent: Wednesday, November 8, 2023 at 16:36
From: e-soci...@gmx.fr
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2



Hi Pierre,

But it seems weird, because when I put the wrong Storage API URL (https://storage-euwest1.p.googleapis.com/storage/v1/),

we get the wrong URL but with the correct hostname:

Not Found POST https://storage-euwest1.p.googleapis.com/upload/storage/v1/storage/v1/b/xxx/o?name=test/minh.txt&uploadType=resumable

 

Sent: Wednesday, November 8, 2023 at 16:29
From: "Pierre Villard" 
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2


IIRC, but not in a position to confirm right now, the right API to set would be https://storage-euwest1.p.googleapis.com but your host should still be able to resolve googleapis.com (I believe this can work through proxy as well).
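The doubled path in the earlier 404 (`/upload/storage/v1/storage/v1/...`) suggests the client library inserts its own `upload/storage/v1` prefix between the configured host and the configured path, which is why the bare host works while a URL ending in `/storage/v1/` does not. A rough illustration of that path-joining behavior (an assumption inferred from the error message, not from the library's documentation):

```shell
# build_upload_url HOST PATH BUCKET: hypothetical model of how the client
# builds the resumable-upload URL from the configured Storage API URL.
build_upload_url() {
  printf '%s/upload/storage/v1%sb/%s/o\n' "$1" "$2" "$3"
}

build_upload_url "https://storage-euwest1.p.googleapis.com" "/" "xxx"
# -> .../upload/storage/v1/b/xxx/o             (what the API expects)

build_upload_url "https://storage-euwest1.p.googleapis.com" "/storage/v1/" "xxx"
# -> .../upload/storage/v1/storage/v1/b/xxx/o  (the 404 seen in this thread)
```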
 


On Wed, Nov 8, 2023 at 16:16, wrote:




Hi,

I have checked the DNS entry for "storage-euwest1.p.googleapis.com"; it seems OK.
From the NiFi VM, in the terminal, I can reach port 443; it works fine.

I will check with the Google DevOps team as well.

Thanks


 

Sent: Wednesday, November 8, 2023 at 15:40
From: "Joe Witt" 
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2


Ok, sounds good. I do think this is the correct path to be taking then.

I think, though, there might be a DNS configuration needed. I'm not quite sure, but I am aware of users that rely on this value heavily.

Thanks

 

On Wed, Nov 8, 2023 at 7:34 AM  wrote:




Hello Joe,

Until now we left the "Storage API URL" empty by default, but we used the "Proxy Configuration Service" because we are behind the corporate network.

Some months ago we configured a "private connection" with Google,
so I set the "Storage API URL" to the private URL and left the "Proxy Configuration Service" empty.

Regards

 
 

Sent: Wednesday, November 8, 2023 at 14:31
From: "Joe Witt" 
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2


Minh,

Do you need to override that value? Did Google specify a different URL for you, or do you have a private connection?

If you don't set that value, what happens?

Thanks

 

On Wed, Nov 8, 2023 at 4:57 AM  wrote:




Hello,

I'm trying to use the Storage API URL override parameter in NiFi 1.23.2.

Do you know the correct value to set for the "Storage API URL" in the PutGCSObject processor?

If I set "https://storage-euwest1.p.googleapis.com/storage/v1/"
I get this error:

PutGCSObject[id=aea7e07c-018b-1000-] Failure completing upload flowfile=test/minh.txt bucket=xxx key=test/minh.txt reason=404 Not FoundPOST https://storage-euwest1.p.googleapis.com/upload/storage/v1/storage/v1/b/xxx/o?name=test/minh.txt&uploadType=resumableNot Found: com.google.cloud.storage.StorageException: 404 Not Found

If I set "https://storage-euwest1.p.googleapis.com"
I get this error:

PutGCSObject[id=aea7e07c-018b-1000-] Failure completing upload flowfile=test/minh.txt bucket=xxx key=test/minh.txt reason=www.googleapis.com: com.google.cloud.storage.StorageException: www.googleapis.com- Caused by: java.net.UnknownHostException: www.googleapis.com

Thanks for the help

Minh

Re: Bug : PutGCSObject : Version NIFI 1.23.2

2023-11-10 Thread e-sociaux
Hello Pierre,

I'm not a developer, so it is a bit confusing for me.

I tried temporarily adding this config to logback.xml, but it is not working.

 

 

[logback configuration stripped by the mailing-list archive]

 


Is it missing something?

Thanks a lot for your help.

Regards

Minh


Sent: Wednesday, November 8, 2023 at 16:52
From: "Pierre Villard" 
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2


I believe that if you temporarily enable debug logs on the GCP client library (via logback.xml on com.google.cloud.storage) you'd be able to confirm this.
 


On Wed, Nov 8, 2023 at 16:43, wrote:




Hi Pierre,

I confirm it works with the proxy.

But how can we be sure the data is transferred over the "private connection" and not through the proxy?

Regards

 
 

Sent: Wednesday, November 8, 2023 at 16:36
From: e-soci...@gmx.fr
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2



Hi Pierre,

But it seems weird, because when I put the wrong Storage API URL (https://storage-euwest1.p.googleapis.com/storage/v1/),

we get the wrong URL but with the correct hostname:

Not Found POST https://storage-euwest1.p.googleapis.com/upload/storage/v1/storage/v1/b/xxx/o?name=test/minh.txt&uploadType=resumable

 

Sent: Wednesday, November 8, 2023 at 16:29
From: "Pierre Villard" 
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2


IIRC, but not in a position to confirm right now, the right API to set would be https://storage-euwest1.p.googleapis.com but your host should still be able to resolve googleapis.com (I believe this can work through proxy as well).
 


On Wed, Nov 8, 2023 at 16:16, wrote:




Hi,

I have checked the DNS entry for "storage-euwest1.p.googleapis.com"; it seems OK.
From the NiFi VM, in the terminal, I can reach port 443; it works fine.

I will check with the Google DevOps team as well.

Thanks


 

Sent: Wednesday, November 8, 2023 at 15:40
From: "Joe Witt" 
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2


Ok, sounds good. I do think this is the correct path to be taking then.

I think, though, there might be a DNS configuration needed. I'm not quite sure, but I am aware of users that rely on this value heavily.

Thanks

 

On Wed, Nov 8, 2023 at 7:34 AM  wrote:




Hello Joe

 

Until now we leave the "storage api url" empty by default, but we use "proxy configuration service" because we are behind the corporate network.

 

Since some month ago we have configured "private connection" with google.

So I have setup "storage api url" with the private url and leave empty "proxy configuration service"

 

Regards 

 
 

Envoyé: mercredi 8 novembre 2023 à 14:31
De: "Joe Witt" 
À: users@nifi.apache.org
Objet: Re: Bug : PutGCSObject : Version NIFI 1.23.2


Minh

 

Do you need to override that value?  Did Google specify a different URL for you or you have a private connection?

 

If you dont set that value what happens?

 

Thanks

 

On Wed, Nov 8, 2023 at 4:57 AM  wrote:




Hello,

 

I'm trying to user the overwriting storage url parameter in the nifi version 1.23.2.

 

Do you know what is the correct parameter to set for the "Storage API URL" in the processor PutGCSObject ?

 

If I setup "https://storage-euwest1.p.googleapis.com/storage/v1/"

It got this error :

 


PutGCSObject[id=aea7e07c-018b-1000-] Failure completing upload flowfile=test/minh.txt bucket=xxx key=test/minh.txt reason=404 Not FoundPOST https://storage-euwest1.p.googleapis.com/upload/storage/v1/storage/v1/b/xxx/o?name=test/minh.txt&uploadType=resumableNot Found: com.google.cloud.storage.StorageException: 404 Not Found


 


If I setup "https://storage-euwest1.p.googleapis.com"

It got this error :




PutGCSObject[id=aea7e07c-018b-1000-] Failure completing upload flowfile=test/minh.txt bucket=xxx key=test/minh.txt reason=www.googleapis.com: com.google.cloud.storage.StorageException: www.googleapis.com- Caused by: java.net.UnknownHostException: www.googleapis.com



 

Thanks for help

 

Minh 












 

 










 

 









 

 














 

 


PutGCSObject version 1.23.2

2023-11-09 Thread e-sociaux
 

Hi all, 

 

The behavior of PutGCSObject's "GZIP Compression Enabled" property seems odd.

The description says it is often better to set it to "false", yet the default is "true":

"Signals to the GCS Blob Writer whether GZIP compression during transfer is desired. False means do not gzip and can boost performance in many cases."

What is the rationale for defaulting to "true"?

 

Regards 

 

Minh 


Re: Bug : PutGCSObject : Version NIFI 1.23.2

2023-11-09 Thread e-sociaux
Hi Pierre,

 

Thanks for this information.

 

Minh

 
 

Sent: Wednesday, November 8, 2023 at 19:57
From: "Pierre Villard" 
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2


So I believe the client library, when using the private endpoint, makes multiple calls; the first one goes to the private endpoint, and that's why you get a 404 when setting
https://storage-euwest1.p.googleapis.com/storage/v1/

As you can see, it is calling a malformed URL (/storage/v1 appears twice): https://storage-euwest1.p.googleapis.com/upload/storage/v1/storage/v1/b/xxx/o?name=test/minh.txt&uploadType=resumable

This is why you get the 404.

When you set the right endpoint, you don't get the 404, but later on the client library has to make a call to googleapis.com (if I recall this part correctly), and that's why you got the UnknownHostException.
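The duplication can be sketched with a toy shell function (illustrative only — it mimics naive root-URL plus service-path concatenation, not the actual google-cloud-storage internals):

```shell
# The client library appends this service path to whatever root it is given.
SERVICE_PATH="storage/v1/"

# Naive client-style URL building: configured root + service path + resource.
build_object_url() {  # $1 = configured root, $2 = bucket
    case "$1" in
        */) root="$1" ;;
        *)  root="$1/" ;;
    esac
    printf '%s%sb/%s/o\n' "$root" "$SERVICE_PATH" "$2"
}

# Root set to the bare host: the service path appears once.
build_object_url "https://storage-euwest1.p.googleapis.com" "xxx"
# → https://storage-euwest1.p.googleapis.com/storage/v1/b/xxx/o

# Root already embedding the service path: it appears twice, hence the 404.
build_object_url "https://storage-euwest1.p.googleapis.com/storage/v1/" "xxx"
# → https://storage-euwest1.p.googleapis.com/storage/v1/storage/v1/b/xxx/o
```

The same shape shows up in the error above: the host is right, but the path carries /storage/v1 twice.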

 

 


 



Re: Bug : PutGCSObject : Version NIFI 1.23.2

2023-11-08 Thread e-sociaux
Hi Pierre,

 

I confirm that it works with the proxy.

But how can we be sure the data are transferred over the "private connection" and not through the proxy?

 

Regards

 
 



Re: Bug : PutGCSObject : Version NIFI 1.23.2

2023-11-08 Thread e-sociaux
Hi Pierre,

 

It seems weird, though: when I set the wrong "storage api url" (https://storage-euwest1.p.googleapis.com/storage/v1/),

the request goes to a wrong URL, but one with the correct hostname:

Not Found POST https://storage-euwest1.p.googleapis.com/upload/storage/v1/storage/v1/b/xxx/o?name=test/minh.txt&uploadType=resumable

 

Sent: Wednesday, November 8, 2023 at 16:29
From: "Pierre Villard" 
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2


IIRC (though I'm not in a position to confirm right now), the right API URL to set would be https://storage-euwest1.p.googleapis.com, but your host must still be able to resolve googleapis.com (I believe this can work through a proxy as well).
 




Re: Bug : PutGCSObject : Version NIFI 1.23.2

2023-11-08 Thread e-sociaux
Hi,

I have checked the DNS entry for "storage-euwest1.p.googleapis.com" and it seems OK.

From the NiFi VM's terminal, I can reach port 443; it works fine.

I will also check with the Google DevOps team.

 

Thanks


 

Sent: Wednesday, November 8, 2023 at 15:40
From: "Joe Witt" 
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2


Ok sounds good. I do think this is the correct path to be taking then.

 

I think, though, that there might be a DNS configuration needed. I'm not quite sure, but I am aware of users who rely on this value heavily.

 

Thanks

 


Re: Bug : PutGCSObject : Version NIFI 1.23.2

2023-11-08 Thread e-sociaux
Hello Joe

 

Until now we left the "storage api url" empty (the default), but we used a "proxy configuration service" because we are behind the corporate network.

A few months ago we configured a "private connection" with Google,

so I have set the "storage api url" to the private URL and left the "proxy configuration service" empty.

 

Regards 

 
 

Sent: Wednesday, November 8, 2023 at 14:31
From: "Joe Witt" 
To: users@nifi.apache.org
Subject: Re: Bug : PutGCSObject : Version NIFI 1.23.2


Minh

 

Do you need to override that value? Did Google specify a different URL for you, or do you have a private connection?

If you don't set that value, what happens?

 

Thanks

 


Bug : PutGCSObject : Version NIFI 1.23.2

2023-11-08 Thread e-sociaux
Hello,

 

I'm trying to use the storage URL override parameter in NiFi 1.23.2.

Do you know the correct value to set for "Storage API URL" in the PutGCSObject processor?

If I set "https://storage-euwest1.p.googleapis.com/storage/v1/"

I get this error:

 


PutGCSObject[id=aea7e07c-018b-1000-] Failure completing upload flowfile=test/minh.txt bucket=xxx key=test/minh.txt reason=404 Not Found
POST https://storage-euwest1.p.googleapis.com/upload/storage/v1/storage/v1/b/xxx/o?name=test/minh.txt&uploadType=resumable
Not Found: com.google.cloud.storage.StorageException: 404 Not Found




 


If I setup "https://storage-euwest1.p.googleapis.com"

It got this error :




PutGCSObject[id=aea7e07c-018b-1000-] Failure completing upload flowfile=test/minh.txt bucket=xxx key=test/minh.txt reason=www.googleapis.com: com.google.cloud.storage.StorageException: www.googleapis.com
- Caused by: java.net.UnknownHostException: www.googleapis.com



 

Thanks for help

 

Minh 




Re: Analyze : "Diagnostics on Stop" logs

2023-10-28 Thread e-sociaux
 


Hello Joe,

 

It is a VM with Rocky 8 as the OS. We use systemd to start/stop the NiFi process.

The NiFi process died; I can see it with journalctl -u nifi:

 



Oct 27 14:53:11 server01 systemd[1]: Started Apache NiFi.

Oct 28 03:31:04 server01 systemd[1]: nifi.service: Supervising process 1399442 which is not our child. We'll most likely not notice when it exits.

Oct 28 07:20:40 server01 bash[1399411]: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

Oct 28 07:20:40 server01 bash[1399411]: NiFi PID [1399442] shutdown started

Oct 28 07:20:40 server01 bash[1399411]: NiFi PID [1399442] shutdown in progress...

Oct 28 07:21:00 server01 bash[1399411]: NiFi has not finished shutting down after 20 seconds. Killing process.

Oct 28 07:21:04 server01 systemd[1]: nifi.service: Main process exited, code=exited, status=143/n/a

Oct 28 07:21:04 server01 systemd[1]: nifi.service: Failed with result 'exit-code'.




 

And I don't see the "Java heap space" error in nifi-app.log, which is a bit weird.

That is why I'm trying to find some information in the diagnostics logs.

 

Minh
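One hedged option for catching this kind of silent OOM — a sketch, assuming a stock conf/bootstrap.conf; the java.arg indexes here are arbitrary (pick unused ones), and the dump path needs enough free disk space:

```properties
# conf/bootstrap.conf — ask the JVM to write a heap dump when it dies with an OOM
java.arg.20=-XX:+HeapDumpOnOutOfMemoryError
java.arg.21=-XX:HeapDumpPath=/var/log/nifi/
```

The resulting .hprof file can then be opened in a heap analyzer to see which objects (and therefore which flow/processor's data) filled the heap.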


 

Sent: Saturday, October 28, 2023 at 23:35
From: "Joe Witt" 
To: users@nifi.apache.org
Subject: Re: Analyze : "Diagnostics on Stop" logs


Hello
 

We need to better understand what it means to say the node stopped.  Did the nifi process die?  The nifi-app.log should almost certainly expose something interesting.  Is this on a Linux box, or in a container, etc. How is it run and what precisely stopped?

Thanks

 




Analyze : "Diagnostics on Stop" logs

2023-10-28 Thread e-sociaux

Hello all,

 

I have activated "diagnostics on stop" in nifi.properties:

https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#nifi_diagnostics

My node has stopped twice now, but I can't find anything.

Is there any documentation about the diagnostics logs, to find the root cause / the processor that caused the stop?

 

Regards 

 

Minh


 


Re: RE: How find the root cause "Node disconnect to the cluster"

2023-10-27 Thread e-sociaux
Hello

 

Thanks for your reply. I do something similar to find the root cause, but I often need to check a lot of things.

For example, when I see "Java heap space" in nifi-app.log,
it is not easy to tell which processor caused the issue.

 

Anyway thanks

 

Regards 


 

Sent: Friday, October 27, 2023 at 09:52
From: "Isha Lamboo" 
To: "users@nifi.apache.org" 
Subject: RE: How find the root cause "Node disconnect to the cluster"




Hi Minh,

 

You should have messages in the nifi-app.log about the node failing to respond within the configured cluster comms timeout in nifi.properties. You may want to increase that and see if it reduces the number of disconnect events.
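For reference, those timeouts live in nifi.properties and raising them looks like this — example values only (I believe the defaults are 5 secs):

```properties
# nifi.properties — cluster communication timeouts (example values)
nifi.cluster.node.connection.timeout=30 secs
nifi.cluster.node.read.timeout=30 secs
```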

 

In my experience, a disconnected node typically happens when the CPU load gets too high or there are too many threads waiting their turn because of slow processors. If your nodes are short on RAM, garbage collection may also play a role.

 

Typically, you’ll see other symptoms before the disconnection like a slow interface, intermittent errors when navigating, etc. When those occur, check the Node Status History from the main menu for these items: 


- CPU Load: should not be higher than the number of CPUs per node (this is not CPU usage % but the number of threads wanting to run).
- Heap utilization: should probably look like a sawtooth pattern and not stay above 90-95% for any amount of time.
- Open Files: should be well lower than the limit.
- If heap utilization is consistently high, check the garbage collection times. These should be a small fraction of the uptime. The collection time in the Cluster view is easier to read than in the Node Status history, but the 24-hour view in the history may give you an idea of when the problem occurs.


 

If one or more of these indicate problems, you can often see in the summary window which processors are causing problems by sorting on the various columns, especially total task duration, which may for example show that a processor is taking 40 minutes per 5 minute window because it’s taking 8 threads for itself all the time.

These may just be waiting for a slow external system to time out, so can show high values even if the CPU use is not so high.

 

I hope this helps. I haven't found a bulletproof way to diagnose these issues in NiFi yet, especially on clusters running many processors. Maybe someone else has a more focused approach.

 

Regards,

 

Isha

 





How find the root cause "Node disconnect to the cluster"

2023-10-27 Thread e-sociaux
 

Hello all,

 

Since I have been working with NiFi, I have had a lot of difficulty finding the root cause of a node disconnecting from the cluster.

I check nifi-app.log and nifi-bootstrap.log, but it is a bit complicated to find the relevant information.

Does somebody have some pointers for me?

 

Regards 

 

Minh 


Timezone Summer Winter Time

2023-10-12 Thread e-sociaux
 

Hello all,

 

Has somebody already configured processor scheduling in a particular timezone (CET/France) even though NiFi is running in UTC?

Here is the definition of summer/winter time:

    Metropolitan France uses Central European Time (heure d'Europe centrale, UTC+01:00) as its standard time, and observes 
    Central European Summer Time (heure d'été d'Europe centrale, UTC+02:00) from the last Sunday in March to the last Sunday in October. 

I need to start the processor at UTC-1h in winter and UTC-2h in summer.

I have been thinking about a cron on the last Sunday of March and October, but that is not enough to run this processor every day.

If you have some tips, they would be helpful.

 

Thanks

 

Minh
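One way to avoid hand-maintaining the last-Sunday rules is to let the tz database compute the offset, for instance from a wrapper script that decides when to trigger the flow. A sketch using GNU date (the -d syntax is GNU-specific, and Europe/Paris must exist in the local tzdata):

```shell
# Print the UTC hour (00-23) at which the given Paris local hour occurs
# on the given date. DST is resolved by the Europe/Paris tzdata rules.
utc_hour_for_paris_hour() {  # $1 = hour (e.g. 06), $2 = date as YYYY-MM-DD
    TZ=UTC date -d "TZ=\"Europe/Paris\" $2 $1:00" +%H
}

# Winter (CET, UTC+01:00): 06:00 Paris is 05:00 UTC.
utc_hour_for_paris_hour 06 2023-01-15   # → 05
# Summer (CEST, UTC+02:00): 06:00 Paris is 04:00 UTC.
utc_hour_for_paris_hour 06 2023-07-15   # → 04
```

The script could compare the current UTC hour against this value and only then kick off the flow, leaving the NiFi cron itself fixed.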


Re: NIFI-REGISTRY Migration

2023-09-29 Thread e-sociaux
Hello Pierre,

 

Thanks for new option.

 

So does that mean dataflow_1, registered in registry_A, will be registered the same way in registry_B after the migration?

Or do we also need to use URL aliasing?

 

Regards 


 

Sent: Friday, September 29, 2023 at 12:41
From: "Pierre Villard" 
To: users@nifi.apache.org
Subject: Re: NIFI-REGISTRY Migration


Hi,

You could also consider using the recently added Toolkit CLI options to export all / import all from Registry A to B.

 

Pierre

 


On Fri, Sep 29, 2023 at 12:13, Lars Winderling  wrote:


Hi Minh,

you could try to employ URL aliasing as explained in the docs. This way, the registry will store only an alias instead of the actual registry address. When migrating to another host, make sure to assign the same alias to it, and it should be able to retrieve versioning information properly.
Another option would be to use a persistent host name (the same A/AAAA records, or a CNAME).
This way, I have migrated our registry already a few times.

I hope this helps.
Best,
Lars
 
 


NIFI-REGISTRY Migration

2023-09-29 Thread e-sociaux
 

Hello all,

 

Has someone already migrated their nifi-registry from hostA to hostB and kept version tracking working for the NiFi clients pointing to hostB?

Is there any documentation about migrating nifi-registry?

 

regards 

 

Minh 


Re: GCS processors : endpoint Override URL: " storage-xyz.p.googleapis.com"

2023-09-19 Thread e-sociaux
 


Hello, 

 

Sorry, I had not checked the release notes carefully.

Great news, thanks a lot.

 

regards

 

Minh 

 

Sent: Tuesday, September 19, 2023 at 11:38
From: "Pierre Villard" 
To: users@nifi.apache.org
Subject: Re: GCS processors : endpoint Override URL: " storage-xyz.p.googleapis.com"


Hi,
 

Isn't it available starting from NiFi 1.22?
https://issues.apache.org/jira/browse/NIFI-11439


 




GCS processors : endpoint Override URL: " storage-xyz.p.googleapis.com"

2023-09-19 Thread e-sociaux
 

Hello all,

 

Is it possible to add an option to the GCS processors so we can set a custom URL in place of storage.googleapis.com, something like "storage-xyz.p.googleapis.com"?

 

I know it is possible for the Amazon S3 processors:



	
		
    Endpoint Override URL
    Endpoint URL to use instead of the AWS default including scheme, host, port, and path. The AWS libraries select an endpoint URL based on the AWS region, but this property overrides the selected endpoint URL, allowing use with other S3-compatible endpoints.
    Supports Expression Language: true (will be evaluated using variable registry only)
		
	



 

 

For context, we use the Google option "Private Service Connect":

 


"Access Google APIs through endpoints

This document explains how to use Private Service Connect endpoints to connect to Google APIs. Instead of sending API requests to the publicly available IP addresses for service endpoints such as storage.googleapis.com, you can send the requests to the internal IP address of an endpoint."

 

https://cloud.google.com/vpc/docs/configure-private-service-connect-apis

 

Regards 

 

Minh 



Re: NiFi hanging during large sql query

2023-09-11 Thread e-sociaux
 



Hello Mike,

 

Could you please give me the details of the resolution?

Did you change something in the processor, or just the SQL command?

 

Regards


 

Sent: Saturday, September 2, 2023 at 00:00
From: "Mike Thomsen" 
To: users@nifi.apache.org
Subject: NiFi hanging during large sql query

I have a three node cluster with an executesqlrecord processor with primary execution only. The sql it runs is a straight forward select on a table with about 44m records. If I leave it running, after about 10 min the node becomes unresponsive and leaves the cluster. The query runs just fine in jetbrains data grip on that postgresql server, so I don’t think it’s anything weird with the db or query. Any ideas about what could be causing this? Even with a high limit like 5m records the query doesn’t lock up the NiFi node.

Sent from my iPhone




 

 


Re: Help : LoadBalancer

2023-09-11 Thread e-sociaux
 


Hello Jens,

 

Yes, it is a bit long to read, but very interesting.

Thanks a lot for this information. I'm not sure I can do this at work.

I'll keep it simple for testing and use only one HAProxy.

 

Thanks

 

Minh

 

 

Sent: Thursday, September 7, 2023 at 18:58
From: "Jens M. Kofoed" 
To: users@nifi.apache.org
Subject: Re: Help : LoadBalancer


Hi Minh
 

Sorry for the long reply :-)

 

If you only have one load balancer in front of your NiFi cluster, you introduce a single point of failure. For High Availability (HA), you can put 2 load-balancing nodes in front of your NiFi cluster: proxy-01 will have one IP address and proxy-02 will have another. You can then create 2 DNS records pointing nifi-proxy to both proxy-01 and proxy-02. This gives you some kind of HA, but you rely on DNS round robin between the 2 records. If one node goes down, DNS doesn't know anything about it and will keep handing out responses pointing to the dead node, so half of the requests to your nifi-proxy may time out.

 

Instead of DNS round robin you can use keepalived on Linux. This is a small program which uses a third IP address, a so-called virtual IP (VIP). You will have to look at the keepalived documentation for how to configure it: https://www.keepalived.org/

You need to make a small adjustment to Linux to allow it to bind to non-local IP addresses.

You configure each node in keepalived with a starting weight. In my setup I have configured node-1 with a weight of 100 and node-2 with a weight of 99. Keepalived is configured so that the two keepalived instances on the nodes talk to each other and send keep-alive signals carrying their weights. Based on the weight it receives from the other node and its own, each instance decides whether it should change state to master or backup. The node with the highest weight becomes master, and the master node adds the VIP to its interface. Now you create only one DNS record for nifi-proxy, pointing to the VIP, and all requests go to a single HAProxy which load balances the traffic to the NiFi nodes.
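A rough sketch of what this could look like in keepalived.conf on proxy-01 (every value here is a placeholder: the interface name, virtual_router_id, check-script path, and the VIP 192.0.2.10 must be adapted to your environment; proxy-02 would use state BACKUP and priority 99):

```
vrrp_script chk_haproxy {
    script "/usr/local/bin/check_haproxy"   # exits 0 if the HAProxy backend is up
    interval 2                              # run every 2 seconds
    weight -10                              # subtract 10 from priority on failure
}

vrrp_instance NIFI_PROXY {
    state MASTER                # BACKUP on proxy-02
    interface eth0
    virtual_router_id 51
    priority 100                # proxy-02 gets 99
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24           # the VIP that the nifi-proxy DNS record points to
    }
    track_script {
        chk_haproxy
    }
}
```

The priority difference (100 vs 99) is what makes proxy-01 the preferred master, exactly as described above.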

You can also configure keepalived to run a script that tells whether the service running on the host, in this case HAProxy, is alive. I have created a small script which curls the HAProxy stats page and checks whether my "Backend/nifi-ui-nodes" is up. If it is up, the script exits with code 0 (OK); otherwise it exits with code 1 (error). In the keepalived configuration you define what should happen to the weight when the script fails. I have configured the check script to adjust the weight by -10 on error. So if HAProxy on node-01 dies, or loses network connectivity to all NiFi nodes, the check script fails and the weight becomes 90 (100-10). Node-01 receives a keep-alive signal from node-02 with a weight of 99 and therefore changes state to backup and removes the VIP from the host. Node-02 receives a keep-alive signal from node-01 with a weight of 90; since its own weight of 99 is higher, it changes state to master and adds the VIP to the host. Now node-02 receives all requests and load balances the traffic to your NiFi nodes.
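A hypothetical sketch of such a check script (this is not Jens's actual script; the stats URL, port, and backend name are assumptions, and HAProxy must be configured to expose its stats page so that appending ";csv" returns CSV):

```python
#!/usr/bin/env python3
"""Hypothetical keepalived check script: exit 0 when the HAProxy backend
'nifi-ui-nodes' reports UP in the stats CSV, exit 1 otherwise."""
import csv


def backend_is_up(stats_csv: str, backend: str = "nifi-ui-nodes") -> bool:
    """Find the summary row (svname == 'BACKEND') for the given proxy in
    HAProxy's stats CSV and check its 'status' column."""
    lines = stats_csv.lstrip("# ").splitlines()  # header line starts with '# '
    rows = csv.reader(lines)
    header = next(rows)
    col = {name: i for i, name in enumerate(header)}
    for row in rows:
        if row and row[col["pxname"]] == backend and row[col["svname"]] == "BACKEND":
            return row[col["status"]].startswith("UP")
    return False


# As a keepalived track_script you would fetch the live stats and exit
# with the matching code, e.g. (URL is an assumption):
#   import sys, urllib.request
#   body = urllib.request.urlopen("http://localhost:8404/stats;csv").read()
#   sys.exit(0 if backend_is_up(body.decode()) else 1)
```

The exit code is all keepalived looks at; the weight adjustment itself is configured in keepalived.conf, not in the script.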

 

Once again, sorry for the long reply. You don't need 2 HAProxy nodes; one can do the job, but it will be a single point of failure. You can also just use DNS round robin pointing to two HAProxy nodes, or dive into keepalived.

 

Kind regards

Jens M. Kofoed  

 

      

     

 


On Thu, Sep 7, 2023 at 13:32,  wrote:




 


Hello Jens

 

Thanks a lot for haproxy conf.

 


Could you give more details about this point :

 


- I have 2 proxy nodes, which are running in an HA setup with keepalived and a VIP.

- I have a DNS record nifi-cluster01.foo.bar pointing to the VIP address managed by keepalived.

 

Thanks 

 

Minh



Envoyé: jeudi 7 septembre 2023 à 11:29
De: "Jens M. Kofoed" 
À: users@nifi.apache.org
Objet: Re: Help : LoadBalancer


Hi
 

I have a 3 node cluster running behind a HAProxy setup.

My haproxy.cfg looks like this:
global
    log stdout format iso local1 debug # rfc3164, rfc5424, short, raw, (iso)
    log stderr format iso local0 err # rfc3164, rfc5424, short, raw, (iso)
    hard-stop-after 30s

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5s
    timeout client 50s
    timeout server 15s

frontend nifi-ui
    bind *:8443
    bind *:443
    mode tcp
    option tcplog
    default_backend nifi-ui-nodes

backend nifi-ui-nodes
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    option httpchk
    http-check send meth GET uri / ver HTTP/1.1 hdr Host nifi-cluster01.foo.bar
    server C01N01 nifi-c01n01.foo.bar:8443 check check-ssl verify none inter 5s downinter 5s fall 2 rise 3
    server C01N02 nifi-c01n02.foo.bar:8443 check check-ssl verify none inter 5s downinter 5s fall 2 rise 3
    server C01N03 nifi-c01n03.foo.bar:8443 check check-ssl verify none inter 5s downinter 5s fall 2 rise 3

Re: [NIFI] ExecuteScript

2023-09-11 Thread e-sociaux
Hello Quentin,

 

I also used Python with ExecuteScript, but it is not native Python and not really supported.

 

A better option is to use the ExecuteGroovyScript processor.

 

Minh

 
 

Sent: Friday, September 8, 2023 at 22:55
From: "Rafael Fracasso" 
To: users@nifi.apache.org
Subject: Re: [NIFI] ExecuteScript


As far as I know, you cannot use external libraries with ExecuteScript, only what ships natively with the Jython environment.
 

But you can call native Python to execute your script using ExecuteStreamCommand.
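To sketch what that could look like: with ExecuteStreamCommand, the flowfile content arrives on the script's stdin and whatever the script writes to stdout becomes the new content, so the standard-library ElementTree works without installing anything into Jython. The XML shape (<records>/<record id="...">) below is purely hypothetical:

```python
#!/usr/bin/env python3
"""Sketch of a native-Python script to be invoked via ExecuteStreamCommand."""
import xml.etree.ElementTree as ET


def record_ids(xml_text: str) -> list:
    """Return the id attribute of every <record> element in the document."""
    root = ET.fromstring(xml_text)
    return [rec.get("id", "") for rec in root.iter("record")]


# Wired into ExecuteStreamCommand (Command Path: python3, Command Arguments:
# /path/to/this_script.py), the entry point would be:
#   import sys
#   sys.stdout.write("\n".join(record_ids(sys.stdin.read())))
```

Since xml.etree.ElementTree is part of the CPython standard library, no extra modules are needed on the NiFi host beyond a Python 3 interpreter.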

 


On Fri, Sep 8, 2023 at 6:15 AM Quentin HORNEMAN GUTTON  wrote:


Hello everyone,
 

I would like to know if it is possible to use the Etree or ElementPath library in an ExecuteScript processor with the "Python" script engine (which is Jython), or if it is possible to install modules for the Jython environment ?

 

I'm using NiFi 1.20.0

 

Best regards,








 

 


Re: Help : LoadBalancer

2023-09-07 Thread e-sociaux
 


Hello Jens

 

Thanks a lot for haproxy conf.

 


Could you give more details about this point :

 


- I have 2 proxy nodes, which are running in an HA setup with keepalived and a VIP.

- I have a DNS record nifi-cluster01.foo.bar pointing to the VIP address managed by keepalived.

 

Thanks 

 

Minh



Sent: Thursday, September 7, 2023 at 11:29
From: "Jens M. Kofoed" 
To: users@nifi.apache.org
Subject: Re: Help : LoadBalancer


Hi
 

I have a 3 node cluster running behind a HAProxy setup.

My haproxy.cfg looks like this:
global
    log stdout format iso local1 debug # rfc3164, rfc5424, short, raw, (iso)
    log stderr format iso local0 err # rfc3164, rfc5424, short, raw, (iso)
    hard-stop-after 30s

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5s
    timeout client 50s
    timeout server 15s

frontend nifi-ui
    bind *:8443
    bind *:443
    mode tcp
    option tcplog
    default_backend nifi-ui-nodes

backend nifi-ui-nodes
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    option httpchk
    http-check send meth GET uri / ver HTTP/1.1 hdr Host nifi-cluster01.foo.bar
    server C01N01 nifi-c01n01.foo.bar:8443 check check-ssl verify none inter 5s downinter 5s fall 2 rise 3
    server C01N02 nifi-c01n02.foo.bar:8443 check check-ssl verify none inter 5s downinter 5s fall 2 rise 3
    server C01N03 nifi-c01n03.foo.bar:8443 check check-ssl verify none inter 5s downinter 5s fall 2 rise 3

 

I have 2 proxy nodes, which are running in an HA setup with keepalived and a VIP.

I have a DNS record nifi-cluster01.foo.bar pointing to the VIP address managed by keepalived.

 

In your nifi.properties file you would have to set a proxy host address: nifi.web.proxy.host=nifi-cluster01.foo.bar:8443

 

This setup is working for me.

 

Kind regards

Jens M. Kofoed

 

 

 


On Wed, Sep 6, 2023 at 16:17, Minh HUYNH  wrote:




Hello Juan

 

I'm not sure you understood my point.

 

I've got a cluster: nifi01/nifi02/nifi03.

 

I'm trying to use a single URL, for instance https://nifi_clu01:9091/nifi, that points randomly to nifi01/nifi02/nifi03.

 

Regards 

 


 

Sent: Wednesday, September 6, 2023 at 16:05
From: "Juan Pablo Gardella" 
To: users@nifi.apache.org
Subject: Re: Help : LoadBalancer



List all servers you need.

 


server server1 "${NIFI_INTERNAL_HOST1}":8443 ssl


server server2 "${NIFI_INTERNAL_HOST2}":8443 ssl



 


On Wed, Sep 6, 2023 at 10:35 AM Minh HUYNH  wrote:





Thanks a lot for reply.

 

Concerning redirection to one node: it's OK, we got that working.

 

But how do I configure NiFi and HAProxy to point to the cluster nodes, for instance "nifi01, nifi02, nifi03"?

 

regards 

 

Minh

 


 
 

Sent: Wednesday, September 6, 2023 at 15:29
From: "Juan Pablo Gardella" 
To: users@nifi.apache.org
Subject: Re: Help : LoadBalancer


I did that multiple times. Below is how I configured it:
 

frontend http-in

# bind ports section 

acl prefixed-with-nifi path_beg /nifi

use_backend nifi if prefixed-with-nifi

option forwardfor
 

backend nifi

server server1 "${NIFI_INTERNAL_HOST}":8443 ssl

 

 


On Wed, Sep 6, 2023 at 9:40 AM Minh HUYNH  wrote:




 

Hello,

 

I have been trying for a long time to configure a NiFi cluster behind HAProxy as a load balancer.

But until now, it has always failed.

I only get access to the NiFi welcome page; all other links fail.

 

If someone has a working configuration, it would be helpful.

 

Thanks a lot

 

Regards


Re: Re: Node/processor/queue status in grafana

2023-08-16 Thread e-sociaux




Hello Yolanda,

Where is the documentation about the accepted parameters that can be set after "includedRegistries"?

Thanks for the help.

Sent from the mobile mail app. On 14/08/2023 at 17:38, Yolanda Davis wrote:




From: "Yolanda Davis"
Date: August 14, 2023
To: users@nifi.apache.org
Cc:
Subject: Re: Node/processor/queue status in grafana



Hi Aaron,
Great to hear it's something you can use! You are correct in that it should expose everything that the PrometheusReportingTask does. It also has the ability to limit the metrics exposed on the endpoint; however, it differs from how the reporting task limits metrics. For example, let's say you only want to make JVM or NiFi-specific metrics available: you can pass a parameter value to include only the registries for those metrics (e.g. https://localhost:8443/nifi-api/flow/metrics/prometheus?includedRegistries=JVM). There are other filtering mechanisms, but you can also just limit what Prometheus scrapes using its own scrape configurations.

One thing also to be mindful of is the amount of metrics you are collecting and your storage constraints in Prometheus. Depending on the size of your flow there could be lots of metrics produced for components. The amount of space needed for these metrics is driven not just by the name and value pair but also by the labels associated with it, so you may want to consider how to configure your scrape to grab only what is necessary, at intervals that best match your query needs.
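To illustrate, a minimal Prometheus scrape job against this endpoint might look like the following sketch (the job name, interval, target host/port, and TLS settings are all illustrative; a secured NiFi will additionally need client certificates or another supported authentication mechanism):

```yaml
scrape_configs:
  - job_name: "nifi"
    scheme: https
    metrics_path: /nifi-api/flow/metrics/prometheus
    params:
      includedRegistries: ["JVM"]   # optional filter, as described above
    scrape_interval: 60s
    tls_config:
      insecure_skip_verify: true    # placeholder; configure a proper CA in production
    static_configs:
      - targets: ["localhost:8443"]
```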

Again, I hope this is helpful! 

-yolanda  


On Mon, Aug 14, 2023 at 1:09 AM Aaron Rich  wrote:


Hi Yolanda,
That is SUPER helpful to know. This looks like a great starting point.

It looks like this is equivalent to reporting task for all components. Is that correct? I'll have to see if I can get Prometheus to scrape it with a secure endpoint.

Thanks for the starting pointer!

-Aaron


On Sun, Aug 13, 2023 at 7:26 PM Yolanda Davis  wrote:



Hi Aaron,
Just wanted to provide some additional info that hopefully will be useful.  NiFi also has a dedicated prometheus endpoint that can be scraped directly, without needing to set up a specific task.  That endpoint can be found (in a local host example) under https://localhost:8443/nifi-api/flow/metrics/prometheus.  From here you can see the metrics NiFi makes available and prometheus can be configured to scrape depending on the security settings you have in place. Again hope this helps, and good luck!

-yolanda


On Sun, Aug 13, 2023 at 8:03 PM Joe Witt  wrote:


This list is perfectly fine.

All of our metrics are available via push, pull, and Prometheus, so it can definitely be done. It would be great to see what you end up with.

Thanks


On Sun, Aug 13, 2023 at 3:56 PM Aaron Rich  wrote:


Hi,
Not sure if this belongs on the dev or user mailing list, but I figured I would start with user.

I wanted to see if there is a way to recreate the graphs available in the status displays using Grafana. I'm assuming I would need to use a Prometheus reporting task, but wanted to know whether all the metrics needed to generate the same graphs are reported.

I want to be able to graph NiFi performance metrics alongside my Kubernetes cluster metrics, so I can understand how they relate and what resources are impacting any slowdown in NiFi's performance.

If anyone has already attempted this, I would love to get any pointers for implementing it.

Thanks. 



Aaron







-- --yolanda.m.da...@gmail.com
@YolandaMDavis








-- --yolanda.m.da...@gmail.com
@YolandaMDavis










PutGCSObject more option

2023-08-01 Thread e-sociaux

Hello all,

 

We're facing a lot of issues concerning timeouts in "PutGCSObject".

 

PutGCSObject[id=d5a6d876-486f-3f6d-acba-8c661b3527af] Failed to put FlowFile[filename=xxx] to Google Cloud Storage due to Read timed out: com.google.cloud.storage.StorageException: Read timed out - Caused by: java.net.SocketTimeoutException: Read timed out

 

As I understand it, this could be resolved by making these retry parameters configurable:

 

[1] https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/AbstractGCPProcessor.java
[2] https://cloud.google.com/appengine/docs/legacy/standard/python/googlecloudstorageclient/retryparams_class
[3] https://cloud.google.com/storage/docs/resumable-uploads
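Those retry parameters (initial delay, backoff factor, maximum retries) boil down to exponential backoff. As a generic sketch of the mechanism only, not the NiFi processor's or the GCS client's actual implementation, and with made-up function and parameter names:

```python
"""Generic exponential-backoff retry, illustrating what retry parameters
like initial delay, backoff factor, and max retries control."""
import time


def with_retries(fn, max_retries=5, initial_delay=1.0, backoff_factor=2.0,
                 retryable=(TimeoutError, ConnectionError)):
    """Call fn(); on a retryable error, sleep and try again with
    exponentially growing delays, up to max_retries attempts."""
    delay = initial_delay
    for attempt in range(max_retries):
        try:
            return fn()
        except retryable:
            if attempt == max_retries - 1:
                raise  # out of attempts: propagate the last error
            time.sleep(delay)
            delay *= backoff_factor
```

Wrapping the upload call in something like this would turn a transient "Read timed out" into a retried attempt instead of a failed flowfile; NIFI-11865 asks for exactly this kind of tunability inside the processor itself.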

 

I've already opened a ticket, but haven't had any response so far: https://issues.apache.org/jira/browse/NIFI-11865

 

So, do you know if it is possible?

 

Regards