Re: Error on InvokeHTTP

2024-01-12 Thread Juan Pablo Gardella
It seems like a charset issue. If it is JSON, add charset=utf-8 to the Content-Type.
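
It could also be the header-name validation in the HTTP client rather than
the body: header names must be RFC 7230 tokens, and "Socket Write Timeout"
has a space (0x20) at index 6, which matches the exception text. A minimal
Java sketch of the kind of check OkHttp-style clients apply (illustrative,
not OkHttp's actual source):

static void checkHeaderName(String name) {
    for (int i = 0; i < name.length(); i++) {
        char c = name.charAt(i);
        // Reject control characters, space, and non-ASCII in header names.
        if (c <= '\u0020' || c >= '\u007f') {
            throw new IllegalArgumentException(String.format(
                "Unexpected char 0x%02x at %d in header name: %s", (int) c, i, name));
        }
    }
}

If that is the cause, it suggests a property or attribute named "Socket Write
Timeout" is being sent as a request header.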

On Fri, Jan 12, 2024, 6:33 PM James McMahon  wrote:

> I have a text flowfile that I am trying to send to a translation service
> on a remote EC2 instance from my nifi instance on my EC2. I am failing
> with only this somewhat-cryptic error:
>
> InvokeHTTP[id=a72e1727-3da0-1d6c-164b-e43c1426fd97] Routing to Failure
> due to exception: Unexpected char 0x20 at 6 in header name: Socket Write
> Timeout: java.lang.IllegalArgumentException: Unexpected char 0x20 at 6 in
> header name: Socket Write Timeout
>
>
> What does this mean? Is what I am sending from InvokeHTTP employing a header 
> formatted in a way that is not expected?
>
>
> I am using InvokeHTTP version 1.16.3.
>
> Has anyone experienced a similar error?
>
>
>
>
>


Re: Hardware requirement for NIFI instance

2024-01-05 Thread Juan Pablo Gardella
Agree on using sane defaults.

On Fri, Jan 5, 2024 at 11:52 AM Mark Payne  wrote:

> Thanks for following up. That actually makes sense. I don’t think Output
> Batch Size will play a very big role here. But Fetch Size, if I understand
> correctly, is essentially telling the JDBC Driver “Here’s how many rows you
> should pull back at once.” And so it’s going to buffer all of those rows
> into memory until it has written out all of them.
>
> So if you set Fetch Size = 0, it’s going to pull back all rows in your
> database into memory. To be honest, I cannot imagine a single scenario
> where that’s desirable. We should probably set the default to something
> reasonable like 1,000 or 10,000 at most. And in 2.0, where we have the
> ability to migrate old configurations we should automatically change any
> config that has Fetch Size of 0 to the default value.
>
> @Matt Burgess, et al., any concerns with that?
>
> Thanks
> -Mark
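>
> A minimal JDBC sketch of what Fetch Size maps to under the covers (the
> URL and table name here are placeholders, not from this thread):
>
> import java.sql.*;
>
> class FetchSizeExample {
>     static void run(String jdbcUrl) throws SQLException {
>         try (Connection conn = DriverManager.getConnection(jdbcUrl);
>              Statement stmt = conn.createStatement()) {
>             // Hint: stream ~1,000 rows per round trip instead of
>             // buffering the entire result set in memory.
>             stmt.setFetchSize(1000);
>             try (ResultSet rs = stmt.executeQuery("SELECT * FROM my_table")) {
>                 while (rs.next()) { /* process one row at a time */ }
>             }
>         }
>     }
> }
>
> (With the PostgreSQL driver, a fetch size of 0 means fetch everything at
> once, and cursor-based fetching only applies when autocommit is off, which
> matches the "Set Auto Commit: false" setting mentioned below.)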
>
>
> On Jan 5, 2024, at 9:45 AM, e-soci...@gmx.fr wrote:
>
> So after some tests, here the result perhaps could help someone.
>
> With nifi (2 CPU / 8GB RAM)
>
> I have tested with these pairs of properties:
>
> > 1 executeSQL with "select * from table"
> Output Batch Size : 1
> Fetch Size : 10
>
> > 2 executeSQL with "select * from table"
> Output Batch Size : 1
> Fetch Size : 20
>
> > 3 executeSQL with "select * from table"
> Output Batch Size : 1
> Fetch Size : 40
> and started 5 executeSQL at the same time
>
> The 5 processors work perfectly and receive 5 avro files of the same size.
> And during the test, the memory is stable and the Web UI works perfectly
>
>
> FAILED TEST "OUT OF MEMORY" if the properties are :
>
> > 1 executeSQL with "select * from table"
> Output Batch Size : 0
> Fetch Size : 0
> Regards
>
>
> *Sent:* Friday, January 5, 2024 at 08:12
> *From:* "Matt Burgess" 
> *To:* users@nifi.apache.org
> *Subject:* Re: Hardware requirement for NIFI instance
> You may not need to merge if your Fetch Size is set appropriately. For
> your case I don't recommend setting Max Rows Per Flow File because you
> still have to wait for all the results to be processed before the
> FlowFile(s) get sent "downstream". Also if you set Output Batch Size
> you can't use Merge downstream as ExecuteSQL will send FlowFiles
> downstream before it knows the total count.
>
> If you have a NiFi cluster and not a standalone instance you MIGHT be
> able to represent your complex query using GenerateTableFetch and use
> a load-balanced connection to grab different "pages" of the table in
> parallel with ExecuteSQL. Those can be merged later as long as you get
> all the FlowFiles back to a single node. Depending on how complex your
> query is then it's a long shot but I thought I'd mention it just in
> case.
>
> Regards,
> Matt
>
>
> On Thu, Jan 4, 2024 at 1:41 PM Pierre Villard
>  wrote:
> >
> > You can merge multiple Avro flow files with MergeRecord with an Avro
> Reader and an Avro Writer
> >
> > Le jeu. 4 janv. 2024 à 22:05,  a écrit :
> >>
> >> And the important thing for us is to have only one avro file per table.
> >>
> >> So is it possible to merge avro files into one avro file?
> >>
> >> Regards
> >>
> >>
> >> Sent: Thursday, January 4, 2024 at 19:01
> >> From: e-soci...@gmx.fr
> >> To: users@nifi.apache.org
> >> Cc: users@nifi.apache.org
> >> Subject: Re: Hardware requirement for NIFI instance
> >>
> >> Hello all,
> >>
> >> Thanks a lot for the reply.
> >>
> >> So for more details.
> >>
> >> All the properties for the ExecuteSQL are set by default, except "Set
> Auto Commit: false".
> >>
> >> The SQL command could not be simpler than "select * from
> ${db.table.fullname}"
> >>
> >> The nifi version is 1.16.3 and 1.23.2
> >>
> >> I have also tested the same SQL command on another nifi (8 cores / 16GB
> RAM) and it is working.
> >> The result is the avro file with 1.6GB
> >>
> >> The detail about the output flowfile :
> >>
> >> executesql.query.duration
> >> 245118
> >> executesql.query.executiontime
> >> 64122
> >> executesql.query.fetchtime
> >> 180996
> >> executesql.resultset.index
> >> 0
> >> executesql.row.count
> >> 14961077
> >>
> >> File Size
> >> 1.62 GB
> >>
> >> Regards
> >>
> >> Minh
> >>
> >>
> >> Sent: Thursday, January 4, 2024 at 17:18
> >> From: "Matt Burgess" 
> >> To: users@nifi.apache.org
> >> Subject: Re: Hardware requirement for NIFI instance
> >> If I remember correctly, the default Fetch Size for Postgresql is to
> >> get all the rows at once, which can certainly cause the problem.
> >> Perhaps try setting Fetch Size to something like 1000 or so and see if
> >> that alleviates the problem.
> >>
> >> Regards,
> >> Matt
> >>
> >> On Thu, Jan 4, 2024 at 8:48 AM Etienne Jouvin 
> wrote:
> >> >
> >> > Hello.
> >> >
> >> > I also think the problem is more about the processor, I guess
> ExecuteSQL.
> >> >
> >> > You should play with the batch configuration and commit flag to commit
> intermediate FlowFiles.
> >> >
> >> > The out of memory exception makes me believe the full table is
> retrieved, and if 

Re: Expression Language does not work within QueryNifiReportingTask

2023-11-03 Thread Juan Pablo Gardella
Try using quotes around the hostname variable.
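
For example (a sketch: with quotes the statement parses whether or not the
expression has been substituted yet, since the SQL parser then sees a plain
string literal rather than a bare "$"):

SELECT *,
   'myCluster' AS clusterName,
   '${hostname(true)}' AS hostname
FROM PROCESSOR_STATUS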

On Fri, Nov 3, 2023 at 6:09 AM Doğukan Levendoğlu | Obase <
dogukan.levendo...@obase.com> wrote:

> Apologies for the horrible image quality:
>
>
>
> *From:* Doğukan Levendoğlu | Obase
> *Sent:* 03 November 2023 12:06
> *To:* 'users@nifi.apache.org' 
> *Subject:* Expression Language does not work within
> QueryNifiReportingTask
>
>
>
> Hello,
>
>
>
> I’m trying to add additional fields to the query results obtained by
> QueryNifiReportingTask like below:
>
>
>
> SQL Query property in QueryNifiReportingTask indicates that it supports
> the expression language. My understanding is that the query needs to be
> evaluated before execution. However I am getting this error (which tells
> me that’s not what’s happening):
>
>
>
> QueryNiFiReportingTask[id=8fbb9a3a-018b-1000--bb48d6d7] Error
> processing the query due to java.sql.SQLException: Error while preparing
> statement [SELECT
>
>*,
>
>'myCluster' as clusterName,
>
>${hostname(true)} as 'hostname'
>
> FROM PROCESSOR_STATUS]:
> org.apache.nifi.reporting.sql.MetricsSqlQueryService$PreparedStatementException:
> java.sql.SQLException: Error while preparing statement [SELECT
>
>*,
>
>'myCluster' as clusterName,
>
>${hostname(true)} as 'hostname'
>
> FROM PROCESSOR_STATUS]
>
> - Caused by: java.sql.SQLException: Error while preparing statement
> [SELECT
>
>*,
>
>'myCluster' as clusterName,
>
>${hostname(true)} as 'hostname'
>
> FROM PROCESSOR_STATUS]
>
> - Caused by: java.lang.RuntimeException: parse failed: Encountered "$" at
> line 4, column 2.
>
> Was expecting one of:
>
> "ABS" ...
>
> "ARRAY" ...
>
> "AVG" ...
>
> "CARDINALITY" ...
>
> "CASE" ...
>
> "CAST" ...
>
> "CEIL" ...
>
> "CEILING" ...
>
> "CHAR" ...
>
> .
>
> .
>
> .
>
>
>
> We want to be able to monitor some processors on a per node basis. Is
> there a cleaner way to do this? I am on version 1.23.2.
>
>
>
> Thank you,
>
> Dogukan
>


Re: Help : LoadBalancer

2023-09-06 Thread Juan Pablo Gardella
List all servers you need.

server server1 "${NIFI_INTERNAL_HOST1}":8443 ssl
server server2 "${NIFI_INTERNAL_HOST2}":8443 ssl
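
A fuller sketch for a three-node cluster (hostnames, ports, and the
certificate path are placeholders; cookie stickiness is assumed so a UI
session keeps hitting the same node, and each node's nifi.web.proxy.host
must list the proxy's host:port):

frontend http-in
    bind *:8443 ssl crt /etc/haproxy/site.pem
    acl prefixed-with-nifi path_beg /nifi
    use_backend nifi if prefixed-with-nifi
    option forwardfor

backend nifi
    balance roundrobin
    # pin a UI session to one node
    cookie SERVERID insert indirect nocache
    server nifi01 nifi01:8443 ssl verify none cookie nifi01
    server nifi02 nifi02:8443 ssl verify none cookie nifi02
    server nifi03 nifi03:8443 ssl verify none cookie nifi03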


On Wed, Sep 6, 2023 at 10:35 AM Minh HUYNH  wrote:

> Thanks a lot for the reply.
>
> Concerning redirection for one node, it is OK, we got it.
>
> But how do we configure nifi and haproxy to point to the cluster nodes, for instance
> cluster nodes "nifi01, nifi02, nifi03"
>
> regards
>
> Minh
>
>
>
> *Sent:* Wednesday, September 6, 2023 at 15:29
> *From:* "Juan Pablo Gardella" 
> *To:* users@nifi.apache.org
> *Subject:* Re: Help : LoadBalancer
> I did that multiple times. Below is how I configured it:
>
> frontend http-in
> # bind ports section
> acl prefixed-with-nifi path_beg /nifi
> use_backend nifi if prefixed-with-nifi
> option forwardfor
>
> backend nifi
> server server1 "${NIFI_INTERNAL_HOST}":8443 ssl
>
>
>
> On Wed, Sep 6, 2023 at 9:40 AM Minh HUYNH  wrote:
>
>>
>> Hello,
>>
>> I have been trying for a long time to configure a nifi cluster behind the
>> haproxy/loadbalancer,
>> but until now it has always failed.
>> I only get access to the welcome page of nifi; all other links
>> fail.
>>
>> If someone has the configuration, it would be helpful.
>>
>> Thanks a lot
>>
>> Regards
>>
>
>
>



Re: Nifi Registry v1.22.0 - Unable to create buckets over http behind proxy

2023-06-19 Thread Juan Pablo Gardella
Ignore, I found the option is available after pressing the settings button.

On Mon, Jun 19, 2023 at 11:04 PM Juan Pablo Gardella <
gardellajuanpa...@gmail.com> wrote:

> Hi,
>
> I am testing the nifi registry behind a proxy and it is unable to create
> buckets. The last time I tested, using v1.14.0, it worked fine. I am checking
> the logs, but do you remember any change that prevents that? The logs show
>
> 2023-06-20 01:43:30,663 INFO [NiFi Registry Web Server-14]
> o.a.n.r.w.m.IllegalStateExceptionMapper java.lang.IllegalStateException:
> Access tokens are only issued over HTTPS. Returning Conflict response.
> 2023-06-20 01:43:30,898 INFO [NiFi Registry Web Server-19]
> o.a.n.r.w.m.IllegalStateExceptionMapper java.lang.IllegalStateException:
> User authentication/authorization is only supported when running over
> HTTPS.. Returning Conflict response.
>
> From the browser I see the following requests:
> https://serverhost/nifi-registry-api/access/token/kerberos
>
> Do you know how to make it work with HTTP again?
>
> Thanks
>
>




Re: ExecuteSQL not working

2023-05-13 Thread Juan Pablo Gardella
Instead of

SELECT MAX(ORDEN)
FROM demo_planta2.dbo.ORDEN_VENTA_CAB

use an alias on the projection column:

SELECT MAX(ORDEN) foo
FROM demo_planta2.dbo.ORDEN_VENTA_CAB

Without an alias, the aggregate likely comes back with an empty column name,
which ExecuteSQL's Avro schema generation rejects (hence the "empty name"
error).

Juan

On Fri, May 12, 2023 at 8:46 PM scott  wrote:

> Hello Luis, Juan Pablo,
> I think I'm having the same issue with ExecuteSQL to connect with Azure
> serverless SQL endpoint using Active Directory. I have downloaded the
> official microsoft jdbc driver and all dependent jars, and my results seem
> the same as Luis reported. I am not sure what was done here to resolve the
> issue, when you said to add the column alias name. Can someone help clarify
> what was done to fix this?
>
> Thanks,
> Scott
>
> On Fri, May 8, 2020 at 11:46 AM Luis Carmona 
> wrote:
>
>>
>>
>> Thanks Juan Pablo.
>>
>> It did work !!
>>
>> Thanks.
>>
>> LC
>>
>>
>>
>> On Fri, 2020-05-08 at 13:54 -0300, Juan Pablo Gardella wrote:
>> > Try again by adding a column alias name tonthe results.
>> >
>> > On Fri, May 8, 2020, 12:21 PM Luis Carmona 
>> > wrote:
>> > > Hi juan Pablo,
>> > >
>> > > I did, but jTDS was the only way to achieve the connection. The
>> > > official jdbc driver always issued errors about TLS protocol
>> > > problems.
>> > >
>> > > After some reading, it seems to be because the SQL server is too
>> > > old.
>> > >
>> > > And with jTDS I got the connection, and was able to execute
>> > > Database
>> > > List Tables. But the processor ExecuteSQL is not working.
>> > >
>> > > Regards,
>> > >
>> > > LC
>> > >
>> > >
>> > >
>> > >
>> > > On Fri, 2020-05-08 at 02:27 -0300, Juan Pablo Gardella wrote:
>> > > > Did you try using mssql official jdbc driver?
>> > > >
>> > > > On Fri, 8 May 2020 at 01:34, Luis Carmona <
>> > > lcarm...@openpartner.cl>
>> > > > wrote:
>> > > > > Hi everyone,
>> > > > >
>> > > > > I am trying to execute a query to an MS SQL Server, through
>> > > jTDS
>> > > > > driver, but can't figure why is it giving me error all the
>> > > time.
>> > > > >
>> > > > > If I leave the processor as it is, setting the controller service
>> > > > > obviously, it throws the error in the image saying "empty name".
>> > > > >
>> > > > > If I set the processor with Normalize Table/Columns and Use
>> > > Avro
>> > > > > Types
>> > > > > to TRUE, then it throws the error in the image saying "Index out
>> > > of
>> > > > > range"
>> > > > >
>> > > > > The query is as simple as this:
>> > > > >
>> > > > > SELECT MAX(ORDEN)
>> > > > > FROM demo_planta2.dbo.ORDEN_VENTA_CAB
>> > > > >   WHERE
>> > > > >   CODEMPRESA=2
>> > > > >   AND
>> > > > >   CODTEMPO=1;
>> > > > >
>> > > > > Please, any tip about what could be wrong in my settings.
>> > > > >
>> > > > > Regards,
>> > > > >
>> > > > > LC
>> > > > >
>> > > > >
>> > >
>>
>>


Re: Is it possible to completely disable RPGs, not just transmission?

2022-05-04 Thread Juan Pablo Gardella
There is a property in nifi.properties with which you can start nifi with all
processors in a stopped state. I could not find it, but maybe it is useful
for that scenario.
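
A sketch of the setting this likely refers to (I believe it is the flow
controller's auto-resume flag; verify against the admin guide for your
version):

# nifi.properties: when false, all components start in a stopped state
# instead of resuming their last known state
nifi.flowcontroller.autoResumeState=false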

On Wed, May 4, 2022 at 5:14 AM Isha Lamboo 
wrote:

> Hi all,
>
>
>
> Is there a way to stop disabled Remote Process Groups from continually
> contacting the remote to update the contents?
>
>
>
> I’m migrating a cluster with hundreds of Remote Process Groups and the
> moment I start up the new cluster with all flows stopped/disabled, the RPGs
> all start contacting the remote, regardless of the RPG’s status.
>
> This results in various errors since firewall ports and remote nifi
> policies are not yet in place. I’m worried about the http threads on the
> remote NiFi cluster being overloaded and locally, all other errors are
> drowned out by the RPG errors.
>
>
>
> This seems like a limitation in the Remote Process Group. I can’t even see
> the enabled/disabled status while the RPG is failing to update from the
> remote instance.
>
>
>
> Regards,
>
>
>
> Isha
>
>
>


Re: InvokeHTTP vs invalid SSL certificates

2022-03-04 Thread Juan Pablo Gardella
You can set up an SSLManager to ignore all errors,
but it is not safe. Another option may be to put a reverse proxy in the middle
plus some manipulation to extract the URL, for example myproxy?connectTo=URL,
and the proxy handles that. These are some quick ideas for a POC.



On Fri, Mar 4, 2022 at 12:35 PM Jean-Sebastien Vachon <
jsvac...@brizodata.com> wrote:

> Thanks David for the information.
>
> My main issue is that we are doing massive web scraping (over 400k
> websites and growing) and I cannot just add each certificate manually.
> I can probably automate most of it but I wanted to see what options were
> available to me.
>
> Thanks again. I will look into this.
>
>
> *Jean-Sébastien Vachon *
> Co-Founder & Architect
>
>
> *Brizo Data, Inc. www.brizodata.com*
> --
> *From:* David Handermann 
> *Sent:* Friday, March 4, 2022 9:16 AM
> *To:* users@nifi.apache.org 
> *Subject:* Re: InvokeHTTP vs invalid SSL certificates
>
> Thanks for raising this question.  The InvokeHTTP processor relies on the
> OkHttp client library, which implements standard TLS handshaking and
> hostname verification as described in their documentation:
>
> https://square.github.io/okhttp/features/https/
>
> There are many things that could make a certificate invalid for a specific
> connection.  If the remote certificate is self-signed, it is possible to
> configure a NiFi SSL Context Service with a trust store that includes the
> self-signed certificate.
>
> If the remote certificate is expired, the remote server must be updated
> with a new certificate.  If the remote certificate does not include a DNS
> Subject Alternative Name (SAN) matching the domain name that InvokeHTTP
> uses for the connection, the best solution is for the remote server to be
> updated with a new certificate containing a matching SAN.
>
> It is possible to configure OkHttp with a custom hostname verifier or
> trust manager that ignores some of these attributes, but this would require
> custom code that overrides the default behavior of InvokeHTTP.  There have
> been some requests in the past for NiFi to implement support for a custom
> hostname verifier, but this approach breaks one of the fundamental aspects
> of TLS communication security.
>
> With that background, the potential solution depends on why InvokeHTTP
> considers the certificate invalid.
>
> Regards,
> David Handermann
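>
> As a concrete (and deliberately insecure) illustration of the custom code
> mentioned above, a minimal OkHttp client that trusts any certificate and
> skips hostname verification could look like the sketch below. This is not
> something InvokeHTTP exposes, and it is not safe for production:
>
> import java.security.SecureRandom;
> import java.security.cert.X509Certificate;
> import javax.net.ssl.SSLContext;
> import javax.net.ssl.TrustManager;
> import javax.net.ssl.X509TrustManager;
> import okhttp3.OkHttpClient;
>
> class TrustAllClient {
>     static OkHttpClient build() throws Exception {
>         // Trust manager that accepts every certificate chain.
>         X509TrustManager trustAll = new X509TrustManager() {
>             public void checkClientTrusted(X509Certificate[] chain, String authType) {}
>             public void checkServerTrusted(X509Certificate[] chain, String authType) {}
>             public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
>         };
>         SSLContext ctx = SSLContext.getInstance("TLS");
>         ctx.init(null, new TrustManager[] { trustAll }, new SecureRandom());
>         return new OkHttpClient.Builder()
>                 .sslSocketFactory(ctx.getSocketFactory(), trustAll)
>                 .hostnameVerifier((hostname, session) -> true) // skip hostname check
>                 .build();
>     }
> }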
>
> On Fri, Mar 4, 2022 at 6:59 AM Jean-Sebastien Vachon <
> jsvac...@brizodata.com> wrote:
>
> Hi all,
>
> what is the best way to deal with invalid SSL certificates when trying to
> open an URL using InvokeHTTP?
>
>
> Thanks
>
>
> *Jean-Sébastien Vachon *
> Co-Founder & Architect
>
>
> *Brizo Data, Inc. www.brizodata.com*
>
>


Re: nifi-app log rotation

2022-01-18 Thread Juan Pablo Gardella
Hi Emmanuel,

That is not related to Nifi itself. I suggest checking the logback manual, or
checking whether the disk partition where you are storing the logs is full.

Juan
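
If the disk is fine, a sketch of the likely cause (assuming the default NiFi
logback.xml, where the app-log appender uses a size-and-time-based triggering
policy): logback requires the %i token in the fileNamePattern whenever
maxFileSize is configured, while nifi-user.log rolls with a plain time-based
policy and therefore accepts patterns without %i. A compressing pattern that
keeps %i would look like:

<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log.gz</fileNamePattern>
    <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <maxFileSize>100MB</maxFileSize>
    </timeBasedFileNamingAndTriggeringPolicy>
    <maxHistory>30</maxHistory>
</rollingPolicy>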

On Tue, 18 Jan 2022 at 06:54, QUEVILLON EMMANUEL - EXT-SAFRAN ENGINEERING
SERVICES (SAFRAN)  wrote:

> Hi guys,
>
>
>
> I’m trying to configure the nifi-app.log rotation process by decreasing
> the rotation time and compressing the final rotated file.
>
> To do this, I’ve updated ‘conf/logback.xml’ file by this default
> configuration:
>
>
>
> <appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
>
>    …
>
>    <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
>
>    …
>
> </appender>
>
>
>
> To this one
>
>
>
> <appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
>
>    …
>
>    <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d.log.gz</fileNamePattern>
>
>    …
>
> </appender>
>
>
>
> However, since I’ve updated `conf/logback.xml`, nifi-app.log is not
> updated anymore and no new file is created.
>
> I’ve tried different patterns and the following are working ok (logs append
> to nifi-app.log):
>
>
>
> 1. 'nifi-app_%d.%i.log.gz'
>
> 2. 'nifi-app_%d{yyyy-MM-dd_HH}.%i.log.gz'
>
>
>
> But neither 'nifi-app_%d{yyyy-MM-dd_HH}.log.gz' nor 'nifi-app_%d.log.gz' is
> working.
>
>
>
> Is there a reason for that? Sounds like the ‘%i’ in the fileNamePattern is
> required, whereas it is not the case for ‘nifi-user.log’, for example; the
> configuration ‘nifi-app_%d.log.gz’ is working ok, logs are appended into
> ‘nifi-user.log’
>
>
>
> Thanks for any help
>
>
>
> Emmanuel
>
>


Re: ConsumeJMS causing multiple BIND/UNBIND request

2022-01-13 Thread Juan Pablo Gardella
Probably you faced https://issues.apache.org/jira/browse/NIFI-7563. Check a
newer version.

On Thu, 13 Jan 2022 at 13:02, Joe Witt  wrote:

> Solace is a commonly used JMS provider with NiFi and it is very likely
> such an issue has been addressed.  Please test/attempt this on a newer
> version of NiFi.
>
> Fixed related to JMS since 1.11
> https://issues.apache.org/jira/browse/NIFI-9239?jql=project%20%3D%20NIFI%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20text%20~%20%22JMS%22%20AND%20fixVersion%20%3E%201.11.0%20ORDER%20BY%20fixVersion%20DESC
>
> Thanks
>
> On Thu, Jan 13, 2022 at 8:29 AM nayan sharma 
> wrote:
>
>> NiFi 1.11 .0
>>
>> On Thu, 13 Jan 2022 at 18:27, Juan Pablo Gardella <
>> gardellajuanpa...@gmail.com> wrote:
>>
>>> Which Nifi version are you using?
>>>
>>> On Thu, 13 Jan 2022 at 09:14, nayan sharma 
>>> wrote:
>>>
>>>> Hi All,
>>>>
>>>>
>>>>
>>>> We are using consumeJMS to consume messages from the Solace system
>>>> from multiple queues, but we are getting multiple bind/unbind requests.
>>>>
>>>> We are getting *SYSTEM_LOGGING_LOST_EVENT* alerts on the solace
>>>> production appliance, leading to the buffer pool being exhausted.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> We have seen high *CLIENT_CLIENT_UNBIND* request counts from
>>>> applications, and that leads to very high solace logging events.
>>>>
>>>>
>>>> Any suggestion would be appreciated.
>>>>
>>>>
>>>> Thanks & Regards,
>>>> Nayan Sharma
>>>>  *+91-8095382952*
>>>>
>>>> <https://www.linkedin.com/in/nayan-sharma>
>>>> <http://stackoverflow.com/users/3687426/nayan-sharma?tab=profile>
>>>>
>>> --
>> Thanks & Regards,
>> Nayan Sharma
>>  *+91-8095382952*
>>
>> <https://www.linkedin.com/in/nayan-sharma>
>> <http://stackoverflow.com/users/3687426/nayan-sharma?tab=profile>
>>
>




Re: Some JMS unit tests are failing since 1.14.0

2021-09-18 Thread Juan Pablo Gardella
Nifi 1.13.2 tests run fine.

On Sat, 18 Sept 2021 at 12:05, Juan Pablo Gardella <
gardellajuanpa...@gmail.com> wrote:

> Hi all,
>
> Just discovered yesterday that some tests are failing since 1.14.0,
> details at https://issues.apache.org/jira/browse/NIFI-9225. Be aware if
> you are using JMS be careful. It is required to review if failing tests are
> false positive or actually a bug was introduced at JMS processors.
>
> Juan
>




Re: NiFi 1.14.0 OpenJDK

2021-08-20 Thread Juan Pablo Gardella
AFAIK Nifi supports JDK 1.8; other JDK versions probably will not work.
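
(NiFi 1.14.0 is built for Java 8 and 11; the stack trace below is the Java 16
module system blocking the reflective access that Snappy needs.) As an
unsupported workaround sketch, the package can be re-opened via
conf/bootstrap.conf -- the argument index here is arbitrary, pick an unused one:

# conf/bootstrap.conf -- workaround only; running on Java 8/11 is the supported fix
java.arg.20=--add-opens=java.base/java.lang=ALL-UNNAMED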

On Fri, 20 Aug 2021 at 09:24,  wrote:

> I've installed NiFi 1.14.0 on a Windows Server 2019 machine with OpenJDK
> 16.0.2.
> No changes to properties or anything, just unzipped these and set the PATH
> and JAVA_HOME environment variables.
> I execute the run-nifi.bat file and it runs for a minute and exits.
>
> The nifi-bootstrap.log file contains this:
>
>
> 2021-08-19 19:29:25,531 INFO [NiFi Bootstrap Command Listener]
> org.apache.nifi.bootstrap.RunNiFi Apache NiFi now running and listening for
> Bootstrap requests on port 62463
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr java.lang.reflect.InaccessibleObjectException:
> Unable to make protected final java.lang.Class
> java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
> throws java.lang.ClassFormatError accessible: module java.base does not
> "opens java.lang" to unnamed module @6e6ec71f
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:357)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at
> java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:199)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at
> java.base/java.lang.reflect.Method.setAccessible(Method.java:193)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at
> org.xerial.snappy.SnappyLoader.injectSnappyNativeLoader(SnappyLoader.java:275)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at
> org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:227)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at org.xerial.snappy.Snappy.<clinit>(Snappy.java:48)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at
> org.apache.nifi.processors.hive.PutHiveStreaming.<clinit>(PutHiveStreaming.java:158)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at java.base/java.lang.Class.forName0(Native Method)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at java.base/java.lang.Class.forName(Class.java:466)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at
> org.apache.nifi.nar.StandardExtensionDiscoveringManager.getClass(StandardExtensionDiscoveringManager.java:328)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at
> org.apache.nifi.documentation.DocGenerator.documentConfigurableComponent(DocGenerator.java:100)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at
> org.apache.nifi.documentation.DocGenerator.generate(DocGenerator.java:65)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at
> org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1126)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at org.apache.nifi.NiFi.<init>(NiFi.java:159)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at org.apache.nifi.NiFi.<init>(NiFi.java:71)
> 2021-08-19 19:30:53,137 ERROR [NiFi logging handler]
> org.apache.nifi.StdErr at org.apache.nifi.NiFi.main(NiFi.java:303)
> 2021-08-19 19:30:53,215 INFO [main] org.apache.nifi.bootstrap.RunNiFi NiFi
> never started. Will not restart NiFi
>
>
>
> Maybe this is an OpenJDK issue?
>
> TBH, I haven't spent nearly enough time reading the docs at this point but
> I thought it might just run out of the box.  So, it could be operator error
> on my part too.
>
> Thoughts?
>
> Thanks,
>
> Matt
>


Re: Nifi cluster restart

2021-07-30 Thread Juan Pablo Gardella
Thanks Chris,

I am not very clear on the approach to restarting the entire cluster using
systemd. You are operating at the node level with systemd, if I understand
correctly; how will you restart the whole cluster?

Thanks,
Juan

On Fri, 30 Jul 2021 at 10:02, Chris McKeever  wrote:

> We start it with systemd, and have timed jobs that stop nodes at staggered
> offsets -- systemd then sees the failed job and spins it back up
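>
> A minimal systemd unit sketch for this pattern (paths and user are
> placeholders; nifi.sh forks, hence Type=forking):
>
> [Unit]
> Description=Apache NiFi
> After=network.target
>
> [Service]
> Type=forking
> User=nifi
> ExecStart=/opt/nifi/bin/nifi.sh start
> ExecStop=/opt/nifi/bin/nifi.sh stop
> Restart=on-failure
>
> [Install]
> WantedBy=multi-user.target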
>
> On Fri, Jul 30, 2021 at 7:54 AM Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
>> Hi devs,
>>
>> Which mechanism are you using to restart the cluster? Apart from using
>> ambari, any other suggestions?
>>
>> Thanks,
>> Juan
>>
>




Re: NiFi Queue Monitoring

2021-07-27 Thread Juan Pablo Gardella
+1

On Tue, 27 Jul 2021 at 14:00, Joe Witt  wrote:

> Scott
>
> This sounds pretty darn cool.  Any chance you'd be interested in
> kicking out a blog on it?
>
> Thanks
>
> On Tue, Jul 27, 2021 at 9:58 AM scott  wrote:
> >
> > Matt/all,
> > I was able to solve my problem using the QueryNiFiReportingTask with
> "SELECT * FROM CONNECTION_STATUS WHERE isBackPressureEnabled = true" and
> the new LoggingRecordSink as you suggested. Everything is working
> flawlessly now. Thank you again!
> >
> > Scott
> >
> > On Wed, Jul 21, 2021 at 5:09 PM Matt Burgess 
> wrote:
> >>
> >> Scott,
> >>
> >> Glad to hear it! Please let me know if you have any questions or if
> >> issues arise. One thing I forgot to mention is that I think
> >> backpressure prediction is disabled by default due to the extra
> >> consumption of CPU to do the regressions, make sure the
> >> "nifi.analytics.predict.enabled" property in nifi.properties is set to
> >> "true" before starting NiFi.
> >>
> >> Regards,
> >> Matt
> >>
> >> On Wed, Jul 21, 2021 at 7:21 PM scott  wrote:
> >> >
> >> > Excellent! Very much appreciate the help and for setting me on the
> right path. I'll give the queryNiFiReportingTask code a try.
> >> >
> >> > Scott
> >> >
> >> > On Wed, Jul 21, 2021 at 3:26 PM Matt Burgess 
> wrote:
> >> >>
> >> >> Scott et al,
> >> >>
> >> >> There are a number of options for monitoring flows, including
> >> >> backpressure and even backpressure prediction:
> >> >>
> >> >> 1) The REST API for metrics. As you point out, it's subject to the
> >> >> same authz/authn as any other NiFi operation and doesn't sound like
> it
> >> >> will work out for you.
> >> >> 2) The Prometheus scrape target via the REST API. The issue would be
> >> >> the same as #1 I presume.
> >> >> 3) PrometheusReportingTask. This is similar to the REST scrape target
> >> >> but isn't subject to the usual NiFi authz/authn stuff, however it
> does
> >> >> support SSL/TLS for a secure solution (and is also a "pull" approach
> >> >> despite it being a reporting task)
> >> >> 4) QueryNiFiReportingTask. This is not included with the NiFi
> >> >> distribution but can be downloaded separately, the latest version
> >> >> (1.14.0) is at [1]. I believe this is what Andrew was referring to
> >> >> when he mentioned being able to run SQL queries over the information,
> >> >> you can do something like "SELECT * FROM
> CONNECTION_STATUS_PREDICTIONS
> >> >> WHERE predictedTimeToBytesBackpressureMillis < 1". This can be
> >> >> done either as a push or pull depending on the Record Sink you
> choose.
> >> >> A SiteToSiteReportingRecordSink, KafkaRecordSink, or
> LoggingRecordSink
> >> >> results in a push (to NiFi, Kafka, or nifi-app.log respectively),
> >> >> where a PrometheusRecordSink results in a pull the same as #2 and #3.
> >> >> There's even a ScriptedRecordSink where you can write your own script
> >> >> to put the results where you want them.
> >> >> 5) The other reporting tasks. These have been mentioned frequently in
> >> >> this thread so no need for elaboration here :)
> >> >>
> >> >> Regards,
> >> >> Matt
> >> >>
> >> >> [1]
> https://repository.apache.org/content/repositories/releases/org/apache/nifi/nifi-sql-reporting-nar/1.14.0/
> >> >>
> >> >> On Wed, Jul 21, 2021 at 5:58 PM scott  wrote:
> >> >> >
> >> >> > Great comments all. I agree with the architecture comment about
> push monitoring. I've been monitoring applications for more than 2 decades
> now, but sometimes you have to work around the limitations of the
> situation. It would be really nice if NiFi had this logic built-in, and
> frankly I'm surprised it is not yet. I can't be the only one who has had to
> deal with queues filling up, causing problems downstream. NiFi certainly
> knows that the queues fill up, they change color and execute back-pressure
> logic. If it would just do something simple like write a log/error message
> to a log file when this happens, I would be good.
> >> >> > I have looked at the new metrics and reporting tasks but still
> haven't found the right thing to do to get notified when any queue in my
> instance fills up. Are there any examples of using them for a similar task
> you can share?
> >> >> >
> >> >> > Thanks,
> >> >> > Scott
> >> >> >
> >> >> > On Wed, Jul 21, 2021 at 11:29 AM u...@moosheimer.com <
> u...@moosheimer.com> wrote:
> >> >> >>
> >> >> >> In general, it is a bad architecture to do monitoring via pull
> requests. You should always push. I recommend a look at the book "The Art of
> Monitoring" by James Turnbull.
> >> >> >>
> >> >> >> I also recommend the very good articles by Pierre Villard on the
> subject of NiFi monitoring at
> https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/.
> >> >> >>
> >> >> >> Hope this helps.
> >> >> >>
> >> >> >> Mit freundlichen Grüßen / best regards
> >> >> >> Kay-Uwe Moosheimer
> >> >> >>
> >> >> >> Am 21.07.2021 um 16:45 schrieb Andrew Grande  >:
> >> >> >>
> >> >> >> 
> >> >> >> Can't you leverage some of the recent nif

Re: Nifi 1.14.0 - Using docker and Single User Credentials

2021-07-21 Thread Juan Pablo Gardella
Please ignore, my error during applying the patch (see below).
+joey.fra...@icloud.com  worked fine the patch!
Thank you!!

It works fine! I had copied the patched scripts to a different location:

COPY --chown=nifi:nifi start.sh /opt/nifi/scripts/scripts/start.sh
COPY --chown=nifi:nifi secure.sh /opt/nifi/scripts/scripts/secure.sh
RUN chmod u+x /opt/nifi/scripts/start.sh /opt/nifi/scripts/secure.sh

Juan

On Wed, 21 Jul 2021 at 10:39, Juan Pablo Gardella <
gardellajuanpa...@gmail.com> wrote:

> Hi,
>
> I tried the patch by adjusting the image, and it seems it is not working
> if it is running behind a proxy.
>
> COPY --chown=nifi:nifi start.sh /opt/nifi/scripts/scripts/start.sh
> COPY --chown=nifi:nifi secure.sh /opt/nifi/scripts/scripts/secure.sh
>
> Nifi starts but I am unable to access it when it runs behind a proxy.
>
> System Error The request contained an invalid host header [
> publichostname:8444] in the request [/nifi/]. Check for request
> manipulation or third-party intercept. Valid host headers are [empty] or:
>
>- 127.0.0.1
>- 127.0.0.1:8443
>- localhost
>- localhost:8443
>- [::1]
>- [::1]:8443
>- 3cdcc5c8b343
>- 3cdcc5c8b343:8443
>- 172.18.0.2
>- 172.18.0.2:8443
>
>
By adjusting the image as:
>
> environment:
>   SINGLE_USER_CREDENTIALS_USERNAME: ${SINGLE_USER_CREDENTIALS_USERNAME}
>   SINGLE_USER_CREDENTIALS_PASSWORD: ${SINGLE_USER_CREDENTIALS_PASSWORD}
>   NIFI_SENSITIVE_PROPS_KEY: ${NIFI_SENSITIVE_PROPS_KEY}
>   NIFI_WEB_HTTPS_HOST: ${NIFI_INTERNAL_HOST}
>   NIFI_WEB_PROXY_HOST: ${PUBLIC_HOSTNAME}:${NIFI_PUBLIC_PORT}
>
> It seems the certificate is not created properly when nifi is running
> behind a proxy.
>
> Juan
>
> On Sun, 18 Jul 2021 at 22:56, Joey Frazee  wrote:
>
>> Yeah, this wasn’t being handled right anymore. I put up a PR for this on
>> Friday.
>>
>> https://github.com/apache/nifi/pull/5226
>>
>> If you can give it a test that’d be a big help.
>>
>> Best,
>>
>> -joey
>>
>> On Jul 18, 2021, at 6:50 PM, Juan Pablo Gardella <
>> gardellajuanpa...@gmail.com> wrote:
>>
>> 
>> Hello all,
>>
>> I am trying *Single User Credentials* with Docker but it does not work
>> because it does not allow set up nifi.web.proxy.host[1] variable. The
>> start script disallow setting the host:
>>
>> if [ -n "${SINGLE_USER_CREDENTIALS_USERNAME}" ] && [ -n "
>> ${SINGLE_USER_CREDENTIALS_PASSWORD}" ]; then
>> ${NIFI_HOME}/bin/nifi.sh set-single-user-credentials "
>> ${SINGLE_USER_CREDENTIALS_USERNAME}" "${SINGLE_USER_CREDENTIALS_PASSWORD}
>> "
>> fi
>>
>> . "${scripts_dir}/update_cluster_state_management.sh"
>>
>> # Check if we are secured or unsecured
>> case ${AUTH} in
>> tls)
>> echo 'Enabling Two-Way SSL user authentication'
>> . "${scripts_dir}/secure.sh"
>> ;;
>> ldap)
>> echo 'Enabling LDAP user authentication'
>> # Reference ldap-provider in properties
>> export NIFI_SECURITY_USER_LOGIN_IDENTITY_PROVIDER="ldap-provider"
>>
>> . "${scripts_dir}/secure.sh"
>> . "${scripts_dir}/update_login_providers.sh"
>> ;;
>> *)
>> if [ ! -z "${NIFI_WEB_PROXY_HOST}" ]; then
>> echo 'NIFI_WEB_PROXY_HOST was set but NiFi is not configured to run in a
>> secure mode. Will not update nifi.web.proxy.host.'
>> fi
>> ;;
>> esac
>>
>> Why does the echo print that is not in secure mode?
>>
>> Thanks,
>> Juan
>> [1]
>>
>> A comma separated list of allowed HTTP Host header values to consider
>> when NiFi is running securely and will be receiving requests to a different
>> host[:port] than it is bound to. For example, when running in a Docker
>> container or behind a proxy (e.g. localhost:18443, proxyhost:443). By
>> default, this value is blank meaning NiFi should only allow requests sent
>> to the host[:port] that NiFi is bound to.
>>
>>
>>



Re: Nifi Registry GitFlowPersistenceProvider

2021-06-14 Thread Juan Pablo Gardella
I use DatabaseFlowPersistenceProvider, it is the simpler one. I spent some
time like you with the Git option but I gave up and chose the DB option.
It works fine. As you already require a DB for metadata, I use only one DB
for both metadata and flows. Only one backup is required.
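
The corresponding provider entry, as a sketch (in nifi-registry's
conf/providers.xml; flow content is then stored in the same database that is
configured for the metadata):

<flowPersistenceProvider>
    <class>org.apache.nifi.registry.provider.flow.DatabaseFlowPersistenceProvider</class>
</flowPersistenceProvider>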

Juan

On Mon, 14 Jun 2021 at 10:25, Chris McKeever  wrote:

> The 1 git to 1 registry doesn't really help though, does it? Since you want
> a production registry to always pull? You'll need to maintain 2 flows
> manually (or however you do it)?
>
> On Mon, Jun 14, 2021 at 8:15 AM Sim, Yoosuk  wrote:
>
>> Hello everyone,
>>
>>
>>
>> Well, I have a little bit more updates to the issue.
>>
>> I did find that I was mistaken about pulling: nifi registry’s local git
>> does pull from remote git repo. When I checked the nifi-registry’s local
>> git storage, multiple ones were fully synced with remote git. However, even
>> though they were synced, nifi-registry itself did not recognize buckets
>> that were not created by the nifi-registry (I think), which meant only a
>> subset of the folders in the repo was shown as buckets by the nifi import
>> process.
>>
>>
>>
>> The only way to resolve it yet was to do something similar to
>> Hesselmann’s method: we dropped the metadata backend database in postgresql
>> and redeployed nifi-registry to repopulate the metadata. This worked, but it
>> certainly does introduce complications in the development process.
>>
>>
>>
>> At the moment, I am more inclined to give one git dedicated to one nifi
>> registry, unless I can find a simpler way to handle nifi-registry sync
>> issue.
>>
>>
>>
>> Any suggestions?
>>
>>
>>
>> Cheers,
>>
>>
>>
>> Tony Sim
>>
>>
>>
>> *From:* Hesselmann, Brian 
>> *Sent:* June-07-21 3:58 PM
>> *To:* users@nifi.apache.org
>> *Subject:* [EXT]Re: Nifi Registry GitFlowPersistenceProvider
>>
>>
>>
>> The easiest way I found to sync nifi registry from Git is by deleting the
>> registry database file (should be in /nifi-registry/database/*.db) and
>> restarting nifi-registry. After that it should fully reflect all changes in
>> the git repository. Basically our process is something like: push changes
>> to nifi registry on dev branch, merge from dev to master branch, delete
>> nifi registry database on master and restart nifi registry.
>>
>> I'm sure there must be a nicer way to do this, but so far this has worked
>> for us until we can spent more time on implementing the registry.
>> --
>>
>> *From:* Chris McKeever 
>> *Sent:* Monday, June 7, 2021 16:49:21
>> *To:* users@nifi.apache.org
>> *Subject:* Re: Nifi Registry GitFlowPersistenceProvider
>>
>>
>>
>>
>>
>>
>> Tony - did you ever get an answer on this?
>>
>>
>>
>> On Fri, Jun 4, 2021 at 9:04 AM Chris McKeever 
>> wrote:
>>
>> oooh, this is interesting ... I know only one registry could/should be
>> the authoritative WRITER ..
>>
>> maybe there is a fetch hook that you can schedule to refresh ... following
>>
>>
>>
>> On Fri, Jun 4, 2021 at 8:50 AM Sim, Yoosuk  wrote:
>>
>> Hello everyone,
>>
>>
>>
>> I am currently setting up Nifi Registry. In our setup, we wanted
>> multiple Nifi Registries to talk to the same remote git repository (say, one
>> in DEV, another in QA, etc.).
>>
>> Over time, I found that not all Nifi Registries retained the same
>> information, even though the remote git repository had the latest
>> information.
>>
>>
>>
>> Does Nifi Registry ever pull from the remote git aside from when it
>> clones?
>>
>> What might be the best way to resolve this issue?
>>
>>
>>
>> Cheers,
>>
>>
>>
>>
>> Yoosuk (Tony) Sim
>>
>> Dev, Machine Learning Engineering
>>
>> --
>>
>>
>


Re: Broken pipe write failed errors

2021-05-29 Thread Juan Pablo Gardella
Not related to Nifi, but I faced the same type of issue with endpoints
behind a proxy that take more than 30 seconds to answer. Fixed by
replacing the Apache HTTP client with OkHttp. I did not investigate further;
I simply replaced one library with the other and the error was fixed.

Juan

On Sat, 29 May 2021 at 15:08, Robert R. Bruno  wrote:

> I wanted to see if anyone has any ideas on this one.  Since upgrading to
> 1.13.2 from 1.9.2 we are starting to see broken pipe (write failed) errors
> from a few invokeHttp processors.
>
> It is happening to processors talking to different endpoints, so I am
> suspecting it is on the nifi side.  We are now using load balanced queues
> throughout our flow.  Is it possible we are hitting a http connection
> resource issue or something like that? A total guess I'll admit.
>
> If this could be it, does anyone know which parameter(s) to play with in
> the properties file?  I know there is one setting for jetty threads and
> another for max concurrent requests, but it isn't quite clear to me if they
> are at all involved with invokeHttp calls.
>
> Thanks in advance!
>
> Robert
>


Nifi DatabaseRecord has an issue at 1.13.2

2021-04-20 Thread Juan Pablo Gardella
Hi all,

I just discovered a critical issue in PutDatabaseRecord. I confirmed
that if I roll back to Nifi 1.12.1 it works as expected. Filed the problem at
https://issues.apache.org/jira/browse/NIFI-8446.

Juan


Re: Any known issue on SplitRecord?

2021-03-19 Thread Juan Pablo Gardella
Hi Mark/Joe

Actually, if I wait long enough it is able to process the flowfile. When I start
nifi, it is able to process immediately, but the second time I have to wait
around half a minute. I see similar behaviour on Nifi 1.12.1; I have to wait
the second time. I don't see the log entry about *Unable to write to container*,
but I checked my disk and it is almost full 😁 at least in percentage terms:

$ docker exec  373913ea4d00 df -h
Filesystem  Size  Used Avail Use% Mounted on
overlay 461G  424G   14G  98% /
...

Nothing suspicious in the logs. So it is a false alarm; the only problem for me
is understanding why it takes so long to split an Avro record (the result from
ExecuteSQLRecord).

Juan


On Thu, 18 Mar 2021 at 11:43, Mark Payne  wrote:

> Juan,
>
> How full is the disk that holds your content repository?
> I don't think the issue that Joe mentioned will cause what you're
> seeing. But I am guessing that your content repository is exerting back
> pressure.
>
> Can you check your logs for the message "Unable to write to container"? I
> am guessing that you will see a message along the lines of:
>
> 2021-03-18 10:26:20,988 INFO [Timer-Driven Process Thread-10] 
> o.a.n.c.repository.FileSystemRepository Unable to write to container default 
> due to archive file size constraints; waiting for archive cleanup
>
> This would indicate that your content repository is filling up and is
> exerting backpressure on the Input Port, which prevents it from writing the
> data out. This can potentially result in the transfer timing out.
>
> By default, the content repository exerts backpressure if the disk is 50%
> full. This can be configured in nifi.properties:
>
> nifi.content.repository.archive.max.usage.percentage=50%
>
>
> Thanks
> -Mark
>
>
> On Mar 17, 2021, at 11:58 PM, Joe Witt  wrote:
>
> Thanks Juan - that would be very valuable actually.  I'll send you a link
> to a build here in an hour or so. If you can test that and let us know that
> will help us with the release candidate voting process quite a bit.
>
> Thanks
>
> On Wed, Mar 17, 2021 at 8:49 PM Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
>> Wow that is fast! You are awesome, thanks Joe. I will test it.
>>
>> Juan
>>
>> On Thu, 18 Mar 2021 at 00:47, Joe Witt  wrote:
>>
>>> Juan
>>>
>>> We found a bug in 1.13.1 today as reported here
>>> https://issues.apache.org/jira/browse/NIFI-8337 and
>>> https://issues.apache.org/jira/browse/NIFI-8334.
>>>
>>> We will have a 1.13.2 out asap to fix this and the regression now has
>>> tests to prevent it in the future.
>>>
>>> Thanks
>>> Joe
>>>
>>> On Wed, Mar 17, 2021 at 8:44 PM Juan Pablo Gardella <
>>> gardellajuanpa...@gmail.com> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I am using latest nifi version and SplitRecord works only once and then
>>>> hangs:
>>>>
>>>> 
>>>>
>>>> I cannot stop it also.
>>>>
>>>> Juan
>>>>
>>>
>




Re: [E] Re: NIFI show version changed *, but version control show no local changes

2021-01-27 Thread Juan Pablo Gardella
Is there any guide that explains how a process group can be graduated to upper
environments?

On Wed, 27 Jan 2021 at 12:33, Joe Witt  wrote:

> There is no requirement to use the registry.  It simply gives you a way to
> store versioned flows which you can reference/use from zero or more nifi
> clusters/flows to help keep things in line.  Many teams use this to ensure
> as flows are improved over time and worked through dev/test/stage/prod
> environments that they graduate properly.
>
> Thanks
>
> On Wed, Jan 27, 2021 at 8:31 AM Maksym Skrynnikov <
> skrynnikov.mak...@verizonmedia.com> wrote:
>
>> We use NiFi version 1.12.1 but we do not use NiFi Registry; I wonder
>> if using the registry is a requirement?
>>
>> On Wed, Jan 27, 2021 at 2:25 PM Bryan Bende  wrote:
>>
>>> Please specify the versions of NiFi and NiFi Registry. If it is not
>>> the latest (1.12.1 and 0.8.0), then it would be good to try with the
>>> latest since there have been significant improvements around this area
>>> in the last few releases.
>>>
>>> On Wed, Jan 27, 2021 at 5:45 AM Jens M. Kofoed 
>>> wrote:
>>> >
>>> > Hi
>>> >
>>> > We have a situation where process groups in NIFI show they are not up
>>> to date in version control. They show a *. But going to version control to
>>> see local changes, there are none.
>>> > NIFI reports back that there are no local changes. Submitting a new
>>> version makes no difference. A new version is created, but NIFI still shows
>>> the * and not the green check mark.
>>> >
>>> > I have tried to restart Registry which doesn't help.
>>> >
>>> > Restarting NIFI helps for a short while. After restarting NIFI the
>>> process group shows the green check mark and another group which is under
>>> the same version control now shows it needs an update. After updating the
>>> 2nd process group to the new version, this process group now shows the * and
>>> not the green check mark. Going to version control to see local changes,
>>> there are none.
>>> >
>>> > Anybody who have experience with this issue?
>>> >
>>> > bug report created:
>>> https://issues.apache.org/jira/browse/NIFIREG-437
>>> >
>>> > kind regards
>>> >
>>> > Jens M. Kofoed
>>>
>>


Re: Need help for user authentication

2020-11-26 Thread Juan Pablo Gardella
AFAIK it is not possible.

On Thu, 26 Nov 2020 at 07:51, 酷酷的诚  wrote:

> Hi team!
> As the docs describe: NiFi does not perform user authentication
> over HTTP. Using HTTP, all users will be granted all roles.
> But sometimes we want to use user authentication over HTTP. So if
> I want to do this, which code do I need to look at when developing?
> Thanks for any suggestions!
> Regards!
>
> ZhangCheng
>
>
>
>
>
>


Re: Unable to rollback changes with Nifi Registry

2020-10-09 Thread Juan Pablo Gardella
I finally discovered the issue and how to solve it.

My first version of the service *MyService* was packaged in myprocessors-nar.
I added it as a service and in Controller Services it appears as a Bundle
with com.foo - my-processors-nar. I developed some stuff and used the
service. After that I repackaged the service in another NAR file, although I
did not remove the controller service. It kept working without removing it,
and the Bundle value shows com.foo - my-processors-nar instead of com.foo -
my-service-nar.

If I add the service again, it shows the correct bundle. So the workaround
is to remove the old services (although they were working fine) and add them
again. That fixes the Bundle value and I am able to roll back changes.
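
To see which bundle coordinates a stored snapshot actually references, the
registry REST API can be queried directly; a minimal sketch, where host, IDs
and the jq filter are placeholders:

curl -s http://localhost:18080/nifi-registry-api/buckets/<bucketId>/flows/<flowId>/versions/<version> \
  | jq '.. | .bundle? // empty'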

Thanks,
Juan


On Mon, 5 Oct 2020 at 17:33, Juan Pablo Gardella <
gardellajuanpa...@gmail.com> wrote:

> These are my pom.xml files:
>
> *myprocessor nar:*
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
> http://maven.apache.org/xsd/maven-4.0.0.xsd">
>   <modelVersion>4.0.0</modelVersion>
>   <parent>
>     <groupId>xx</groupId>
>     <artifactId>xx</artifactId>
>     <version>xx</version>
>   </parent>
>   <artifactId>myprocessors-nar</artifactId>
>   <properties>
>     <maven.javadoc.skip>true</maven.javadoc.skip>
>     <source.skip>true</source.skip>
>   </properties>
>   <packaging>nar</packaging>
>   <dependencies>
>     <dependency>
>       <artifactId>my-processors</artifactId>
>       <groupId>${project.groupId}</groupId>
>       <version>${project.version}</version>
>     </dependency>
>     <dependency>
>       <groupId>${project.groupId}</groupId>
>       <version>${project.version}</version>
>       <artifactId>myservices-nar</artifactId>
>       <type>nar</type>
>     </dependency>
>   </dependencies>
>   <build>
>     <plugins>
>       <plugin>
>         <groupId>org.apache.nifi</groupId>
>         <artifactId>nifi-nar-maven-plugin</artifactId>
>       </plugin>
>     </plugins>
>   </build>
> </project>
>
> *myservice-nar:*
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
> http://maven.apache.org/xsd/maven-4.0.0.xsd">
>   <modelVersion>4.0.0</modelVersion>
>
>   <parent>
>     <groupId>xxx</groupId>
>     <artifactId>xxx</artifactId>
>     <version>xxx</version>
>   </parent>
>
>   <artifactId>myservices-nar</artifactId>
>   <packaging>nar</packaging>
>   <properties>
>     <maven.javadoc.skip>true</maven.javadoc.skip>
>     <source.skip>true</source.skip>
>   </properties>
>
>   <dependencies>
>     <dependency>
>       <groupId>org.apache.nifi</groupId>
>       <artifactId>nifi-standard-services-api-nar</artifactId>
>       <version>${nifi.version}</version>
>       <type>nar</type>
>     </dependency>
>     <dependency>
>       <groupId>${project.groupId}</groupId>
>       <artifactId>my-services</artifactId>
>       <version>${project.version}</version>
>     </dependency>
>   </dependencies>
>   <build>
>     <plugins>
>       <plugin>
>         <groupId>org.apache.nifi</groupId>
>         <artifactId>nifi-nar-maven-plugin</artifactId>
>       </plugin>
>     </plugins>
>   </build>
> </project>
>
> Also the other modules are jars:
>
> *my-services:* Here com.foo.MyService is implemented
>   <artifactId>my-services</artifactId>
>   <packaging>jar</packaging>
>   <dependencies>
>     <dependency>
>       <groupId>${project.groupId}</groupId>
>       <artifactId>my-services-api</artifactId>
>       <version>${project.version}</version>
>     </dependency>
>
> *my-processors:*
>   <artifactId>my-processors</artifactId>
>   <packaging>jar</packaging>
>   <dependencies>
>     <dependency>
>       <groupId>${project.groupId}</groupId>
>       <artifactId>my-services-api</artifactId>
>       <version>${project.version}</version>
>     </dependency>
>
> *my-service-api:* A jar with one interface: com.foo.MyService
>
> That set of libraries and NAR files don't fail in Nifi and also there is
> no issues during the package. Is that type of NAR dependencies an issue for
> Nifi Registry?
>
> Thanks,
> Juan
>
>
>
>
>
> On Mon, 5 Oct 2020 at 16:44, Bryan Bende  wrote:
>
>> I don't know exactly what is in each of your NARs, but it is saying
>> that in the version of the flow being rolled back to, there exists
>> this:
>>
>> "type": "com.foo.MyService",
>> "bundle": {
>>   "group": "com.foo",
>>   "artifact": "myprocessor-nar",
>>   "version": "0.1.0-SNAPSHOT"
>> }
>>
>> So then NiFi goes to the NAR with that bundle coordinate and checks if
>> it contains that type, and in this case it doesn't, so it doesn't know
>> what to do because the versioned flow said to get a type from a bundle
>> that doesn't contain the type.
>>
>> On Mon, Oct 5, 2020 at 3:15 PM Juan Pablo Gardella
>>  wrote:
>> >
>> > Hi Bryan,
>> >
>> > Thanks, I will try to isolate the issue, although myprocessor-nar does
>> not have a class implementing com.foo.MyService; it contains only the
>> interface, because the implementation is part of myservices-nar. For Nifi
>> there are no issues, it is working as expected; maybe some checks in Nifi
>> registry are causing problems.
>> >
>> > I will try to investigate and debug a little bit on it and file an
>> issue if I find it and, I hope, a PR.
>> >
>> > Thanks
>> >
>> > On Mon, 5 Oct 2020 at 16:06, Bryan Bende  wrote:
>> >>
>> >> Seems the version of the flow you are rolling back to has a component
>> >> in it with class name "com.foo.MyService", but the current
>> >> myprocessor-nar that you are running does not have that service in it.

Re: Unable to rollback changes with Nifi Registry

2020-10-05 Thread Juan Pablo Gardella
These are my pom.xml files:

*myprocessor nar:*
<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>xx</groupId>
    <artifactId>xx</artifactId>
    <version>xx</version>
  </parent>
  <artifactId>myprocessors-nar</artifactId>
  <properties>
    <maven.javadoc.skip>true</maven.javadoc.skip>
    <source.skip>true</source.skip>
  </properties>
  <packaging>nar</packaging>
  <dependencies>
    <dependency>
      <artifactId>my-processors</artifactId>
      <groupId>${project.groupId}</groupId>
      <version>${project.version}</version>
    </dependency>
    <dependency>
      <groupId>${project.groupId}</groupId>
      <version>${project.version}</version>
      <artifactId>myservices-nar</artifactId>
      <type>nar</type>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.nifi</groupId>
        <artifactId>nifi-nar-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>

*myservice-nar:*
<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <parent>
    <groupId>xxx</groupId>
    <artifactId>xxx</artifactId>
    <version>xxx</version>
  </parent>

  <artifactId>myservices-nar</artifactId>
  <packaging>nar</packaging>
  <properties>
    <maven.javadoc.skip>true</maven.javadoc.skip>
    <source.skip>true</source.skip>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.apache.nifi</groupId>
      <artifactId>nifi-standard-services-api-nar</artifactId>
      <version>${nifi.version}</version>
      <type>nar</type>
    </dependency>
    <dependency>
      <groupId>${project.groupId}</groupId>
      <artifactId>my-services</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.nifi</groupId>
        <artifactId>nifi-nar-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>

Also the other modules are jars:

*my-services:* Here com.foo.MyService is implemented
  <artifactId>my-services</artifactId>
  <packaging>jar</packaging>
  <dependencies>
    <dependency>
      <groupId>${project.groupId}</groupId>
      <artifactId>my-services-api</artifactId>
      <version>${project.version}</version>
    </dependency>

*my-processors:*
  <artifactId>my-processors</artifactId>
  <packaging>jar</packaging>
  <dependencies>
    <dependency>
      <groupId>${project.groupId}</groupId>
      <artifactId>my-services-api</artifactId>
      <version>${project.version}</version>
    </dependency>

*my-service-api:* A jar with one interface: com.foo.MyService

That set of libraries and NAR files don't fail in Nifi and also there is no
issues during the package. Is that type of NAR dependencies an issue for
Nifi Registry?

Thanks,
Juan





On Mon, 5 Oct 2020 at 16:44, Bryan Bende  wrote:

> I don't know exactly what is in each of your NARs, but it is saying
> that in the version of the flow being rolled back to, there exists
> this:
>
> "type": "com.foo.MyService",
> "bundle": {
>   "group": "com.foo",
>   "artifact": "myprocessor-nar",
>   "version": "0.1.0-SNAPSHOT"
> }
>
> So then NiFi goes to the NAR with that bundle coordinate and checks if
> it contains that type, and in this case it doesn't, so it doesn't know
> what to do because the versioned flow said to get a type from a bundle
> that doesn't contain the type.
>
> On Mon, Oct 5, 2020 at 3:15 PM Juan Pablo Gardella
>  wrote:
> >
> > Hi Bryan,
> >
> > Thanks, I will try to isolate the issue, although myprocessor-nar does
> not have a class implementing com.foo.MyService; it contains only the
> interface, because the implementation is part of myservices-nar. For Nifi
> there are no issues, it is working as expected; maybe some checks in Nifi
> registry are causing problems.
> >
> > I will try to investigate and debug a little bit on it and file an issue
> if I find it and, I hope, a PR.
> >
> > Thanks
> >
> > On Mon, 5 Oct 2020 at 16:06, Bryan Bende  wrote:
> >>
> >> Seems the version of the flow you are rolling back to has a component
> >> in it with class name "com.foo.MyService", but the current
> >> myprocessor-nar that you are running does not have that service in it.
> >>
> >> On Mon, Oct 5, 2020 at 3:00 PM Juan Pablo Gardella
> >>  wrote:
> >> >
> >> > Hi all,
> >> >
> >> > Does anyone know about this issue? What does it mean?
> >> >
> >> > Thanks,
> >> > Juan
> >> >
> >> >
> >> > On Sun, 4 Oct 2020 at 11:25, Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
> >> >>
> >> >> Hi all,
> >> >>
> >> >> I am starting to play with Nifi Registry. I am using Nifi 1.12.1 and
> Nifi Registry 0.7.0 almost with all default configurations. No security on
> both. My flow has some custom processors and services which were generated
> by nifi-nar-maven-plugin/1.3.2.
> >> >>
> >> I was able to create a bucket in Nifi Registry and started
> version control and committed some changes. The problem appeared when I
> tested rolling back local changes and the following error appeared in the UI
> >> >>
> >> >> Found bundle com.foo:myprocessr-nar:0.1.0-SNAPSHOT but does not
> support com.foo.MyService
> >> >>
> >> >> There are no errors in Nifi Registry and Nifi logs. A 409 conflict
> issue appears in the browser developer console: POST
> http://localhost:8000/nifi-api/versions/revert-requests/process-groups/e8ff381c-0174-1000-b8ce-873a9ad9bd47
> operation.
> >> >>
> >> I uploaded two NAR files to Nifi Registry using upload bundle to
> test if that was the problem, but it didn't work either.
> >> >>
> >> I cannot find much on Google; does anyone have an idea what this
> error means? And also, how can I fix it? Notice the processors and services
> are working without issues on Nifi. Both NAR files were packed using
> nifi-nar-maven-plugin/1.3.2.
> >>
> >> I tried changing "allowBundleRedeploy" and "allowPublicRead" to true,
> but it still failed (they are created as false).
> >> >>
> >> >> Thanks
>


Re: Unable to rollback changes with Nifi Registry

2020-10-05 Thread Juan Pablo Gardella
Hi Bryan,

Thanks, I will try to isolate the issue although myprocessor-nar does not
have a class implementing com.foo.MyService, it contains only the
interface, because the implementation is part of myservices-nar. IFor Nifi
there is no issues, it is working as expected, maybe some checks in Nifi
registry are causing problems.

I will try to investigate and debug a little bit on it and file an issue if
I find it and I hope a PR.

Thanks

On Mon, 5 Oct 2020 at 16:06, Bryan Bende  wrote:

> Seems the version of the flow you are rolling back to has a component
> in it with class name "com.foo.MyService", but the current
> myprocessor-nar that you are running does not have that service in it.
>
> On Mon, Oct 5, 2020 at 3:00 PM Juan Pablo Gardella
>  wrote:
> >
> > Hi all,
> >
> > Does anyone know about this issue? What does it mean?
> >
> > Thanks,
> > Juan
> >
> >
> > On Sun, 4 Oct 2020 at 11:25, Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
> >>
> >> Hi all,
> >>
> >> I am starting to play with Nifi Registry. I am using Nifi 1.12.1 and
> Nifi Registry 0.7.0 almost with all default configurations. No security on
> both. My flow has some custom processors and services which were generated
> by nifi-nar-maven-plugin/1.3.2.
> >>
> >> I was able to create a bucket in Nifi Registry and started version
> control and committed some changes. The problem appeared when I tested
> rolling back local changes and the following error appeared in the UI
> >>
> >> Found bundle com.foo:myprocessr-nar:0.1.0-SNAPSHOT but does not support
> com.foo.MyService
> >>
> >> There are no errors in Nifi Registry and Nifi logs. A 409 conflict
> issue appears in the browser developer console: POST
> http://localhost:8000/nifi-api/versions/revert-requests/process-groups/e8ff381c-0174-1000-b8ce-873a9ad9bd47
> operation.
> >>
> >> I uploaded two NAR files to Nifi Registry using upload bundle to test
> if that was the problem, but it didn't work either.
> >>
> >> I cannot find much on Google; does anyone have an idea what this
> error means? And also, how can I fix it? Notice the processors and services
> are working without issues on Nifi. Both NAR files were packed using
> nifi-nar-maven-plugin/1.3.2.
> >>
> >> I tried changing "allowBundleRedeploy" and "allowPublicRead" to true,
> but it still failed (they are created as false).
> >>
> >> Thanks
>


Re: Unable to rollback changes with Nifi Registry

2020-10-05 Thread Juan Pablo Gardella
Hi all,

Does anyone know about this issue? What does it mean?

Thanks,
Juan


On Sun, 4 Oct 2020 at 11:25, Juan Pablo Gardella <
gardellajuanpa...@gmail.com> wrote:

> Hi all,
>
> I am starting to play with Nifi Registry. I am using Nifi 1.12.1 and Nifi
> Registry 0.7.0 with almost all default configurations. No security on both.
> My flow has some custom processors and services which were generated by
> nifi-nar-maven-plugin/1.3.2.
>
> I was able to create a bucket in Nifi Registry and started version
> control and committed some changes. The problem appeared when I tested
> rolling back local changes and the following error appeared in the UI
>
> Found bundle com.foo:myprocessr-nar:0.1.0-SNAPSHOT but does not support
> com.foo.MyService
>
> There are no errors in Nifi Registry and Nifi logs. A 409 conflict issue
> appears in the browser developer console: POST
> http://localhost:8000/nifi-api/versions/revert-requests/process-groups/e8ff381c-0174-1000-b8ce-873a9ad9bd47
> operation.
>
> I uploaded two NAR files to Nifi Registry using upload bundle
> <https://nifi.apache.org/docs/nifi-registry-docs/html/user-guide.html#upload-bundle>
> to test if that was the problem, but it didn't work either.
>
> I cannot find much on Google; does anyone have an idea what this error
> means? And also, how can I fix it? Notice the processors and services are
> working without issues on Nifi. Both NAR files were packed using
> nifi-nar-maven-plugin/1.3.2.
>
> I tried changing "allowBundleRedeploy" and "allowPublicRead" to true, but
> it still failed (they are created as false).
>
> Thanks
>
>


Unable to rollback changes with Nifi Registry

2020-10-04 Thread Juan Pablo Gardella
Hi all,

I am starting to play with Nifi Registry. I am using Nifi 1.12.1 and Nifi
Registry 0.7.0 with almost all default configurations. No security on both.
My flow has some custom processors and services which were generated by
nifi-nar-maven-plugin/1.3.2.

I was able to create a bucket in Nifi Registry and started version
control and committed some changes. The problem appeared when I tested
rolling back local changes and the following error appeared in the UI

Found bundle com.foo:myprocessr-nar:0.1.0-SNAPSHOT but does not support
com.foo.MyService

There are no errors in Nifi Registry and Nifi logs. A 409 conflict issue
appears in the browser developer console: POST
http://localhost:8000/nifi-api/versions/revert-requests/process-groups/e8ff381c-0174-1000-b8ce-873a9ad9bd47
operation.

I uploaded two NAR files to Nifi Registry using upload bundle
<https://nifi.apache.org/docs/nifi-registry-docs/html/user-guide.html#upload-bundle>
to test if that was the problem, but it didn't work either.

I cannot find much on Google; does anyone have an idea what this error
means? And also, how can I fix it? Notice the processors and services are
working without issues on Nifi. Both NAR files were packed using
nifi-nar-maven-plugin/1.3.2.

I tried changing "allowBundleRedeploy" and "allowPublicRead" to true, but
it still failed (they are created as false).

Thanks


Re: Nested groups for LdapUserGroupProvider

2020-07-24 Thread Juan Pablo Gardella
Maybe that scenario is not supported, but you can start playing with a
custom provider. The LDAP provider is configurable by XML:

<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    ...
</provider>
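
For the group side specifically, the relevant block is the
LdapUserGroupProvider in authorizers.xml; a sketch with placeholder values
(the property names are the standard ones, and note the provider only
resolves direct memberships, not nested ones):

<userGroupProvider>
    <identifier>ldap-user-group-provider</identifier>
    <class>org.apache.nifi.ldap.tenants.LdapUserGroupProvider</class>
    <property name="Url">ldap://ldap.example.com:389</property>
    <property name="User Search Base">ou=users,dc=example,dc=com</property>
    <property name="Group Search Base">ou=groups,dc=example,dc=com</property>
    <!-- membership read from the group entry... -->
    <property name="Group Member Attribute">member</property>
    <!-- ...or from the user entry -->
    <property name="User Group Name Attribute">memberOf</property>
</userGroupProvider>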

Juan

On Fri, 24 Jul 2020 at 08:20, Moncef Abboud 
wrote:

> Hello fellow NiFi Users,
>
> I am trying to configure authorization using the LdapUserGroupProvider.
> The documentation is clear : specify your "User Search Base" and "Group
> Search Base"  and define membership either using  "User Group Name
> Attribute" such as "memberOf" or the other way around using "Group Member
> Attribute" such as "member". All that is clear and works perfectly but my
> problems is as follows:
>
> I have two levels of groups in my directory e.g.
>
> GroupA contains Group1 and Group2
> GroupB contains Group2 and Group3
> GroupC contains Group1 and Group3
>
> Group1 contains User1 and User2
> Group2 contains User1 and User3
>
>  LDIF looks something like this:
>
> dn: CN=GroupA 
> member: CN= Group1 ..
> member: CN= Group2 ..
>
> -
> dn: CN=Group1 
> member: CN=User1 ..
> member: CN=User2..
> .
> memberOf: CN=GroupA ...
> memberOf: CN=GroupC ...
>
> 
>
> dn: CN=User1
> memberOf: CN=Group1 ...
> memberOf: CN=Group2 ...
> --
>
> No direct link between a user and a level 1 group (GroupA, GroupB..)
>
> I would like to note that groups of level 1 (GroupA, GroupB ..) are not in
> the same branch in the DIT as those of level 2 (Group1, Group2 ..).
>
> The requirement is that the groups used to manage authorization and that
> should show in the NIFI UI are those of level 1 (GroupA, GroupB..) and that
> users should be assigned to the groups containing their direct groups for
> instance User1 (who is a direct member of Group1 and Group2) should be
> displayed as a member of groups (GroupA, GroupB and GroupC). And level 2
> groups (Group1, Group2..) must not show and must not be used directly in
> the UI but only as link between users and level 1 groups.
>
> So to sum up, NIFI should take into account only level1 groups and handle
> transitive memberships through level2 groups.
>
> Thank you in advance for your answers.
>
> Best Regards,
> Moncef
>


Re: NiFi Registry Bundles - Integration into NiFi

2020-07-20 Thread Juan Pablo Gardella
Be aware that auto-load works for adding only. If you remove a NAR from
that directory, it is not unloaded or updated, AFAIK; at least on nifi 1.5.0.
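
For reference, the fetch-then-autoload workflow Bryan describes below can be
scripted; a sketch with placeholder IDs (the download-bundle options vary by
version, so check ./bin/cli.sh registry download-bundle help):

# fetch the NAR from the registry (exact flags depend on your toolkit version)
./bin/cli.sh registry download-bundle -u http://registry:18080 ... -o my-bundle.nar
# drop it into nifi.nar.library.autoload.directory (default ./extensions);
# NiFi loads it without a restart
cp my-bundle.nar /opt/nifi/nifi-current/extensions/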

On Mon, 20 Jul 2020 at 16:10, Bryan Bende  wrote:

> Hello,
>
> The plan is to eventually build a seamless experience on the NiFi side
> where it would obtain the required NARs for a versioned flow, but that work
> hasn't taken place yet.
>
> The ability to store the bundles in registry with an API for retrieving
> them is the starting point for that.
>
> Currently there are some NiFi CLI commands which may be helpful for you,
> there is one for "download-bundle" [1].
>
> Also, you don't have to restart nifi, you can put the NAR into the
> directory specified by nifi.nar.library.autoload.directory and it will
> auto-load, the default directory is ./extensions.
>
> You need to do a hard refresh of the UI before you'll see the components
> since the UI currently caches the list on first load.
>
> Thanks,
>
> Bryan
>
> [1]
> https://github.com/apache/nifi/blob/main/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/registry/extension/DownloadBundle.java
>
>
>
> On Mon, Jul 20, 2020 at 2:24 PM Firlefanz Development <
> firlefanz.developm...@gmail.com> wrote:
>
>> Hello,
>>
>> I recently read about the possibility to upload NAR-files directly into
>> the NiFi Registry using its REST api. [1]
>> Until now I provide my custom processors by placing the corresponding
>> NAR-file onto every node of my NiFi cluster and restart the whole cluster
>> as described in the NiFi guides.
>>
>> I was hoping that by providing the NAR-file to the registry those
>> processors would be available to the Flow, but I couldn't make it work.
>> Is this possible as of now or planned for the future? When there is no
>> direct integration possible with the NiFi cluster, what is the benefit of
>> uploading the NAR-file to the registry?
>>
>> I would love to simplify my build setup, so I'm grateful for any guidance
>> on how to achieve that. If possible I'd like to avoid the semi-manual task
>> of providing the NAR-file to every cluster node as well as the need to
>> restart NiFi.
>>
>> Thank you in advance!
>>
>> [1]
>> https://nifi.apache.org/docs/nifi-registry-docs/html/user-guide.html#manage-bundles
>>
>>
>>


Re: NiFi-light for analysts

2020-06-29 Thread Juan Pablo Gardella
I actually do it manually in a Dockerfile:

RUN mv /opt/nifi/nifi-current/lib/*.nar /opt/nifi/nifi-current/lib.original/
RUN cp /opt/nifi/nifi-current/lib.original/nifi-avro-nar-$NIFI_VER.nar /opt/nifi/nifi-current/lib
RUN cp /opt/nifi/nifi-current/lib.original/nifi-update-attribute-nar-$NIFI_VER.nar /opt/nifi/nifi-current/lib
RUN cp /opt/nifi/nifi-current/lib.original/nifi-kafka-2-0-nar-$NIFI_VER.nar /opt/nifi/nifi-current/lib
RUN cp /opt/nifi/nifi-current/lib.original/nifi-standard-services-api-nar-$NIFI_VER.nar /opt/nifi/nifi-current/lib
RUN cp /opt/nifi/nifi-current/lib.original/nifi-dbcp-service-nar-$NIFI_VER.nar /opt/nifi/nifi-current/lib
RUN cp /opt/nifi/nifi-current/lib.original/nifi-ldap-iaa-providers-nar-$NIFI_VER.nar /opt/nifi/nifi-current/lib
RUN cp /opt/nifi/nifi-current/lib.original/nifi-framework-nar-$NIFI_VER.nar /opt/nifi/nifi-current/lib
RUN cp /opt/nifi/nifi-current/lib.original/nifi-provenance-repository-nar-$NIFI_VER.nar /opt/nifi/nifi-current/lib
RUN cp /opt/nifi/nifi-current/lib.original/nifi-standard-nar-$NIFI_VER.nar /opt/nifi/nifi-current/lib
RUN cp /opt/nifi/nifi-current/lib.original/nifi-jetty-bundle-$NIFI_VER.nar /opt/nifi/nifi-current/lib
RUN cp /opt/nifi/nifi-current/lib.original/nifi-record-serialization-services-nar-$NIFI_VER.nar /opt/nifi/nifi-current/lib
RUN cp /opt/nifi/nifi-current/lib.original/nifi-registry-nar-$NIFI_VER.nar /opt/nifi/nifi-current/lib
# Custom one
COPY --chown=nifi:nifi processors/*.nar /opt/nifi/nifi-current/lib/

That allows faster starts.
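
A more compact variant of the same idea, as a sketch (single layer; the NAR
whitelist is an example):

RUN mkdir -p /opt/nifi/nifi-current/lib.original \
 && mv /opt/nifi/nifi-current/lib/*.nar /opt/nifi/nifi-current/lib.original/ \
 && for n in nifi-framework-nar nifi-jetty-bundle nifi-standard-nar \
             nifi-standard-services-api-nar; do \
      cp /opt/nifi/nifi-current/lib.original/$n-$NIFI_VER.nar /opt/nifi/nifi-current/lib/; \
    done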

Juan

On Mon, 29 Jun 2020 at 14:16, Joe Witt  wrote:

> That would be a fine option for those users who are capable of running maven
> builds. I think evolving the nifi registry and nifi integration to source
> all nars as needed at runtime from the registry would be the best user
> experience and deployment answer over time.
>
> Thanks
>
> On Mon, Jun 29, 2020 at 9:57 AM Mike Thomsen 
> wrote:
>
>> As far as I can tell, Kylo is dead based on their public github activity.
>>
>> Mark,
>>
>> Would it make sense for us to start modularizing nifi-assembly with more
>> profiles? That way people like Boris could run something like this:
>>
>> mvn install -Pinclude-grpc,include-graph,!include-kafka,!include-mongodb
>>
>> On Mon, Jun 29, 2020 at 11:20 AM Boris Tyukin 
>> wrote:
>>
>>> Hi Mark, thanks for the great comments and for working on these
>>> improvements. these are great enhancements that we
>>> can certainly benefit from - I am thinking of two projects at least we
>>> support today.
>>>
>>> As far as making it more user-friendly, at some point I looked at
>>> Kylo.io and it was quite an interesting project - not sure if it is alive
>>> still - but I liked how they created their own UI/tooling around NiFi.
>>>
>>> I am going to toy with this idea to have a "dumb down" version of NiFi.
>>>
>>> On Sun, Jun 28, 2020 at 3:36 PM Mark Payne  wrote:
>>>
 Hey Boris,

 There’s a good bit to unpack here but I’ll try to answer each question.

 1) I would say that the target audience for NiFi really is a person
 with a pretty technical role. Not developers, necessarily, though. We do
 see a lot of developers using it, as well as data scientists, data
 engineers, sys admins, etc. So while there may be quite a few tasks that a
 non-technical person can achieve, it may be hard to expose the platform to
 someone without a technical background.

 That said, I do believe that you’re right about the notion of flow
 dependencies. I’ve done some work recently to help improve this. For
 example, NIFI-7476 [1] makes it possible to configure a Process Group in
 such a way that only a single FlowFile at a time is allowed into the group.
 And the data is optionally held within the group until that FlowFile has
 completed processing, even if it’s split up into many parts. Additionally,
 NIFI-7509 [2] updates the List* processors so that they can use an optional
 Record Writer. This makes it possible to get a full listing of a directory
 from ListFile as a single FlowFile. Or a listing of all items in an S3
 bucket or an Azure Blob Store, etc. So when that is combined with
 NIFI-7476, it makes it very easy to process an entire directory of files or
 an entire bucket, etc. and wait until all processing is complete before
 data is transferred on to the next task. (Additionally, NIFI-7552 updates
 this to add attributes indicating FlowFile counts for each Output Port so
 it’s easy to determine if there were any “processing failures” etc.).

 So with all of the above said, I don’t think that it necessarily solves
 in a simple and generic sense the requirement to complete Task A, then Task
 B, and then Task C. But it does put us far closer. This may be achievable
 still with some nesting of Process Groups, etc. but it won’t be completely
 as straight-forward as I'd like and would perhaps add significant latency
 if i

Re: Custom service in NAR generation failure

2020-06-26 Thread Juan Pablo Gardella
I will try to reproduce it (I have a project where I can reproduce the
problem) and then file a JIRA. I hope I can reproduce it again :)

Juan

On Fri, 26 Jun 2020 at 09:19, Etienne Jouvin 
wrote:

> Some update.
>
> I wanted to reproduce the error in a fresh new project, but there was no
> way to trigger it again. So for the moment, I am not able to show an
> example.
>
> I will give it a try later.
>
> Sorry about that
>
>
> Le ven. 19 juin 2020 à 15:48, Bryan Bende  a écrit :
>
>> I haven't fully evaluated the fix, at a quick glance it seems correct,
>> but I'm trying to figure out if something else is not totally correct in
>> your poms because many other projects are using the latest NAR plugin and
>> not having this issue, so there must be some difference that makes it work
>> in some cases.
>>
>> We have Maven archetypes for the processor and service bundles. I wonder
>> if you could compare the resulting projects/poms with yours to see what
>> seems different?
>>
>>
>> https://cwiki.apache.org/confluence/display/NIFI/Maven+Projects+for+Extensions
>>
>>
>> On Fri, Jun 19, 2020 at 9:30 AM Etienne Jouvin 
>> wrote:
>>
>>> My parent pom has this as declaration :
>>>
>>> <parent>
>>>     <groupId>org.apache.nifi</groupId>
>>>     <artifactId>nifi-nar-bundles</artifactId>
>>>     <version>1.11.4</version>
>>> </parent>
>>>
>>> When I studied the maven plugin, I found the following in class
>>> org.apache.nifi.extension.definition.extraction.ExtensionClassLoaderFactory.java
>>> private String determineProvidedEntityVersion(final Set<Artifact>
>>> artifacts, final String groupId, final String artifactId) throws
>>> ProjectBuildingException, MojoExecutionException {
>>> getLog().debug("Determining provided entities for " + groupId +
>>> ":" + artifactId);
>>> for (final Artifact artifact : artifacts) {
>>> if (artifact.getGroupId().equals(groupId) &&
>>> artifact.getArtifactId().equals(artifactId)) {
>>> return artifact.getVersion();
>>> }
>>> }
>>> return findProvidedDependencyVersion(artifacts, groupId,
>>> artifactId);
>>> }
>>> In this case, it searches for the artifact in the dependencies.
>>>
>>> If not found, it checks the provided dependencies (in fact, artifacts
>>> that the current artifact depends on, if I understood correctly).
>>> And the function is:
>>> private String findProvidedDependencyVersion(final Set<Artifact>
>>> artifacts, final String groupId, final String artifactId) {
>>> final ProjectBuildingRequest projectRequest = new
>>> DefaultProjectBuildingRequest();
>>> projectRequest.setRepositorySession(repoSession);
>>> projectRequest.setSystemProperties(System.getProperties());
>>> projectRequest.setLocalRepository(localRepo);
>>> for (final Artifact artifact : artifacts) {
>>> final Set<Artifact> artifactDependencies = new HashSet<>();
>>> try {
>>> final ProjectBuildingResult projectResult =
>>> projectBuilder.build(artifact, projectRequest);
>>> gatherArtifacts(projectResult.getProject(),
>>> artifactDependencies);
>>> getLog().debug("For Artifact " + artifact + ", found the
>>> following dependencies:");
>>> artifactDependencies.forEach(dep ->
>>> getLog().debug(dep.toString()));
>>>
>>> for (final Artifact dependency : artifactDependencies) {
>>> if (dependency.getGroupId().equals(groupId) &&
>>> dependency.getArtifactId().equals(artifactId)) {
>>> getLog().debug("Found version of " + groupId +
>>> ":" + artifactId + " to be " + artifact.getVersion());
>>> return artifact.getVersion();
>>> }
>>> }
>>> } catch (final Exception e) {
>>> getLog().warn("Unable to construct Maven Project for " +
>>> artifact + " when attempting to determine the expected version of NiFi
>>> API");
>>> getLog().debug("Unable to construct Maven Project for "
>>> + artifact + " when attempting to determine the expected version of NiFi
>>> API", e);
>>> }
>>> }
>>> return null;
>>> }
>>>
>>> And again, if I understood the code correctly, it searches the artifacts
>>> to match the one for the specific group and artifact ids, for example
>>> nifi-api. But the version returned is not the one from the found
>>> artifact, but from the source artifact.
>>>
>>> So that's why I explicitly set dependencies in the artifact pom to work
>>> around the difficulty temporarily.
>>>
>>> In the PR, I made the following change :
>>> private String findProvidedDependencyVersion(final Set<Artifact>
>>> artifacts, final String groupId, final String artifactId) {
>>> final ProjectBuildingRequest projectRequest = new
>>> DefaultProjectBuildingRequest();
>>> projectRequest.setRepositorySession(repoSession);
>>> projectRequest.setSystemProperties(System.getProperties());
>>> projectRequest.setLocalRepository(localRepo);
>>> for (final Artifact artifact : artifacts) {

Re: Custom service in NAR generation failure

2020-06-19 Thread Juan Pablo Gardella
I am facing a similar problem, and the workaround I used was installing the
dependency in the local repository.

On Fri, 19 Jun 2020 at 10:30, Etienne Jouvin 
wrote:

> My parent pom has this as declaration :
>
> <parent>
>     <groupId>org.apache.nifi</groupId>
>     <artifactId>nifi-nar-bundles</artifactId>
>     <version>1.11.4</version>
> </parent>
>
> When I studied the maven plugin, I found the following in class
> org.apache.nifi.extension.definition.extraction.ExtensionClassLoaderFactory.java
> private String determineProvidedEntityVersion(final Set<Artifact>
> artifacts, final String groupId, final String artifactId) throws
> ProjectBuildingException, MojoExecutionException {
> getLog().debug("Determining provided entities for " + groupId +
> ":" + artifactId);
> for (final Artifact artifact : artifacts) {
> if (artifact.getGroupId().equals(groupId) &&
> artifact.getArtifactId().equals(artifactId)) {
> return artifact.getVersion();
> }
> }
> return findProvidedDependencyVersion(artifacts, groupId,
> artifactId);
> }
> In this case, it searches for the artifact in the dependencies.
>
> If not found, it checks the provided dependencies (in fact, artifacts that
> the current artifact depends on, if I understood correctly).
> And the function is:
> private String findProvidedDependencyVersion(final Set<Artifact>
> artifacts, final String groupId, final String artifactId) {
> final ProjectBuildingRequest projectRequest = new
> DefaultProjectBuildingRequest();
> projectRequest.setRepositorySession(repoSession);
> projectRequest.setSystemProperties(System.getProperties());
> projectRequest.setLocalRepository(localRepo);
> for (final Artifact artifact : artifacts) {
> final Set<Artifact> artifactDependencies = new HashSet<>();
> try {
> final ProjectBuildingResult projectResult =
> projectBuilder.build(artifact, projectRequest);
> gatherArtifacts(projectResult.getProject(),
> artifactDependencies);
> getLog().debug("For Artifact " + artifact + ", found the
> following dependencies:");
> artifactDependencies.forEach(dep ->
> getLog().debug(dep.toString()));
>
> for (final Artifact dependency : artifactDependencies) {
> if (dependency.getGroupId().equals(groupId) &&
> dependency.getArtifactId().equals(artifactId)) {
> getLog().debug("Found version of " + groupId + ":"
> + artifactId + " to be " + artifact.getVersion());
> return artifact.getVersion();
> }
> }
> } catch (final Exception e) {
> getLog().warn("Unable to construct Maven Project for " +
> artifact + " when attempting to determine the expected version of NiFi
> API");
> getLog().debug("Unable to construct Maven Project for " +
> artifact + " when attempting to determine the expected version of NiFi
> API", e);
> }
> }
> return null;
> }
>
> And again, if I understood the code correctly, it searches the artifacts
> to match the one for the specific group and artifact ids, for example
> nifi-api. But the version returned is not the one from the found artifact,
> but from the source artifact.
>
> So that's why I explicitly set dependencies in the artifact pom to work
> around the difficulty temporarily.
>
> In the PR, I made the following change :
> private String findProvidedDependencyVersion(final Set<Artifact>
> artifacts, final String groupId, final String artifactId) {
> final ProjectBuildingRequest projectRequest = new
> DefaultProjectBuildingRequest();
> projectRequest.setRepositorySession(repoSession);
> projectRequest.setSystemProperties(System.getProperties());
> projectRequest.setLocalRepository(localRepo);
> for (final Artifact artifact : artifacts) {
> final Set<Artifact> artifactDependencies = new HashSet<>();
> try {
> final ProjectBuildingResult projectResult =
> projectBuilder.build(artifact, projectRequest);
> gatherArtifacts(projectResult.getProject(),
> artifactDependencies);
> getLog().debug("For Artifact " + artifact + ", found the
> following dependencies:");
> artifactDependencies.forEach(dep ->
> getLog().debug(dep.toString()));
>
> for (final Artifact dependency : artifactDependencies) {
> if (dependency.getGroupId().equals(groupId) &&
> dependency.getArtifactId().equals(artifactId)) {
> getLog().debug("Found version of " + groupId + ":"
> + artifactId + " to be " + dependency.getVersion());
> return dependency.getVersion();
> }
> }
> } catch (final Exception e) {
> getLog().warn("Unable to construct Maven Project for " +
> artifact + " when attempting to determine the expected version of NiFi
> API");

Re: PutSQL and AutoCommit Mode Still Commits

2020-06-03 Thread Juan Pablo Gardella
Which driver?

On Wed, 3 Jun 2020 at 17:20, Shawn Weeks  wrote:

> Per the JDBC Spec calling commit() on a connection in auto-commit should
> raise a SQLException so what we’re doing is out of spec.
>
>
>
> https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#commit()
>
>
>
> We should probably check c.getProperty(AUTO_COMMIT).asBoolean() and only
> call commit if we weren’t in auto commit mode. I’ll file a jira
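
In processor terms, the guard Shawn describes would look roughly like this
(a sketch; AUTO_COMMIT stands for PutSQL's auto-commit property descriptor
and conn for the pooled JDBC connection, both assumptions about the actual
field names):

    final boolean autoCommit = context.getProperty(AUTO_COMMIT).asBoolean();
    if (!autoCommit) {
        // per the JDBC spec, commit() must throw while auto-commit is enabled
        conn.commit();
    }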
>
>
>
> Thanks
>
> Shawn
>
>
>
> *From: *Shawn Weeks 
> *Reply-To: *"users@nifi.apache.org" 
> *Date: *Wednesday, June 3, 2020 at 2:27 PM
> *To: *"users@nifi.apache.org" 
> *Subject: *PutSQL and AutoCommit Mode Still Commits
>
>
>
> It appears that PutSQL calls a commit even when set to autoCommit true
> mode. This breaks things if a driver raises an error on commit in
> autoCommit mode. For example, Redshift does this.
>
>
>
> Thanks
>
> Shawn
>
>
>
>
>


Re: ExecuteSQL not working

2020-05-08 Thread Juan Pablo Gardella
Try again after adding a column alias name to the results.
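
For example, with an arbitrary alias name:

SELECT MAX(ORDEN) AS max_orden
FROM demo_planta2.dbo.ORDEN_VENTA_CAB
WHERE CODEMPRESA = 2
  AND CODTEMPO = 1;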

On Fri, May 8, 2020, 12:21 PM Luis Carmona  wrote:

> Hi juan Pablo,
>
> I did, but jTDS was the only way to achieve the connection. The
> official JDBC driver always raised errors about TLS protocol problems.
>
> After some reading, it seems to be because the SQL server is too old.
>
> And with jTDS I got the connection and was able to execute "Database
> List Tables". But the ExecuteSQL processor is not working.
>
> Regards,
>
> LC
>
>
>
>
> On Fri, 2020-05-08 at 02:27 -0300, Juan Pablo Gardella wrote:
> > Did you try using the official mssql jdbc driver?
> >
> > On Fri, 8 May 2020 at 01:34, Luis Carmona 
> > wrote:
> > > Hi everyone,
> > >
> > > I am trying to execute a query against an MS SQL Server through the
> > > jTDS driver, but I can't figure out why it is giving me errors all the
> > > time.
> > >
> > > If I leave the processor as it is, setting the controller service
> > > obviously, it throws the error in the image saying "empty name".
> > >
> > > If I set the processor with Normalize Table/Column Names and Use Avro
> > > Types set to TRUE, then it throws the error in the image saying "Index
> > > out of range".
> > >
> > > The query is as simple as this:
> > >
> > > SELECT MAX(ORDEN)
> > > FROM demo_planta2.dbo.ORDEN_VENTA_CAB
> > >   WHERE
> > >   CODEMPRESA=2
> > >   AND
> > >   CODTEMPO=1;
> > >
> > > Please some tip about what could be wrong in my settings.
> > >
> > > Regards,
> > >
> > > LC
> > >
> > >
>
>


Re: ExecuteSQL not working

2020-05-07 Thread Juan Pablo Gardella
Did you try using the official mssql jdbc driver?

On Fri, 8 May 2020 at 01:34, Luis Carmona  wrote:

> Hi everyone,
>
> I am trying to execute a query against an MS SQL Server through the jTDS
> driver, but I can't figure out why it is giving me errors all the time.
>
> If I leave the processor as it is, setting the controller service
> obviously, it throws the error in the image saying "empty name".
>
> If I set the processor with Normalize Table/Column Names and Use Avro Types
> set to TRUE, then it throws the error in the image saying "Index out of range".
>
> The query is as simple as this:
>
> SELECT MAX(ORDEN)
> FROM demo_planta2.dbo.ORDEN_VENTA_CAB
>   WHERE
>   CODEMPRESA=2
>   AND
>   CODTEMPO=1;
>
> Please some tip about what could be wrong in my settings.
>
> Regards,
>
> LC
>
>
>


Re: How to block a flow until first file is finish

2020-03-24 Thread Juan Pablo Gardella
I think these common use cases are easily implemented with Pentaho's
Transformation and Job constructs. Below is a short summary of how each
type of construct works:

1) *Process Group (PG)*: Every processor inside the PG runs forever, that
is, all are always triggered.
2) *Pentaho Transformation*: Every processor (a.k.a. step in Pentaho
parlance) starts. The equivalent method to onTrigger (processRow) returns a
boolean. When it returns false, the step stops. So, if you have N steps,
once all return false the transformation completes. All steps start at the
same time.
3) *Pentaho Job*: Used for coordination. It can connect different
transformations. For example, run transformation T1; if T1 completes,
execute T2.

It is difficult to implement this type of flow/ETL in Nifi. We know Nifi is
not an ETL tool, but sometimes it is used as one, and often we need the
kind of coordination between processors that is not easy to implement
today. It would be great to have some new type of component/construct,
like a *Process Group for ETL*, that can be used like a transformation:
once all processors *complete*, continue with the next step.

Data flow and ETL are different types of design. For data flow Nifi is
great, but for ETL, starting to coordinate things makes the nifi flow
complex.

Juan



On Tue, 24 Mar 2020 at 09:34, Jens M. Kofoed  wrote:

> Hi Chris
>
> Thanks for the links, but yes I've read them before and I'm using similar
> flows in other use cases.
> In both examples from the link a process is splitting data up in two
> flows, flow A and flow B. In flow A you use a wait process blocking the
> rest of flow A. In the end of flow B you have a notify process which
> trigger flow A.
> My issue is that I only have 1 flow. It could be something like this, where
> the issue is that the output of the ExecuteProcess uses static filenames
> (stupid, I know). So if the ExecuteProcess runs multiple times before the
> GetFile is done, it will overwrite old files. Therefore I need some way to
> block the PutFile and ExecuteProcess until the GetFile is done or the
> output folder is empty:
> GetData - UpdateAttribute - WAIT (or block until output folder from
> ExecuteProcess is empty) - PutFile - ExecuteProcess - GetFile - NOTIFY - do
> something more.
>
> regards
> Jens
>
>
>
>
> Den tir. 24. mar. 2020 kl. 11.37 skrev Chris Sampson <
> chris.samp...@naimuri.com>:
>
>> Have you looked at some Wait-Notify examples (it does sound like what
>> you're wanting to use):
>>
>> https://gist.github.com/ijokarumawak/20125d663d2116c6dae1eecae8d7acbc
>>
>>
>> https://pierrevillard.com/2018/06/27/nifi-workflow-monitoring-wait-notify-pattern-with-split-and-merge/
>>
>> Your Notify should be on a different "branch" of your Flow than your Wait
>> - send duplicate copies of FlowFiles to the Wait and also to the part of
>> your flow that does the "real" processing.
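
As a sketch, the Wait/Notify pairing only needs a shared signal identifier
and cache service (these are the actual property names; the values are
placeholders):

    Wait:    Release Signal Identifier = ${filename}
             Distributed Cache Service = DistributedMapCacheClientService
    Notify:  Release Signal Identifier = ${filename}
             Distributed Cache Service = DistributedMapCacheClientService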
>>
>>
>> *Chris Sampson*
>> IT Consultant
>> *Tel:* 07867 843 675
>> chris.samp...@naimuri.com
>>
>>
>>
>> On Tue, 24 Mar 2020 at 10:05, Jens M. Kofoed 
>> wrote:
>>
>>> Hi
>>>
>>> I'm trying to build a flow, where a putfile process is only allow to
>>> write a file, if the previous file has finished the following block in the
>>> flow. But I can't works it out with my wait notify. Since the first file
>>> can't go through the wait block because the notify is coming after the
>>> wait block.
>>>
>>> kind regards
>>> Jens
>>>
>>


Re: FW: Execute SQL to MSFT SQLServer - getting error

2020-02-20 Thread Juan Pablo Gardella
According to the JDBC driver documentation,
integratedSecurity is a boolean property, and to authenticate against
SQLServer you have three options for integratedSecurity=true: Kerberos,
NTLM or NativeAuthentication. NativeAuthentication only works on Windows,
so you cannot use it on Linux.
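
For example, the NTLM route would use a URL roughly like this, where host,
domain and database are placeholders (NTLM requires a reasonably recent
mssql-jdbc):

jdbc:sqlserver://host:1433;databaseName=DEMODB;integratedSecurity=true;authenticationScheme=NTLM;domain=MYDOMAIN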

Juan

On Thu, 20 Feb 2020 at 14:49, Pierre Villard 
wrote:

> Based on this thread [1], it looks like it's because the DLL are not
> loaded by Java. I'd double check what you set for java.library.path system
> property.
>
> [1]
> https://stackoverflow.com/questions/6087819/jdbc-sqlserverexception-this-driver-is-not-configured-for-integrated-authentic
>
> Le jeu. 20 févr. 2020 à 02:03, Samarendra Sahoo <
> sahoo.samaren...@gmail.com> a écrit :
>
>> Thanks a lot Peter. Attaching the controller. In addition to the
>> controller setting, adding jdbc 8 driver, we have added mssql-jdbc_auth dll
>> at below locations in the Nifi server.
>>
>>
>> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.141-3.b16.el6_9.x86_64/bin/mssql-jdbc_auth-8.2.0.x64.dll
>>
>> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.141-3.b16.el6_9.x86_64/jre/bin/mssql-jdbc_auth-8.2.0.x64.dll
>> - Forwarded message -
>>
>> From: *Pierre Villard* 
>> Date: Thu, 20 Feb, 2020, 12:54 AM
>> Subject: Re: Execute SQL to MSFT SQLServer - getting error
>> To: 
>> Cc: 
>>
>>
>>
>> Hi,
>>
>>
>>
>> Based on the error, it sounds like the driver you are using is not
>> configured for integrated authentication. Can you share more details about
>> how you configured your controller service?
>>
>>
>>
>> Thanks,
>>
>> Pierre
>>
>>
>>
>> Le mer. 19 févr. 2020 à 09:58, Samarendra Sahoo <
>> sahoo.samaren...@gmail.com> a écrit :
>>
>> Dear all,
>> We are trying to connect to SQLServer from Nifi installed on a RHEL Linux
>> VM. We are not able to find a way to connect SQLServer configured with
>> Integrated Security=*SSPI *with Windows login Authentication*  ( Note :
>> Kerberos not enabled )  *. We are trying to use *DBCPConnectionPool *with
>> JDBC database connector 
>> *jdbc:sqlserver://xx.xxx.xx.xxx:port;databaseName=DEMODB;integratedSecurity=SSPI
>> . *However, it does not allow us to log in to SQLServer. Does nifi provide
>> any connector/component to connect in SSPI security mode? Or is there
>> any other way to connect using a custom code connector?
>>
>>
>>
>> Attaching error message for reference
>>
>>


Re: NiFi Invoke Http processor with oauth2

2020-01-21 Thread Juan Pablo Gardella
*removed dev list*

Hi, I did a processor + a simple token service to do authentication based
on a token. Hope it helps you. You should probably adjust the token
service to match the JSON that contains the token. Below are the processor
and the service. Notice I am using it in a standalone nifi. For clusters,
you may have to externalize where the token is stored if you don't request
a new token on each node.


*TokenAttributeUpdater:*

@EventDriven
@SideEffectFree
@InputRequirement(Requirement.INPUT_REQUIRED)
@SupportsBatching
@AutoService(Processor.class)
@WritesAttribute(attribute = "token", description = "Token to use in the HTTP header")
public class TokenAttributeUpdater extends AbstractProcessor {

    private static final Logger LOGGER = LoggerFactory.getLogger(TokenAttributeUpdater.class);

    public static final Relationship REL_SUCCESS =
            new Relationship.Builder().name("success").description("").build();
    public static final Relationship REL_FAIL =
            new Relationship.Builder().name("fail").description("").build();

    public static final PropertyDescriptor TOKEN_SERVICE = new PropertyDescriptor.Builder()
            .name("Token service")
            .identifiesControllerService(TokenService.class)
            .required(true)
            .build();

    private static final List<PropertyDescriptor> PROPERTIES;
    private static final Set<Relationship> RELATIONSHIPS;

    static {
        final List<PropertyDescriptor> props = new ArrayList<>();
        props.add(TOKEN_SERVICE);
        PROPERTIES = Collections.unmodifiableList(props);

        final Set<Relationship> relationships = new HashSet<>();
        relationships.add(REL_SUCCESS);
        relationships.add(REL_FAIL);
        RELATIONSHIPS = Collections.unmodifiableSet(relationships);
    }

    @Override
    public Set<Relationship> getRelationships() {
        return RELATIONSHIPS;
    }

    @Override
    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
        return PROPERTIES;
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        final FlowFile file = session.get();
        if (file == null) {
            return;
        }
        try {
            doProcessFile(file, session, context);
        } catch (final Exception e) {
            LOGGER.error("Failed to obtain token {}", e.getMessage());
            final FlowFile flowFileToWrite = session.putAttribute(file, "error",
                    e.getClass() + ":" + e.getMessage());
            session.transfer(flowFileToWrite, REL_FAIL);
        }
    }

    // Asks the controller service for a (possibly cached) token and stores it in
    // the "token" attribute, ready for a downstream InvokeHTTP header.
    private void doProcessFile(final FlowFile flowFile, ProcessSession session, ProcessContext context) {
        final FlowFile flowFileToWrite = session.putAttribute(flowFile, "token",
                context.getProperty(TOKEN_SERVICE).asControllerService(TokenService.class).getToken());
        session.transfer(flowFileToWrite, REL_SUCCESS);
    }
}

*TokenService:*

public interface TokenService extends ControllerService {
    String getToken() throws ProcessException;
}

*TokenServiceImpl:*

@CapabilityDescription("Token handler.")
@Tags({"token", "credentials"})
@AutoService(ControllerService.class)
public class TokenServiceImpl extends AbstractControllerService implements TokenService {

    private static final class Token {
        // How many extra seconds are used to mark a token as expired.
        private static final long SECONDS_WINDOWS = 60L;
        private final String token;
        /** When the token expires. */
        private final Instant expires_in;

        public Token(String token, int expires_in) {
            this.token = Objects.requireNonNull(token, "token cannot be null");
            if (expires_in <= 0) {
                throw new IllegalArgumentException("expires_in should be greater than 0");
            }
            this.expires_in = Instant.now().plusSeconds(expires_in);
        }

        public String getToken() {
            if (isExpired()) {
                throw new IllegalStateException("Token expired");
            }
            return token;
        }

        public boolean isExpired() {
            return Instant.now().plusSeconds(SECONDS_WINDOWS).isAfter(expires_in);
        }
    }

    // TODO: this class should match the JSON which includes the token info and when it expires.
    private static final class TokenResponse {
        String token;
        int expires_in;
    }

    public static final PropertyDescriptor LOGIN_URI = new PropertyDescriptor.Builder()
            .name("login_uri").description("Login URI")
            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .addValidator(StandardValidators.URI_VALIDATOR)
            .required(true).build();

    public static final PropertyDescriptor USER = new PropertyDescriptor.Builder()
            .name("user").description("user")
            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
            .required(true).build();

    public static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
            .name("password").description("password")
            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
            .sensitive(true).required(true).build();

    public static final PropertyDescriptor CONNECTION_TIMEOUT = new PropertyDescriptor.Builder()
            .name("connection.timeout").description("Connection timeout")
            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .addValidator(StandardValidators.TIME_PERIOD_VALIDATOR)
            .defaultValue("10 sec").required(true).build();

    private static final String JSON_TEMPLATE = "{\"username\": \"%s\",\"password\": \"%s\"}";

Re: NiFi ValidateRecord - unable to handle missing mandatory ARRAY ?

2019-12-11 Thread Juan Pablo Gardella
The bug https://issues.apache.org/jira/browse/NIFI-4893 was detected by me.
Do you have a reproducible flow to validate it?

On Wed, 11 Dec 2019 at 12:54, Oliveira, Emanuel 
wrote:

> Oh I see, your analysis makes sense, but sorry, I last did java 20 years
> ago; nowadays I'm mostly a data engineer (oracle db, etl tools, custom
> migrations, snowflake and lately nifi). So count on me to detect
> opportunities to improve things, but I am not able to change base
> code/tests.
>
>
>
> Thanks so much for your time and analysis, lets wait for community to step
> up to do the fix and update/run the unit tests 😊
>
>
>
> Thanks//Regards,
>
> *Emanuel Oliveira*
>
> Senior Oracle/Data Engineer | CTG | Galway
> TEL ext: 353 – (0)91-74  4971 | int: 8-737 4971 *|*  who's who
> 
>
>
>
> *From:* Mark Payne 
> *Sent:* Wednesday 11 December 2019 15:25
> *To:* users@nifi.apache.org
> *Subject:* Re: NiFi ValidateRecord - unable to handle missing mandatory
> ARRAY ?
>
>
>
> *This email is from an external source - **exercise caution regarding
> links and attachments. *
>
>
>
> Emanuel,
>
>
>
> I looked into this a week or so ago, but haven't had a chance to resolve
> the issue yet. It does appear to be a bug. Specifically, I believe the bug
> is here [1].  When we create a RecordSchema from the Avro Schema, we set
> the default value for the array to an empty array, instead of null. Because
> of this, when the JSON is parsed, we end up creating a Record with an empty
> array for the "Record" field instead of a null. As as result, the Record is
> considered valid because it does have an array (it's just empty). I think
> it *should* be a null value instead.
>
>
>
> It looks like this was introduced in NIFI-4893 [2]. We can easily change
> it to just return a null value for the default, but that does result in two
> of the unit tests added in NIFI-4893 failing. It may be that those unit
> tests need to be fixed, or it may be that such a change does break
> something. I just haven't had a chance yet to dig that far into it.
>
>
>
> If you're someone who is comfortable digging into the code and making the
> updates, then please do and I'm happy to review a PR as soon as I'm able.
>
>
>
> Thanks
>
> -Mark
>
>
>
>
>
> [1]
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java#L629-L631
>
>
>
> [2] https://issues.apache.org/jira/browse/NIFI-4893
>
>
>
>
>
>
>
> On Dec 11, 2019, at 8:02 AM, Oliveira, Emanuel 
> wrote:
>
>
>
> Anyone knowledgeable on avro schemas, can you please confirm/suggest
> whether this inability to invalidate a json payload missing an array at the
> root when Allow Extra Fields=true is normal?
>
>
>
> There are 2 options:
>
> · ValidateRecord.Allow Extra Fields=false -> need to supply the full
> schema
>
> · ValidateRecord.Allow Extra Fields=true -> this is what I have been
> testing/want: a way to supply a schema with only the mandatory fields.
>
>
>
> I want 2 mandatory fields, an array with at least 1 element having
> eventVersion, so minimal json should be:
>
> { (..)
>
>"Records": [{
>
>  "eventVersion": "aaa"
>
>  (..)
>
>   }
>
>]
>
>(..)
>
> }
>
>
>
> The problem is that ValidateRecord considers the FlowFile valid if the
> “Records” array is missing at the root
>
> {
>
>"Service": "sss",
>
>"Event": "e",
>
>"Time": "2019-11-25T16:21:53.280Z",
>
>"Bucket": "bbb-b-bbb-b-bb",
>
>"RequestId": "RR",
>
>"HostId": "h",
>
> }
>
>
>
> If I supply the “Records” array, then the schema correctly validates that
> I need at least eventVersion on the array element record.
>
>
>
>
>
> So… maybe my question can be tuned to “is it possible in avro schema
> syntax to specify cardinalities like in a db e/r diagram, where a relation
> can be one of the following:
>
> 0..n
>
> 0..1
>
> 1 and only 1?”
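
For reference, Avro schemas can only distinguish required (non-null) from
optional (nullable) fields; a minimum number of array elements is not
expressible in the schema itself. A sketch of the two forms, reusing the
thread's field name (the "RecordEntry" record type is a placeholder):

Required array:
  { "name": "Records", "type": { "type": "array", "items": "RecordEntry" } }

Optional array:
  { "name": "Records",
    "type": ["null", { "type": "array", "items": "RecordEntry" }],
    "default": null }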
>
>
>
>
>
> Thanks//Regards,
>
> *Emanuel Oliveira*
>
> Senior Oracle/Data Engineer | CTG | Galway
> TEL ext: 353 – (0)91-74  4971 | int: 8-737 4971 *|*  who's who
> 
>
>
>
> *From:* Oliveira, Emanuel 
> *Sent:* Friday 6 December 2019 10:15
> *To:* users@nifi.apache.org
> *Subject:* RE: NiFi ValidateRecord - unable to handle missing mandatory
> ARRAY ?
>
>
>
> Hi Mark, forgot to share the NiFi version we using:
>
> 1.8.0
>
> 10/22/2018 23:48:30 EDT
>
> Tagged nifi-1.8.0-RC3
>
>
>
>
>
> Thanks//Regards,
>
> *Emanuel Oliveira*
>
> Senior Oracle/Data Engineer | CTG | Galway
> TEL ext: 353 – (0)91-74  4971 | int: 8-737 4971 *|*  who's who
> 
>
>
>
> *From:* Emanuel Oliveira 
> *Sent:* Thursday 5 December 2019 22:42
> *To:* users@nifi.apache.org
> *Subject:* Re: NiFi ValidateRecord - unable to handle missing mandatory
> ARRAY ?
>
>
>
> *This email is from an external source - **exercise caution re

Re: Encrypting passwords - Nifi 1.10.0

2019-12-09 Thread Juan Pablo Gardella
Hi Andy,

I verified what you suggested:
* Can you look for any other entries of the form “nifi.xyz.protected=“?  ->
Verified, no extra protected properties.
* Are you sure that is being removed? -> I am sure.

When you say *check the value of your master key to ensure it is the same
key that encrypted that value*: how can I check that?

Thanks
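
For reference, the toolkit persists the master key in conf/bootstrap.conf; a
sketch, where the hex value is a placeholder:

nifi.bootstrap.sensitive.key=0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

Comparing that value with the key used when the properties were encrypted
shows whether they match.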

On Mon, 9 Dec 2019 at 11:44, Andy LoPresto  wrote:

> Thanks Juan. A couple notes:
>
> Using the same plaintext value for multiple keys will not cause a
> technical problem, but it is bad security practice and is strongly
> discouraged. It would not be the source of the issue here (however, you
> need to use a fully-formed AES key for the provenance encryption key, and
> it’s unlikely that would be the same value or format as a password for the
> sensitive properties. That can cause other problems later on).
>
> As you are using the plain WriteAheadProvenanceRepository and not the
> EncryptedWriteAheadProvenanceRepository, you do not need to provide (and in
> fact, they are currently ignored) any properties for
> nifi.provenance.encryption.*. So you can remove those lines entirely (and
> probably should just for clarity and not to confuse anyone else who looks
> at these properties). If you want to use the encrypted repository, you’ll
> need to change the repository implementation (see step-by-step details in
> the link I provided earlier).
>
> The nested exception was that one of the encrypted properties did not
> contain the “||” delimiter. From visual inspection, it appears that all
> properties you have listed here do contain the delimiter. That exception is
> only thrown in one condition, and that is a simple string contains check
> for the delimiter. Are you sure these are the only encrypted values in your
> nifi.properties file, and that you are referencing the correct file? Can
> you look for any other entries of the form “nifi.xyz.protected=“?
>
> You mentioned that it generates two unique entries for
> “nifi.provenance.repository.encryption.key” and you remove the plaintext
> one. Are you sure that is being removed? If the system believes that
> property is encrypted (as indicated by the
> nifi.provenance.repository.encryption.key.protected=aes/gcm/256” line
> following it) and tries to decrypt the plaintext value, that would cause
> the exception to be thrown.
>
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Dec 9, 2019, at 2:22 PM, Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
> Thanks for answering my questions Andy,
>
> Below are the sensitive properties:
>
> # Provenance Repository Properties
>
> nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
> nifi.provenance.repository.debug.frequency=1_000_000
>
> *nifi.provenance.repository.encryption.key=fbRg/ZgK7U8qJcrU||4nI1n1aRD0Tooq7TLSTyVDhkmX8*
> nifi.provenance.repository.encryption.key.protected=aes/gcm/256
> nifi.provenance.repository.encryption.key.provider.location=
> nifi.provenance.repository.encryption.key.id=
> # security properties #
> *nifi.sensitive.props.key=jtZiGY+mZyHPQIc1||/IJnMQBBXKN7VNkwMf6Oo7vZmAs*
> nifi.sensitive.props.key.protected=aes/gcm/256
> nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
> nifi.sensitive.props.provider=BC
> nifi.sensitive.props.additional.keys=
>
> nifi.security.keystore=/opt/certs/keystore.jks
> nifi.security.keystoreType=JKS
>
> *nifi.security.keystorePasswd=GuuOm4fyK6yvo76H||av/NQmH7Hw8qK9k0NOMRSjp08tw+walt4D5JLpYPiCHG/Z7DDq5QZ+ui/dKOXxtapH76Gjpt3hMwmP0*
> nifi.security.keystorePasswd.protected=aes/gcm/256
>
> *nifi.security.keyPasswd=y4spsJvsy5Fzc3Uq||Q1vMntNgfLLMMSJuyPNn8+9aHlH+banQy82Ly0qrLWf6hNUTNgA+akyh86rlf2J5XZCONL3JCLX6mY0*
> nifi.security.keyPasswd.protected=aes/gcm/256
> nifi.security.truststore=/opt/certs/truststore.jks
> nifi.security.truststoreType=JKS
>
> *nifi.security.truststorePasswd=9r+fyOSjRUXQLcZG||YwAtPYorADqHSKFUmU4H3SbyqvYqqYNZiGidgCOUCibPdP2jiEAMGtLt5xyFsMcNPm5Pye2qXEioLR8*
> nifi.security.truststorePasswd.protected=aes/gcm/256
>
> These properties are generated by the toolkit. I am using the same value
> for nifi.sensitive.props.key and nifi.provenance.repository.encryption.key;
> I was not aware they should be different. Could that be the problem?
>
> Juan
>
> On Mon, 9 Dec 2019 at 08:20, Andy LoPresto  wrote:
>
>> Hi Juan,
>>
>> The error you are getting is saying that one of the protected properties
>> is not of the expected format. While the Sensitive Property Provider
>> mechanism is extensible (see NIFI-5481 [1] for additional options bei

Re: Encrypting passwords - Nifi 1.10.0

2019-12-09 Thread Juan Pablo Gardella
nifi.sensitive.props.key=my_bad_sensitive_props_password
> nifi.sensitive.props.key.protected= # or remove this line entirely
>
>
> [1] https://github.com/apache/nifi/pull/3672
> [2]
> https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#encrypted-provenance
>
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Dec 8, 2019, at 8:01 PM, Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
> Hello all,
>
> I am trying to protect plain text passwords. I am using the latest docker
> image (1.10.0), and edited manually nifi.sensitive.props.key as below
>
> sed -i -e
> "s|^nifi.sensitive.props.key=.*$|nifi.sensitive.props.key=${NIFI_SENSITIVE_PROPS_KEY}|"
> /opt/nifi/nifi-current/conf/nifi.properties
> sed -i -e
> "s|^nifi.provenance.repository.encryption.key=.*$|nifi.provenance.repository.encryption.key=${NIFI_SENSITIVE_PROPS_KEY}|"
> /opt/nifi/nifi-current/conf/nifi.properties
>
> (this command for some reason does not update the file inside the
> Dockerfile, I have to do inside the container).
>
> After updated that property, I run following command inside the container:
>
> bash /opt/nifi/nifi-toolkit-current/bin/encrypt-config.sh -n
> /opt/nifi/nifi-current/conf/nifi.properties -b
> /opt/nifi/nifi-current/conf/bootstrap.conf -a
> /opt/nifi/nifi-current/conf/authorizers.xml -l
> /opt/nifi/nifi-current/conf/login-identity-providers.xml
>
> It prompts to put a master password and after that, I restart[1] the
> container but it failed to start with below error:
>
> nifi  | 2019-12-08 18:57:31,777 INFO [main]
> o.a.nifi.properties.NiFiPropertiesLoader Loaded 162 properties from
> /opt/nifi/nifi-current/./conf/nifi.properties
> *nifi  | 2019-12-08 18:57:31,933 INFO [main]
> o.a.n.properties.ProtectedNiFiProperties There are 5 protected properties
> of 5 sensitive properties (100%)*
> nifi  | 2019-12-08 18:57:31,935 ERROR [main] org.apache.nifi.NiFi
> Failure to launch NiFi due to java.lang.IllegalArgumentException: There was
> an issue decrypting protected properties
> nifi  | java.lang.IllegalArgumentException: There was an issue
> decrypting protected properties
> nifi  | at org.apache.nifi.NiFi.initializeProperties(NiFi.java:341)
> nifi  | at
> org.apache.nifi.NiFi.convertArgumentsToValidatedNiFiProperties(NiFi.java:309)
> nifi  | at org.apache.nifi.NiFi.main(NiFi.java:300)
> nifi  | Caused by: java.lang.IllegalArgumentException: The cipher
> text does not contain the delimiter || -- it should be of the form
> Base64(IV) || Base64(cipherText)
> nifi  | at
> org.apache.nifi.properties.AESSensitivePropertyProvider.unprotect(AESSensitivePropertyProvider.java:217)
> nifi  | at
> org.apache.nifi.properties.ProtectedNiFiProperties.unprotectValue(ProtectedNiFiProperties.java:524)
> nifi  | at
> org.apache.nifi.properties.ProtectedNiFiProperties.getUnprotectedProperties(ProtectedNiFiProperties.java:343)
> nifi  | at
> org.apache.nifi.properties.NiFiPropertiesLoader.load(NiFiPropertiesLoader.java:209)
> nifi  | at
> org.apache.nifi.properties.NiFiPropertiesLoader.load(NiFiPropertiesLoader.java:223)
> nifi  | at
> org.apache.nifi.properties.NiFiPropertiesLoader.loadDefault(NiFiPropertiesLoader.java:130)
> nifi  | at
> org.apache.nifi.properties.NiFiPropertiesLoader.get(NiFiPropertiesLoader.java:241)
> nifi  | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
> nifi  | at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> nifi  | at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> nifi  | at java.lang.reflect.Method.invoke(Method.java:498)
> nifi  | at org.apache.nifi.NiFi.initializeProperties(NiFi.java:336)
> nifi  | ... 2 common frames omitted
>
> Any idea why it is failing?
>
> Thanks,
> Juan
>
> [1] Actually, after that command two entries are generated to
> nifi.provenance.repository.encryption.key= in the file, one with the plain
> text and the other encrypted. I have to remove manually the plain text one.
>
>
>


Encrypting passwords - Nifi 1.10.0

2019-12-08 Thread Juan Pablo Gardella
Hello all,

I am trying to protect plain text passwords. I am using the latest docker
image (1.10.0), and manually edited nifi.sensitive.props.key as below:

sed -i -e
"s|^nifi.sensitive.props.key=.*$|nifi.sensitive.props.key=${NIFI_SENSITIVE_PROPS_KEY}|"
/opt/nifi/nifi-current/conf/nifi.properties
sed -i -e
"s|^nifi.provenance.repository.encryption.key=.*$|nifi.provenance.repository.encryption.key=${NIFI_SENSITIVE_PROPS_KEY}|"
/opt/nifi/nifi-current/conf/nifi.properties

(for some reason this command does not update the file when run inside the
Dockerfile; I have to run it inside the container).
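A likely explanation is that the environment variable is only defined at
container run time, not at image build time. One workaround (a sketch,
assuming the apache/nifi image layout; the wrapper script name is
hypothetical) is to defer the substitution to a wrapper entrypoint:

#!/bin/sh
# entrypoint-wrapper.sh (hypothetical): patch nifi.properties at startup,
# when NIFI_SENSITIVE_PROPS_KEY is actually set, then hand off to the
# stock start script shipped in the image
sed -i -e "s|^nifi.sensitive.props.key=.*$|nifi.sensitive.props.key=${NIFI_SENSITIVE_PROPS_KEY}|" \
    /opt/nifi/nifi-current/conf/nifi.properties
exec /opt/nifi/scripts/start.sh "$@"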

After updating that property, I run the following command inside the container:

bash /opt/nifi/nifi-toolkit-current/bin/encrypt-config.sh -n
/opt/nifi/nifi-current/conf/nifi.properties -b
/opt/nifi/nifi-current/conf/bootstrap.conf -a
/opt/nifi/nifi-current/conf/authorizers.xml -l
/opt/nifi/nifi-current/conf/login-identity-providers.xml

It prompts for a master password and, after that, I restart[1] the
container, but it fails to start with the error below:

nifi  | 2019-12-08 18:57:31,777 INFO [main]
o.a.nifi.properties.NiFiPropertiesLoader Loaded 162 properties from
/opt/nifi/nifi-current/./conf/nifi.properties
*nifi  | 2019-12-08 18:57:31,933 INFO [main]
o.a.n.properties.ProtectedNiFiProperties There are 5 protected properties
of 5 sensitive properties (100%)*
nifi  | 2019-12-08 18:57:31,935 ERROR [main] org.apache.nifi.NiFi
Failure to launch NiFi due to java.lang.IllegalArgumentException: There was
an issue decrypting protected properties
nifi  | java.lang.IllegalArgumentException: There was an issue
decrypting protected properties
nifi  | at org.apache.nifi.NiFi.initializeProperties(NiFi.java:341)
nifi  | at
org.apache.nifi.NiFi.convertArgumentsToValidatedNiFiProperties(NiFi.java:309)
nifi  | at org.apache.nifi.NiFi.main(NiFi.java:300)
nifi  | Caused by: java.lang.IllegalArgumentException: The cipher
text does not contain the delimiter || -- it should be of the form
Base64(IV) || Base64(cipherText)
nifi  | at
org.apache.nifi.properties.AESSensitivePropertyProvider.unprotect(AESSensitivePropertyProvider.java:217)
nifi  | at
org.apache.nifi.properties.ProtectedNiFiProperties.unprotectValue(ProtectedNiFiProperties.java:524)
nifi  | at
org.apache.nifi.properties.ProtectedNiFiProperties.getUnprotectedProperties(ProtectedNiFiProperties.java:343)
nifi  | at
org.apache.nifi.properties.NiFiPropertiesLoader.load(NiFiPropertiesLoader.java:209)
nifi  | at
org.apache.nifi.properties.NiFiPropertiesLoader.load(NiFiPropertiesLoader.java:223)
nifi  | at
org.apache.nifi.properties.NiFiPropertiesLoader.loadDefault(NiFiPropertiesLoader.java:130)
nifi  | at
org.apache.nifi.properties.NiFiPropertiesLoader.get(NiFiPropertiesLoader.java:241)
nifi  | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
nifi  | at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
nifi  | at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
nifi  | at java.lang.reflect.Method.invoke(Method.java:498)
nifi  | at org.apache.nifi.NiFi.initializeProperties(NiFi.java:336)
nifi  | ... 2 common frames omitted

Any idea why it is failing?

Thanks,
Juan

[1] Actually, after that command two entries are generated for
nifi.provenance.repository.encryption.key= in the file, one with the plain
text value and the other encrypted. I have to remove the plain text one manually.


Re: NIFI expression language

2019-11-22 Thread Juan Pablo Gardella
Check here:
https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
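
In short, for the expression quoted below: (?s) turns on DOTALL mode, so
that . also matches line terminators; ("eventTime"\s*:\s*) captures the key,
any surrounding whitespace, and the colon as group 1; (.*?) is a reluctant
quantifier that matches as few characters as possible before the closing
quote; and $1 in the replacement re-inserts that captured group in front of
the new value.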

On Fri, 22 Nov 2019 at 16:08, KhajaAsmath Mohammed 
wrote:

> Hi,
>
> I have an existing flow where it replaces a value in the JSON document. I
> understood what it does, but I would like to know what (?s) and .*? do and
> how to learn more about all these characters in the expression. Any
> guidance would really help me.
>
> (?s)("eventTime"\s*:\s*)("(.*?)") with $1\"${SEND_TIME}\"
>
> [image: image.png]
>
> Thanks,
> Asmath
>


is it possible to query if a processor is yield?

2019-11-17 Thread Juan Pablo Gardella
Hello all,

Is it possible to determine whether a processor has yielded, via the UI or the API?

Thanks,
Juan


NPE at JMSConsumer processor

2019-11-12 Thread Juan Pablo Gardella
Hello all,

I found the following NPE in Nifi 1.5.0 version:

2019-11-13 04:19:11,031 ERROR [Timer-Driven Process Thread-5]
o.apache.nifi.jms.processors.ConsumeJMS ConsumeJMS -
JMSConsumer[destination:null; pub-sub:true;] ConsumeJMS -
JMSConsumer[destination:null; pub-sub:true;] failed to process session due
to java.lang.NullPointerException: {}
java.lang.NullPointerException: null
    at org.apache.nifi.jms.processors.MessageBodyToBytesConverter.toBytes(MessageBodyToBytesConverter.java:40)
    at org.apache.nifi.jms.processors.JMSConsumer$1.doInJms(JMSConsumer.java:84)
    at org.apache.nifi.jms.processors.JMSConsumer$1.doInJms(JMSConsumer.java:65)
    at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:494)
    at org.apache.nifi.jms.processors.JMSConsumer.consume(JMSConsumer.java:65)
    at org.apache.nifi.jms.processors.ConsumeJMS.rendezvousWithJms(ConsumeJMS.java:144)
    at org.apache.nifi.jms.processors.AbstractJMSProcessor.onTrigger(AbstractJMSProcessor.java:139)
    at org.apache.nifi.jms.processors.ConsumeJMS.onTrigger(ConsumeJMS.java:56)
    at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)

Basically, TextMessage.getText() comes in null. According to the javadoc,
this is possible.

The fix I made consists of logging a WARN and writing an empty byte array
as output. The null is not handled in the latest version either. Is writing
an empty byte array to the flowfile body a valid solution in this scenario?
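
A minimal sketch of that guard (illustrative only; the method follows the
converter named in the stack trace above, and the charset choice is an
assumption):

// imports: javax.jms.TextMessage, javax.jms.JMSException,
//          java.nio.charset.StandardCharsets
public static byte[] toBytes(TextMessage message) throws JMSException {
    final String text = message.getText(); // may legitimately be null per the JMS javadoc
    if (text == null) {
        // warn and fall back to an empty payload instead of throwing an NPE
        return new byte[0];
    }
    return text.getBytes(StandardCharsets.UTF_8);
}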

Juan


Re: ElasticSearchClientServiceImpl not working for secured ElasticSearch

2019-10-18 Thread Juan Pablo Gardella
I have an issue to validate, reported before at:
http://apache-nifi.1125220.n5.nabble.com/Error-instantiating-template-on-cluster-The-specified-observer-identifier-already-exists-td12973.html

I reproduced it on Apache NiFi 1.5.0. I will try to check on NiFi 1.9.2.
The template to load is nearly 50 MB.

Juan

On Fri, 18 Oct 2019 at 14:13, Joe Witt  wrote:

> is a daily effort at this point.  i am close to pushing first rc.  have
> been watching for stability on bug fixes.
>
> On Fri, Oct 18, 2019 at 1:10 PM Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
>> Any ETA for Nifi 1.10 release?
>>
>> On Fri, 18 Oct 2019 at 13:39, Mike Thomsen 
>> wrote:
>>
>>> Peter,
>>>
>>> Are you configuring the service as a trust-only configuration? If so,
>>> that's been addressed in the 1.10 which is due for release in the near(ish)
>>> future.
>>>
>>> https://issues.apache.org/jira/browse/NIFI-6228
>>>
>>> Thanks,
>>>
>>> Mike
>>>
>>> On Fri, Oct 18, 2019 at 11:06 AM Peter Moberg 
>>> wrote:
>>>
>>>> As a follow-up.
>>>>
>>>> On the Nifi node I am able to do a GET to Elastic Search using curl. I
>>>> specify the --cacert option giving it the self-signed root certificate.
>>>>
>>>> Of course, this isn’t using the TrustStore but I am able to use the
>>>> TrustStore if I use other ES processors… just not the
>>>> ElasticSearchClientServicesImpl.
>>>>
>>>> On Oct 18, 2019, 12:48 AM -0500, Peter Moberg ,
>>>> wrote:
>>>>
>>>> Hi Andy,
>>>>
>>>> thanks for your suggestions. Here is what I have tried so far (still no
>>>> luck).
>>>>
>>>> Connecting with openssl and viewing the certs it presents
>>>>
>>>> *openssl s_client -connect quickstart-es-http.es-cluster -showcerts*
>>>>
>>>> If I then look inside the server cert I can find this
>>>>
>>>> *Server Cert:*
>>>> Issuer: OU = quickstart, CN = quickstart-http
>>>> X509v3 Subject Alternative Name: DNS:quickstart-es-http.es-cluster.es.local,
>>>> DNS:quickstart-es-http, DNS:quickstart-es-http.es-cluster.svc,
>>>> DNS:quickstart-es-http.es-cluster
>>>>
>>>> If I look into the self-signed root cert I find this:
>>>>
>>>> *Root Cert: Subject: OU = quickstart, CN = quickstart-http*
>>>>
>>>> I now double check my trust store to make sure the root cert is there:
>>>>
>>>> *Trust store content*
>>>> Your keystore contains 1 entry
>>>> Alias name: ca_elastic
>>>> Creation date: Oct 16, 2019
>>>> Entry type: trustedCertEntry
>>>> Owner: CN=quickstart-http, OU=quickstart
>>>> Issuer: CN=quickstart-http, OU=quickstart
>>>> Serial number: 5aa50b6806d2394fff6f98d2b7d4c287
>>>> Valid from: Fri Oct 11 14:35:01 UTC 2019 until: Sat Oct 10 14:36:01 UTC 2020
>>>> Certificate fingerprints:
>>>>   MD5:    1E:E3:33:13:EA:AC:B5:61:23:DE:2E:1A:D7:9C:AA:F0
>>>>   SHA1:   62:EC:5B:EB:32:6A:38:3D:6A:6B:F7:10:5A:DE:E6:F1:F0:5B:07:99
>>>>   SHA256: B4:B5:06:9C:50:5F:E8:A1:58:7C:C7:2C:37:52:2F:E0:CF:32:18:18:68:E4:C7:37:F8:82:B3:BC:61:EB:5B:CF
>>>> Signature algorithm name: SHA256withRSA
>>>> Subject Public Key Algorithm: 2048-bit RSA key
>>>> Version: 3
>>>> Extensions:
>>>> #1: ObjectId: 2.5.29.19 Criticality=true
>>>>   BasicConstraints:[ CA:true PathLen:2147483647 ]
>>>> #2: ObjectId: 2.5.29.37 Criticality=false
>>>>   ExtendedKeyUsages [ serverAuth clientAuth ]
>>>> #3: ObjectId: 2.5.29.15 Criticality=true
>>>>   KeyUsage [ DigitalSignature Key_CertSign ]
>>>

Re: ElasticSearchClientServiceImpl not working for secured ElasticSearch

2019-10-18 Thread Juan Pablo Gardella
Any ETA for Nifi 1.10 release?

On Fri, 18 Oct 2019 at 13:39, Mike Thomsen  wrote:

> Peter,
>
> Are you configuring the service as a trust-only configuration? If so,
> that's been addressed in the 1.10 which is due for release in the near(ish)
> future.
>
> https://issues.apache.org/jira/browse/NIFI-6228
>
> Thanks,
>
> Mike
>
> On Fri, Oct 18, 2019 at 11:06 AM Peter Moberg 
> wrote:
>
>> As a follow-up.
>>
>> On the Nifi node I am able to do a GET to Elastic Search using curl. I
>> specify the --cacert option giving it the self-signed root certificate.
>>
>> Of course, this isn’t using the TrustStore but I am able to use the
>> TrustStore if I use other ES processors… just not the
>> ElasticSearchClientServicesImpl.
>>
>> On Oct 18, 2019, 12:48 AM -0500, Peter Moberg ,
>> wrote:
>>
>> Hi Andy,
>>
>> thanks for your suggestions. Here is what I have tried so far (still no
>> luck).
>>
>> Connecting with openssl and viewing the certs it presents
>>
>> *openssl s_client -connect quickstart-es-http.es-cluster -showcerts*
>>
>> If I then look inside the server cert I can find this
>>
>> *Server Cert:*
>> Issuer: OU = quickstart, CN = quickstart-http
>> X509v3 Subject Alternative Name: DNS:quickstart-es-http.es-cluster.es.local,
>> DNS:quickstart-es-http, DNS:quickstart-es-http.es-cluster.svc,
>> DNS:quickstart-es-http.es-cluster
>>
>>
>> If I look into the self-signed root cert I find this:
>>
>> *Root Cert: Subject: OU = quickstart, CN = quickstart-http*
>>
>> I now double check my trust store to make sure the root cert is there:
>>
>> *Trust store content*
>> Your keystore contains 1 entry
>> Alias name: ca_elastic
>> Creation date: Oct 16, 2019
>> Entry type: trustedCertEntry
>> Owner: CN=quickstart-http, OU=quickstart
>> Issuer: CN=quickstart-http, OU=quickstart
>> Serial number: 5aa50b6806d2394fff6f98d2b7d4c287
>> Valid from: Fri Oct 11 14:35:01 UTC 2019 until: Sat Oct 10 14:36:01 UTC 2020
>> Certificate fingerprints:
>>   MD5:    1E:E3:33:13:EA:AC:B5:61:23:DE:2E:1A:D7:9C:AA:F0
>>   SHA1:   62:EC:5B:EB:32:6A:38:3D:6A:6B:F7:10:5A:DE:E6:F1:F0:5B:07:99
>>   SHA256: B4:B5:06:9C:50:5F:E8:A1:58:7C:C7:2C:37:52:2F:E0:CF:32:18:18:68:E4:C7:37:F8:82:B3:BC:61:EB:5B:CF
>> Signature algorithm name: SHA256withRSA
>> Subject Public Key Algorithm: 2048-bit RSA key
>> Version: 3
>> Extensions:
>> #1: ObjectId: 2.5.29.19 Criticality=true
>>   BasicConstraints:[ CA:true PathLen:2147483647 ]
>> #2: ObjectId: 2.5.29.37 Criticality=false
>>   ExtendedKeyUsages [ serverAuth clientAuth ]
>> #3: ObjectId: 2.5.29.15 Criticality=true
>>   KeyUsage [ DigitalSignature Key_CertSign ]
>>
>> So everything looks OK. But when I run the
>> ElasticSearchClientServicesImpl with an SSLContext pointing to my trust
>> store I still get the following output in the log:
>>
>> Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
>>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>>   at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728)
>>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:330)
>>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:322)
>>   at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1633)
>>   at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
>>   at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1052)
>>   at sun.security.ssl.Handshaker$1.run(Handshaker.java:992)
>>   at sun.security.ssl.Handshaker$1.run(Handshaker.java:989)
>>   at java.security.AccessController.doPrivileged(Native Method)
>>   at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1467)
>>   at org.apache.http.nio.reactor.ssl.SSLIOSession.doRunTask(SSLIOSession.java:283)
>>   at org.apache.http.nio.reactor.ssl.SSLIOSession.doHandshake(SSLIOSession.java:353)
>>   ... 9 common frames omitted
>> Caused by: sun.security.validator.ValidatorException: PKIX path building
>> failed: sun.security.provider.certpath.SunCertPathBuilderException: unable
>> to find valid certification path to requested target
>>   at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:397)
>>   at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:302)
>>   at sun.security.validator.Validator.validate(Validator.java:262)
>>   at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>>   at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:281)
>>   at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
>>   at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1620)
>>   ... 17 common frames omitted
>>
>> Both the NiFi install and the Elasticsearch install are running …

Re: NIFI - PUTSQL - sql.args.1.type

2019-09-23 Thread Juan Pablo Gardella
The values of the constants defined at
https://docs.oracle.com/javase/8/docs/api/java/sql/Types.html
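
For quick reference, the most common constants there are: CHAR = 1,
NUMERIC = 2, DECIMAL = 3, INTEGER = 4, SMALLINT = 5, FLOAT = 6, REAL = 7,
DOUBLE = 8, VARCHAR = 12, DATE = 91, TIME = 92, TIMESTAMP = 93, BIGINT = -5,
BIT = -7.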

On Mon, Sep 23, 2019, 8:10 PM Wesley C. Dias de Oliveira <
wcdolive...@gmail.com> wrote:

> Hi, KhajaAsmath.
>
> I've searched it too but found nothing.
>
> I think it's easier to observe the type based on your query, "mapping"
> the arguments to the query fields.
>
> Em seg, 23 de set de 2019 às 20:02, KhajaAsmath Mohammed <
> mdkhajaasm...@gmail.com> escreveu:
>
>> Hi,
>>
>> I have an existing flow and am trying to understand what sql.args.1.type is.
>>
>> sql.args.1.type =11
>> sql.args.1.type=13
>>
>> I understood that they are matched to column data type for
>> sql.args.1.value.
>>
>> What is the data type for 11 and 13? May I know all the options
>> available for String, Integer, double, date, etc.?
>>
>> Thanks,
>> Asmath
>>
>
>
> --
> Grato,
> Wesley C. Dias de Oliveira.
>
> Linux User nº 576838.
>


Re: Weird behavior w/ HWX schema registry new builds

2019-04-15 Thread Juan Pablo Gardella
Yes, it works after that. We are stuck with NiFi 1.5.0 (it uses 0.3.0), so
after upgrading the client library (to 0.5.1) it works. Probably you are
facing a similar problem. Notice you also need NIFI-4893
<https://issues.apache.org/jira/browse/NIFI-4893> applied for some use
cases.

Juan

On Mon, 15 Apr 2019 at 13:46, Michael Pearce  wrote:

> Sounds like a client compatibility issue, maybe raise a bug on HWX schema
> registry, pretty bad to be breaking api calls.
>
>
>
> *From: *Juan Pablo Gardella 
> *Reply-To: *"users@nifi.apache.org" 
> *Date: *Monday, April 15, 2019 at 5:44 PM
> *To: *"users@nifi.apache.org" 
> *Subject: *Re: Weird behavior w/ HWX schema registry new builds
>
>
>
> I had to rebuild it using a newer version of HWX client.
>
>
>
> On Mon, 15 Apr 2019 at 13:24, Mike Thomsen  wrote:
>
> We deployed 0.7.0 and started running into issues with it saying it could
> not find a schema. Really weird since we can see the schema in the UI and
> get the metadata about it with the same call that returns a failure message.
>
>
>
> Anyone seen similar behavior or have any idea what might be happening?
> Just trying to figure out if there are any known issues before I start
> poking around at our code and the registry's code.
>
>
>
> Thanks,
>
>
>
> Mike
>
>


Re: Weird behavior w/ HWX schema registry new builds

2019-04-15 Thread Juan Pablo Gardella
I had to rebuild it using a newer version of HWX client.

On Mon, 15 Apr 2019 at 13:24, Mike Thomsen  wrote:

> We deployed 0.7.0 and started running into issues with it saying it could
> not find a schema. Really weird since we can see the schema in the UI and
> get the metadata about it with the same call that returns a failure message.
>
> Anyone seen similar behavior or have any idea what might be happening?
> Just trying to figure out if there are any known issues before I start
> poking around at our code and the registry's code.
>
> Thanks,
>
> Mike
>


Re: [ANNOUNCE] Apache NiFi 1.9.1 release.

2019-03-18 Thread Juan Pablo Gardella
Hello,

It seems the Docker image was built incorrectly:

root@c7b796dde1a8:/opt/nifi/nifi-current# ls lib/*.nar|head
lib/nifi-ambari-nar-1.9.0.nar
lib/nifi-amqp-nar-1.9.0.nar
lib/nifi-avro-nar-1.9.0.nar
lib/nifi-aws-nar-1.9.0.nar
lib/nifi-aws-service-api-nar-1.9.0.nar
lib/nifi-azure-nar-1.9.0.nar
lib/nifi-beats-nar-1.9.0.nar
lib/nifi-cassandra-nar-1.9.0.nar
lib/nifi-cassandra-services-api-nar-1.9.0.nar
lib/nifi-cassandra-services-nar-1.9.0.nar
root@c7b796dde1a8:/opt/nifi/nifi-current#

The NAR files are 1.9.0.

Juan

On Mon, 18 Mar 2019 at 10:07 Joe Witt  wrote:

> Hello
>
> The Apache NiFi team would like to announce the release of Apache NiFi
> 1.9.1.
>
> Apache NiFi is an easy to use, powerful, and reliable system to process
> and distribute
> data.  Apache NiFi was made for dataflow.  It supports highly configurable
> directed graphs
> of data routing, transformation, and system mediation logic.
>
> More details on Apache NiFi can be found here:
> https://nifi.apache.org/
>
> The release artifacts can be downloaded from here:
> https://nifi.apache.org/download.html
>
> Maven artifacts have been made available here:
>
> https://repository.apache.org/content/repositories/releases/org/apache/nifi/
>
> Issues closed/resolved for this list can be found here:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12316020&version=12345163
>
> Release note highlights can be found here:
>
> https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version1.9.1
>
> Thank you
> The Apache NiFi team
>


Re: LDAP, Groups and nifi users

2019-03-11 Thread Juan Pablo Gardella
Did you delete users.xml and authorizations.xml files?

On Mon, 11 Mar 2019 at 09:16 DEHAY Aurelien 
wrote:

> Hello.
>
>
>
> I’m struggling to configure a correct authorizers.xml to achieve the
> following. I’m using NiFi 1.8 and 1.9 (fresh install) in secure mode +
> LDAP auth.
>
>
>
> -  I have an LDAP server (RH Identity Manager) where users/groups
> are stored.
>
> -  I’d like to be able to grant rights on Nifi based on user group
>
> -  I’d like to be able to see users and their associated rights
> in nifi menu => users (not working, see screenshot bellow)
>
>
>
>
>
> I don’t know where my mistake is; I’ve tried a lot of configurations in
> ldap-user-group-provider, and I’m not even really sure the problem is here.
> Authentication itself is working, and I can assign policies to users, but
> nothing works with groups.
>
>
>
> My configurations are
> https://gist.github.com/zorel/6934e7e6c1ae9e951ab13a1ce1db2330
>
>
>
> Thanks for any pointer.
>
>
>
>
>
>
> *Aurélien DEHAY *Big Data Architect
> +33 616 815 441
>
> aurelien.de...@faurecia.com
>
> 23/27 avenue des Champs Pierreux
> 
> 92735 Nanterre Cedex – France
>
>


Re: nifi 1.9.0 on dockerhub

2019-02-25 Thread Juan Pablo Gardella
Probably README

[image: image.png]


On Mon, 25 Feb 2019 at 10:45 Pierre Villard 
wrote:

> Hi Harleen,
>
> What do you mean?
> There is a 1.9.0 Docker image available here (it's the latest) -
> https://hub.docker.com/r/apache/nifi/tags
>
> Pierre
>
> Le lun. 25 févr. 2019 à 13:02, harleen mann  a
> écrit :
>
>> Hello there,
>>
>> Is there anyone working on the nifi 1.9.0 docker image? If not, I am
>> happy to work on it and submit a PR on github. Is that the process?
>>
>> Regards
>> Harleen
>>
>>
>>
>


Re: [EXT] ReplaceText cannot consume messages if Regex does not match

2018-10-26 Thread Juan Pablo Gardella
Patch available at https://issues.apache.org/jira/browse/NIFI-5761

On Thu, 18 Oct 2018 at 13:33 Juan Pablo Gardella <
gardellajuanpa...@gmail.com> wrote:

> Exactly, I know it is not an issue in the ReplaceText code itself, but it
> happens inside it. If we are using ReplaceText in multiple places, it
> increases the flow design complexity. We would need to evaluate all
> expressions before sending to the processor, to be sure they will not fail
> in the ReplaceText processor.
>
> Notice this is impossible if you have to process content dynamically. I
> would be happy to file a ticket and the patch, as I mentioned.
>
> Juan
>
>
>
> On Thu, 18 Oct 2018 at 13:18 Shawn Weeks 
> wrote:
>
>> I understand the issue now, I’m not sure that a failure of ReplaceText is
>> the best place to catch this though.  The reason I’m not sure it’s the best
>> place is what happens if there are multiple failures because you had
>> multiple expressions, just having them all routed to the same failure
>> wouldn’t help you make decisions on what to do with a single attribute.
>> Perhaps a better solution would be to use a RouteOnAttribute to check if
>> the attributes match a certain pattern before sending them to ReplaceText.
>> A possible expression could be
>> “${actualSettlementDate:matches('[0-9]{2}/[0-9]{2}/[0-9]{4}')}” however
>> that would not catch things that look like dates but aren’t valid.
>>
>>
>>
>> Thanks
>>
>> Shawn Weeks
>>
>>
>>
>> *From:* Juan Pablo Gardella 
>>
>> *Sent:* Thursday, October 18, 2018 11:03 AM
>>
>>
>> *To:* users@nifi.apache.org
>> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
>> not match
>>
>>
>>
>> At *search value*:(?s)(^.*$)
>>
>>
>>
>> At *Replacement value*:
>>
>>
>> *
>> ${actualSettlementDate:toDate('MM/dd/yyyy'):format("yyyy-MM-dd'T'00:00:00.000")}*
>>
>>
>>
>> The actualSettlementDate is a flowfile attribute. The problem is the
>> replacement value is evaluated inside the processor and the *toDate *method
>> fails.
>>
>>
>>
>> Hope it's clear now.
>>
>>
>>
>>
>>
>> On Thu, 18 Oct 2018 at 12:51 Shawn Weeks 
>> wrote:
>>
>> I’m still trying to understand your actual issue, can your provide a
>> screenshot of the ReplaceText config like the attached, I need to see
>> exactly where you’re putting the expression. A template would also be
>> really helpful.
>>
>>
>>
>> Thanks
>>
>> Shawn Weeks
>>
>>
>>
>> *From:* Juan Pablo Gardella 
>>
>> *Sent:* Thursday, October 18, 2018 10:45 AM
>>
>>
>> *To:* users@nifi.apache.org
>> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
>> not match
>>
>>
>>
>> At ReplaceText
>> <https://raw.githubusercontent.com/apache/nifi/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ReplaceText.java>processor
>> we have:
>>
>>
>>
>> [image: image.png]
>>
>> As you can see, only if *StackOverflowError *is raised during the
>> evaluation, the flowfile is send to failure relationship. I would like to
>> update the code to use Exception or NifiExpressionFailedException (if it
>> exits).
>>
>>
>>
>> Juan
>>
>>
>>
>> On Thu, 18 Oct 2018 at 12:33 Shawn Weeks 
>> wrote:
>>
>> What processor are you defining your expression in? I also may be
>> misunderstanding the problem because I don’t see any regular expressions
>> anywhere. Can you create a sample workflow showing your issue so I can take
>> a look at it.
>>
>>
>>
>> Thanks
>>
>> Shawn Weeks
>>
>>
>>
>> *From:* Juan Pablo Gardella 
>> *Sent:* Thursday, October 18, 2018 10:27 AM
>> *To:* users@nifi.apache.org
>> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
>> not match
>>
>>
>>
>> No, it's not a valid date. I would like if it an error happens, I would
>> like to throw the flowfile to failure and continue.
>>
>>
>>
>> On Thu, 18 Oct 2018 at 12:19 Shawn Weeks 
>> wrote:
>>
>> Any expression language syntax has to be correct or

Re: Recommended NiFi Docker volume mappings?

2018-10-25 Thread Juan Pablo Gardella
I suggest being careful when mounting the log directory; in one day it can
fill several gigabytes. If you want to mount logs, adjust the logging
configuration first.
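
For example, NiFi's conf/logback.xml uses standard logback rolling policies,
so you can cap retention on the app log (a sketch; the exact appender layout
varies by NiFi version, so treat the placement as an assumption):

<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d.log</fileNamePattern>
    <!-- keep at most 5 rolled files, and never more than 1GB in total -->
    <maxHistory>5</maxHistory>
    <totalSizeCap>1GB</totalSizeCap>
</rollingPolicy>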

On Thu, 25 Oct 2018 at 10:07 Stephen Greszczyszyn 
wrote:

>
>
> On Thu, 25 Oct 2018 at 12:50, Peter Wilcsinszky <
> peterwilcsins...@gmail.com> wrote:
>
> But even with 1.8 I'll need to declare the host mount directories somehow
> via docker-compose; how will the built docker image on dockerhub know
> where to locally mount the internal ${NIFI_HOME} volumes described below?
>
> VOLUME ${NIFI_LOG_DIR} \
>>>${NIFI_HOME}/conf \
>>>${NIFI_HOME}/database_repository \
>>>${NIFI_HOME}/flowfile_repository \
>>>${NIFI_HOME}/content_repository \
>>>${NIFI_HOME}/provenance_repository \
>>>${NIFI_HOME}/state
>>>
>>
>> Yes you should specify volumes explicitly if you use 1.7.1, but also you
>> should specify an extra separate volume to use for your incoming SFTP data.
>>
>>


Re: Who uses NiFi Cluster in Docker ?

2018-10-19 Thread Juan Pablo Gardella
It will be great!

On Fri, 19 Oct 2018 at 16:13 Michael Moser  wrote:

> I have done exactly what Juan Pablo Gardella suggested in my own Docker
> sandbox, and also an ADDNIFI_* helper function. It would take a lot of
> cleanup and documentation in order to
> contribute, but if there is interest in it, then I'll see what I can do.
>
> -- Mike
>
>
>
> On Fri, Oct 19, 2018 at 1:51 PM Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
>> It would be great to expose properties as NIFI_<property name in
>> nifi.properties>. I see that approach at the
>> https://hub.docker.com/r/wurstmeister/kafka/ kafka docker image.
>>
>> On Fri, 19 Oct 2018 at 11:56 Robert R. Bruno  wrote:
>>
>>> Been running nifi cluster in a on-prem kubernetes cluster with a lot of
>>> success.  We found using local disks volumes helped performance.
>>>
>>> On Fri, Oct 19, 2018, 03:21 Mike Thomsen  wrote:
>>>
>>>> Guillaume,
>>>>
>>>> We also have a patch coming in 1.8 that exposes the clustering settings
>>>> through Docker, so that should make it a lot easier for you to set up a
>>>> test cluster.
>>>>
>>>> On Fri, Oct 19, 2018 at 3:49 AM Asanka Sanjaya 
>>>> wrote:
>>>>
>>>>> Hi Guillaume,
>>>>> I'm using nifi in our production kubernetes cluster on Google cloud
>>>>> for about a year now and didn't run into any trouble. One thing you need 
>>>>> to
>>>>> be aware of is to have a persistent disk attached to your container in
>>>>> kubernetes. Otherwise, when the pod gets restarted you will loose queued
>>>>> flow files.
>>>>>
>>>>> On Thu, Oct 18, 2018 at 9:10 PM PICHARD, Guillaume <
>>>>> guillaume.pich...@sogeti.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>>
>>>>>>
>>>>>> I’m looking for experiences and return on experience in running a
>>>>>> Nifi Cluster in production using docker/kubernetes/mesos. Is it working
>>>>>> well ? Is it stable ? Does it handle well a high workload ?
>>>>>>
>>>>>>
>>>>>>
>>>>>> Thanks for you feedbacks,
>>>>>>
>>>>>> Guillaume.
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> *Thanks,*
>>>>>
>>>>> Asanka Sanjaya Herath
>>>>>
>>>>> Senior Software Engineer | Zone24x7
>>>>>
>>>>


Re: Who uses NiFi Cluster in Docker ?

2018-10-19 Thread Juan Pablo Gardella
It would be great to expose properties as NIFI_<property name in
nifi.properties>. I see that approach at the
https://hub.docker.com/r/wurstmeister/kafka/ kafka docker image.
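
For reference, that image maps an environment variable to a broker property
by dropping the KAFKA_ prefix, lowercasing, and replacing underscores with
dots. A hypothetical NiFi analogue (not something the official image
supports) would be:

NIFI_WEB_HTTP_PORT=9090   ->   nifi.web.http.port=9090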

On Fri, 19 Oct 2018 at 11:56 Robert R. Bruno  wrote:

> Been running nifi cluster in a on-prem kubernetes cluster with a lot of
> success.  We found using local disks volumes helped performance.
>
> On Fri, Oct 19, 2018, 03:21 Mike Thomsen  wrote:
>
>> Guillaume,
>>
>> We also have a patch coming in 1.8 that exposes the clustering settings
>> through Docker, so that should make it a lot easier for you to set up a
>> test cluster.
>>
>> On Fri, Oct 19, 2018 at 3:49 AM Asanka Sanjaya 
>> wrote:
>>
>>> Hi Guillaume,
>>> I'm using nifi in our production kubernetes cluster on Google cloud for
>>> about a year now and didn't run into any trouble. One thing you need to be
>>> aware of is to have a persistent disk attached to your container in
>>> kubernetes. Otherwise, when the pod gets restarted you will loose queued
>>> flow files.
>>>
>>> On Thu, Oct 18, 2018 at 9:10 PM PICHARD, Guillaume <
>>> guillaume.pich...@sogeti.com> wrote:
>>>
 Hi,



 I’m looking for experiences and feedback on running a Nifi
 Cluster in production using docker/kubernetes/mesos. Is it working well?
 Is it stable? Does it handle a high workload well?



 Thanks for you feedbacks,

 Guillaume.



>>>
>>>
>>> --
>>>
>>> *Thanks,*
>>>
>>> Asanka Sanjaya Herath
>>>
>>> Senior Software Engineer | Zone24x7
>>>
>>


Re: [EXT] ReplaceText cannot consume messages if Regex does not match

2018-10-18 Thread Juan Pablo Gardella
Exactly, I know it is not an issue in the ReplaceText code itself, but it
happens inside it. If we are using ReplaceText in multiple places, it
increases the flow design complexity. We would need to evaluate all
expressions before sending to the processor, to be sure they will not fail
in the ReplaceText processor.

Notice this is impossible if you have to process content dynamically. I
would be happy to file a ticket and the patch, as I mentioned.

Juan



On Thu, 18 Oct 2018 at 13:18 Shawn Weeks  wrote:

> I understand the issue now, I’m not sure that a failure of ReplaceText is
> the best place to catch this though.  The reason I’m not sure it’s the best
> place is what happens if there are multiple failures because you had
> multiple expressions, just having them all routed to the same failure
> wouldn’t help you make decisions on what to do with a single attribute.
> Perhaps a better solution would be to use a RouteOnAttribute to check if
> the attributes match a certain pattern before sending them to ReplaceText.
> A possible expression could be
> “${actualSettlementDate:matches('[0-9]{2}/[0-9]{2}/[0-9]{4}')}” however
> that would not catch things that look like dates but aren’t valid.
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
>
> *Sent:* Thursday, October 18, 2018 11:03 AM
>
>
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> At *search value*:(?s)(^.*$)
>
>
>
> At *Replacement value*:
>
>
> *
> ${actualSettlementDate:toDate('MM/dd/yyyy'):format("yyyy-MM-dd'T'00:00:00.000")}*
>
>
>
> The actualSettlementDate is a flowfile attribute. The problem is the
> replacement value is evaluated inside the processor and the *toDate *method
> fails.
>
>
>
> Hope it's clear now.
>
>
>
>
>
> On Thu, 18 Oct 2018 at 12:51 Shawn Weeks 
> wrote:
>
> I’m still trying to understand your actual issue, can your provide a
> screenshot of the ReplaceText config like the attached, I need to see
> exactly where you’re putting the expression. A template would also be
> really helpful.
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
>
> *Sent:* Thursday, October 18, 2018 10:45 AM
>
>
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> At ReplaceText
> <https://raw.githubusercontent.com/apache/nifi/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ReplaceText.java>processor
> we have:
>
>
>
> [image: image.png]
>
> As you can see, only if *StackOverflowError *is raised during the
> evaluation, the flowfile is send to failure relationship. I would like to
> update the code to use Exception or NifiExpressionFailedException (if it
> exits).
>
>
>
> Juan
>
>
>
> On Thu, 18 Oct 2018 at 12:33 Shawn Weeks 
> wrote:
>
> What processor are you defining your expression in? I also may be
> misunderstanding the problem because I don’t see any regular expressions
> anywhere. Can you create a sample workflow showing your issue so I can take
> a look at it.
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
> *Sent:* Thursday, October 18, 2018 10:27 AM
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> No, it's not a valid date. I would like if it an error happens, I would
> like to throw the flowfile to failure and continue.
>
>
>
> On Thu, 18 Oct 2018 at 12:19 Shawn Weeks 
> wrote:
>
> Any expression language syntax has to be correct or the processor won’t
> run. I’m not sure there is any way to work around that except to explicitly
> check that the value you are trying to evaluate is valid. Is the attribute
> “tradeDate” coming from the contents of a flow file or is it defined
> somewhere else. Can you ensure it is a valid date in that format before
> hand?
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
>
> *Sent:* Thursday, October 18, 2018 10:13 AM
>
>
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> Hi, the error is not in the processor itself. It's in the expression used
> against flowfile attributes. For example inside the text, I have:
>
>
>
>
> ${tradeDate:toDate('MM/dd/yyyy'):format("yyyy-MM-dd'T'00:00:00.000")}
>

Re: [EXT] ReplaceText cannot consume messages if Regex does not match

2018-10-18 Thread Juan Pablo Gardella
At *search value*:(?s)(^.*$)

At *Replacement value*:

*${actualSettlementDate:toDate('MM/dd/yyyy'):format("yyyy-MM-dd'T'00:00:00.000")}*

The actualSettlementDate is a flowfile attribute. The problem is the
replacement value is evaluated inside the processor and the *toDate *method
fails.

Hope it's clear now.


On Thu, 18 Oct 2018 at 12:51 Shawn Weeks  wrote:

> I’m still trying to understand your actual issue, can your provide a
> screenshot of the ReplaceText config like the attached, I need to see
> exactly where you’re putting the expression. A template would also be
> really helpful.
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
>
> *Sent:* Thursday, October 18, 2018 10:45 AM
>
>
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> At ReplaceText
> <https://raw.githubusercontent.com/apache/nifi/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ReplaceText.java>processor
> we have:
>
>
>
> [image: image.png]
>
> As you can see, only if *StackOverflowError *is raised during the
> evaluation, the flowfile is send to failure relationship. I would like to
> update the code to use Exception or NifiExpressionFailedException (if it
> exits).
>
>
>
> Juan
>
>
>
> On Thu, 18 Oct 2018 at 12:33 Shawn Weeks 
> wrote:
>
> What processor are you defining your expression in? I also may be
> misunderstanding the problem because I don’t see any regular expressions
> anywhere. Can you create a sample workflow showing your issue so I can take
> a look at it.
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
> *Sent:* Thursday, October 18, 2018 10:27 AM
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> No, it's not a valid date. I would like if it an error happens, I would
> like to throw the flowfile to failure and continue.
>
>
>
> On Thu, 18 Oct 2018 at 12:19 Shawn Weeks 
> wrote:
>
> Any expression language syntax has to be correct or the processor won’t
> run. I’m not sure there is any way to work around that except to explicitly
> check that the value you are trying to evaluate is valid. Is the attribute
> “tradeDate” coming from the contents of a flow file or is it defined
> somewhere else. Can you ensure it is a valid date in that format before
> hand?
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
>
> *Sent:* Thursday, October 18, 2018 10:13 AM
>
>
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> Hi, the error is not in the processor itself. It's in the expression used
> against flowfile attributes. For example inside the text, I have:
>
>
>
>
> ${tradeDate:toDate('MM/dd/yyyy'):format("yyyy-MM-dd'T'00:00:00.000")}
>
> And that is the root issue. If it's unable to convert it, the flow cannot
> be consumed. How can I evaluate attributes in a non-blocking way?
>
>
>
> Juan
>
>
>
> On Thu, 18 Oct 2018 at 12:07 Shawn Weeks 
> wrote:
>
> Where is your expression? That’s not the entire configuration for that
> processor.
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
> *Sent:* Thursday, October 18, 2018 10:03 AM
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> Configuration:
>
> Replacement Strategy: Always replace
>
> EvaluationMode: Entire text
>
>
>
>
>
> On Thu, 18 Oct 2018 at 12:01 Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
> Hortonworks nifi based on 1.5.0:
>
>
>
> Configuration:
>
> Thanks
>
>
>
> On Thu, 18 Oct 2018 at 11:56 Peter Wicks (pwicks) 
> wrote:
>
> Hi Juan,
>
>
>
> What version of NiFi are you running on?
>
> What mode are you running ReplaceText in, all text or line by line?
>
> Other settings that might be important? What’s your RegEx look like (if
> your able to share).
>
>
>
> --Peter
>
>
>
>
>
> *From:* Juan Pablo Gardella [mailto:gardellajuanpa...@gmail.com]
> *Sent:* Thursday, October 18, 2018 8:53 AM
> *To:* users@nifi.apache.org
> *Subject:* [EXT] ReplaceText cannot consume messages if Regex does not
> match
>
>
>
> Hi all,
>
>
>
> I'm seeing that ReplaceText is not able to consume messages that does not
> match regex. It keeps all the messages in the input queue instead of
> sending them to failure relationship. Is this the intended behavior or I
> have to file a ticket in order to be fixed? In that way, the processor is
> not able to process bad messages and converts in the bottleneck of a flow
>
>
>
> Juan
>
>


Re: [EXT] ReplaceText cannot consume messages if Regex does not match

2018-10-18 Thread Juan Pablo Gardella
At ReplaceText
<https://raw.githubusercontent.com/apache/nifi/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ReplaceText.java>processor
we have:

[image: image.png]
As you can see, only if *StackOverflowError* is raised during the
evaluation is the flowfile sent to the failure relationship. I would like
to update the code to use Exception or NifiExpressionFailedException (if it
exists).
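
The change would be along these lines (a sketch against the code shown
above, not a tested patch; REPLACEMENT_VALUE and REL_FAILURE follow the
processor's existing names, the rest is illustrative):

try {
    updatedValue = context.getProperty(REPLACEMENT_VALUE)
            .evaluateAttributeExpressions(flowFile).getValue();
} catch (final StackOverflowError | Exception e) {
    // previously only StackOverflowError was caught; widening the catch lets
    // an invalid toDate()/format() route the flowfile to failure instead of
    // blocking the input queue
    logger.warn("Transferring {} to failure; the replacement value could not be evaluated", flowFile, e);
    session.transfer(flowFile, REL_FAILURE);
    return;
}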

Juan

On Thu, 18 Oct 2018 at 12:33 Shawn Weeks  wrote:

> What processor are you defining your expression in? I also may be
> misunderstanding the problem because I don’t see any regular expressions
> anywhere. Can you create a sample workflow showing your issue so I can take
> a look at it.
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
> *Sent:* Thursday, October 18, 2018 10:27 AM
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> No, it's not a valid date. I would like if it an error happens, I would
> like to throw the flowfile to failure and continue.
>
>
>
> On Thu, 18 Oct 2018 at 12:19 Shawn Weeks 
> wrote:
>
> Any expression language syntax has to be correct or the processor won’t
> run. I’m not sure there is any way to work around that except to explicitly
> check that the value you are trying to evaluate is valid. Is the attribute
> “tradeDate” coming from the contents of a flow file or is it defined
> somewhere else. Can you ensure it is a valid date in that format before
> hand?
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
>
> *Sent:* Thursday, October 18, 2018 10:13 AM
>
>
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> Hi, the error is not in the processor itself. It's in the expression used
> against flowfile attributes. For example inside the text, I have:
>
>
>
>
> ${tradeDate:toDate('MM/dd/yyyy'):format("yyyy-MM-dd'T'00:00:00.000")}
>
> And that is the root issue. If it's unable to convert it, the flow cannot
> be consumed. How can I evaluate attributes in a non-blocking way?
>
>
>
> Juan
>
>
>
> On Thu, 18 Oct 2018 at 12:07 Shawn Weeks 
> wrote:
>
> Where is your expression? That’s not the entire configuration for that
> processor.
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
> *Sent:* Thursday, October 18, 2018 10:03 AM
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> Configuration:
>
> Replacement Strategy: Always replace
>
> EvaluationMode: Entire text
>
>
>
>
>
> On Thu, 18 Oct 2018 at 12:01 Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
> Hortonworks nifi based on 1.5.0:
>
>
>
> Configuration:
>
> Thanks
>
>
>
> On Thu, 18 Oct 2018 at 11:56 Peter Wicks (pwicks) 
> wrote:
>
> Hi Juan,
>
>
>
> What version of NiFi are you running on?
>
> What mode are you running ReplaceText in, all text or line by line?
>
> Other settings that might be important? What’s your RegEx look like (if
> your able to share).
>
>
>
> --Peter
>
>
>
>
>
> *From:* Juan Pablo Gardella [mailto:gardellajuanpa...@gmail.com]
> *Sent:* Thursday, October 18, 2018 8:53 AM
> *To:* users@nifi.apache.org
> *Subject:* [EXT] ReplaceText cannot consume messages if Regex does not
> match
>
>
>
> Hi all,
>
>
>
> I'm seeing that ReplaceText is not able to consume messages that does not
> match regex. It keeps all the messages in the input queue instead of
> sending them to failure relationship. Is this the intended behavior or I
> have to file a ticket in order to be fixed? In that way, the processor is
> not able to process bad messages and converts in the bottleneck of a flow
>
>
>
> Juan
>
>


Re: [EXT] ReplaceText cannot consume messages if Regex does not match

2018-10-18 Thread Juan Pablo Gardella
No, it's not a valid date. What I would like is that, if an error happens,
the flowfile is routed to failure and processing continues.

On Thu, 18 Oct 2018 at 12:19 Shawn Weeks  wrote:

> Any expression language syntax has to be correct or the processor won’t
> run. I’m not sure there is any way to work around that except to explicitly
> check that the value you are trying to evaluate is valid. Is the attribute
> “tradeDate” coming from the contents of a flow file or is it defined
> somewhere else. Can you ensure it is a valid date in that format before
> hand?
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
>
> *Sent:* Thursday, October 18, 2018 10:13 AM
>
>
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> Hi, the error is not in the processor itself. It's in the expression used
> against flowfile attributes. For example inside the text, I have:
>
>
>
>
> ${tradeDate:toDate('MM/dd/yyyy'):format("yyyy-MM-dd'T'00:00:00.000")}
>
> And that is the root issue. If it's unable to convert it, the flow cannot
> be consumed. How can I evaluate attributes in a non-blocking way?
>
>
>
> Juan
>
>
>
> On Thu, 18 Oct 2018 at 12:07 Shawn Weeks 
> wrote:
>
> Where is your expression? That’s not the entire configuration for that
> processor.
>
>
>
> Thanks
>
> Shawn Weeks
>
>
>
> *From:* Juan Pablo Gardella 
> *Sent:* Thursday, October 18, 2018 10:03 AM
> *To:* users@nifi.apache.org
> *Subject:* Re: [EXT] ReplaceText cannot consume messages if Regex does
> not match
>
>
>
> Configuration:
>
> Replacement Strategy: Always replace
>
> EvaluationMode: Entire text
>
>
>
>
>
> On Thu, 18 Oct 2018 at 12:01 Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
> Hortonworks nifi based on 1.5.0:
>
> [image: image.png]
>
>
>
> Configuration:
>
> Thanks
>
>
>
> On Thu, 18 Oct 2018 at 11:56 Peter Wicks (pwicks) 
> wrote:
>
> Hi Juan,
>
>
>
> What version of NiFi are you running on?
>
> What mode are you running ReplaceText in, all text or line by line?
>
> Other settings that might be important? What’s your RegEx look like (if
> your able to share).
>
>
>
> --Peter
>
>
>
>
>
> *From:* Juan Pablo Gardella [mailto:gardellajuanpa...@gmail.com]
> *Sent:* Thursday, October 18, 2018 8:53 AM
> *To:* users@nifi.apache.org
> *Subject:* [EXT] ReplaceText cannot consume messages if Regex does not
> match
>
>
>
> Hi all,
>
>
>
> I'm seeing that ReplaceText is not able to consume messages that does not
> match regex. It keeps all the messages in the input queue instead of
> sending them to failure relationship. Is this the intended behavior or I
> have to file a ticket in order to be fixed? In that way, the processor is
> not able to process bad messages and converts in the bottleneck of a flow
>
>
>
> Juan
>
>


ReplaceText cannot consume messages if Regex does not match

2018-10-18 Thread Juan Pablo Gardella
Hi all,

I'm seeing that ReplaceText is not able to consume messages that do not
match the regex. It keeps all the messages in the input queue instead of
sending them to the failure relationship. Is this the intended behavior, or
do I have to file a ticket in order to get it fixed? As it is, the processor
is not able to process bad messages and becomes the bottleneck of a flow.

Juan


Re: Nifi with docker and LDAP

2018-09-24 Thread Juan Pablo Gardella
I will check, thanks. But what if I would like to run it as a cluster? I
cannot follow that approach then, right?
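
For reference, the static-hostname suggestion quoted below looks like this
in docker-compose (a sketch; in a multi-node cluster each node's service
would need its own hostname):

  nifi:
    image: apache/nifi
    hostname: nifi
    ports:
      - 8443:8443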

On Mon, 24 Sep 2018 at 12:59 Mike Thomsen  wrote:

> hostname: nifi
>
> Under the nifi declaration
> On Mon, Sep 24, 2018 at 11:07 AM Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
>> How?
>>
>> On Mon, 24 Sep 2018 at 11:31 David Gallagher <
>> dgallag...@cleverdevices.com> wrote:
>>
>>> Hi – not sure if it helps, but you can set a static hostname in your
>>> docker-compose.
>>>
>>>
>>>
>>> Thanks,
>>>
>>>
>>> Dave
>>>
>>>
>>>
>>> *From:* Juan Pablo Gardella 
>>> *Sent:* Sunday, September 23, 2018 3:43 PM
>>> *To:* users@nifi.apache.org
>>> *Subject:* Nifi with docker and LDAP
>>>
>>>
>>>
>>> Hi all,
>>>
>>>
>>>
>>> I'm using Nifi with docker and it's secure.
>>>
>>>
>>>
>>> I'm facing an issue when I bounce my LAPTOP (I'm running it locally).
>>> After bouncing my lap, I cannot access to it (the container is running).
>>> The only workaround it's restart the service. I suppose it's something
>>> related to the host name. Any thoughts?
>>>
>>>
>>>
>>> Configuration:
>>>
>>>nifi:
>>> build:
>>>   context: .
>>>   dockerfile: Dockerfile-nifi
>>> image: myimageid
>>> container_name: nifi-d
>>> restart: always
>>> ports:
>>>   - 8443:8443
>>> depends_on:
>>>   - ldap
>>> environment:
>>>   AUTH: ldap
>>>
>>> (other variables)
>>>
>>>
>>>
>>> I think maybe it's related to the hostname. It's changed after bounce
>>> maybe.
>>>
>>>
>>>
>>> Juan
>>>
>>


Re: Nifi with docker and LDAP

2018-09-24 Thread Juan Pablo Gardella
How?

On Mon, 24 Sep 2018 at 11:31 David Gallagher 
wrote:

> Hi – not sure if it helps, but you can set a static hostname in your
> docker-compose.
>
>
>
> Thanks,
>
>
> Dave
>
>
>
> *From:* Juan Pablo Gardella 
> *Sent:* Sunday, September 23, 2018 3:43 PM
> *To:* users@nifi.apache.org
> *Subject:* Nifi with docker and LDAP
>
>
>
> Hi all,
>
>
>
> I'm using Nifi with docker and it's secure.
>
>
>
> I'm facing an issue when I bounce my LAPTOP (I'm running it locally).
> After bouncing my lap, I cannot access to it (the container is running).
> The only workaround it's restart the service. I suppose it's something
> related to the host name. Any thoughts?
>
>
>
> Configuration:
>
>nifi:
> build:
>   context: .
>   dockerfile: Dockerfile-nifi
> image: myimageid
> container_name: nifi-d
> restart: always
> ports:
>   - 8443:8443
> depends_on:
>   - ldap
> environment:
>   AUTH: ldap
>
> (other variables)
>
>
>
> I think maybe it's related to the hostname. It's changed after bounce
> maybe.
>
>
>
> Juan
>


Nifi with docker and LDAP

2018-09-23 Thread Juan Pablo Gardella
Hi all,

I'm using Nifi with docker and it's secure.

I'm facing an issue when I bounce my laptop (I'm running it locally). After
bouncing my laptop, I cannot access it (the container is running). The only
workaround is to restart the service. I suppose it's something related to
the host name. Any thoughts?

Configuration:
   nifi:
build:
  context: .
  dockerfile: Dockerfile-nifi
image: myimageid
container_name: nifi-d
restart: always
ports:
  - 8443:8443
depends_on:
  - ldap
environment:
  AUTH: ldap
(other variables)

I think maybe it's related to the hostname; it may change after a bounce.

Juan


Re: Anyone using HashAttribute?

2018-09-05 Thread Juan Pablo Gardella
I vote to keep for backward compatibility.

On Wed, 5 Sep 2018 at 13:33 Brandon DeVries  wrote:

> Mike,
>
> We don't use it with Elasticsearch.
>
> Fundamentally, it feels like the problem is that this change would break
> backwards compatibility, which would require a major version bump.  So, in
> lieu of that, the options are probably 1) use a different name or 2) put
> the new functionality in HashContent as something that can be toggled on,
> but leaving the current behavior as the default.
>
> Brandon
>
> On Wed, Sep 5, 2018 at 12:21 PM Mike Thomsen 
> wrote:
>
>> Brandon,
>>
>> What processor do you use it for in that capacity? If it's an
>> ElasticSearch one we can look into ways to bring this functionality into
>> that bundle so Andy can refactor.
>>
>> Thanks,
>>
>> Mike
>>
>> On Wed, Sep 5, 2018 at 12:07 PM Brandon DeVries  wrote:
>>
>>> Andy,
>>>
>>> We use it pretty much how Joe is... to create a unique composite key.
>>> It seems as though that shouldn't be a difficult functionality to add.
>>> Possibly, you could flip your current dynamic key/value properties.  Make
>>> the key the name of the attribute you want to create, and the value is the
>>> attribute / attributes (newline delimited) that you want to include in the
>>> hash.  This does mean you can't use "${algorithm.name}" in the name of
>>> the created hash attribute, but I don't know if you'd consider that a big
>>> loss.  In any case, I'm sure there are other solutions, this is just a
>>> thought.
>>>
>>> Brandon
>>>
>>> On Wed, Sep 5, 2018 at 10:27 AM Joe Percivall 
>>> wrote:
>>>
 Hey Andy,

 We're currently using the HashAttribute processor. The use-case is that
 we have various events that come in but sometimes those events are just
 updates of previous ones. We store everything in ElasticSearch. So for
 certain events, we'll calculate a hash based on a couple of attributes in
 order to have a composite unique key to upsert as the ES _id. This allows
 us to easily just insert/update events that are the same (as determined by
 the hashed composite key).

 As for the configuration of the processors, we're essentially just
 specifying exact attributes as dynamic properties of HashAttribute. Then
 passing that FF to PutElasticSearchHttp with the resulting attribute from
 HashAttribute as the "Identifier Attribute".

 Joe

 On Mon, Sep 3, 2018 at 9:52 PM Andy LoPresto 
 wrote:

> I opened PRs for 2980 [1] and 2983 [2] which add more performant,
> consistent, and full-featured processors to calculate cryptographic hashes
> of flowfile content and flowfile attributes. I would like to deprecate and
> drop support for HashAttribute, as it performs a convoluted calculation
> that was probably useful in an old scenario, but doesn’t “hash attributes”
> like the name implies. As it blocks the new implementation from using that
> name and following our naming convention, I am hoping to find anyone still
> using the old implementation and understand their use case. Thanks for 
> your
> help.
>
> [1] https://github.com/apache/nifi/pull/2980
> [2] https://github.com/apache/nifi/pull/2983
>
>
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
>

 --
 *Joe Percivall*
 linkedin.com/in/Percivall
 e: jperciv...@apache.com

>>>
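As an illustration of the composite-key idea Joe and Brandon describe, here
is a minimal Java sketch (the attribute names are hypothetical, and
HashAttribute's actual internal algorithm may differ):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class CompositeKeyHash {
    public static void main(String[] args) throws Exception {
        // Hypothetical attribute values that together identify an event
        String source = "sensor-7";
        String eventType = "update";
        String entityId = "12345";

        // Join with a delimiter so ("ab","c") and ("a","bc") hash differently
        String composite = String.join("|", source, eventType, entityId);

        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(composite.getBytes(StandardCharsets.UTF_8));

        // Hex-encode the digest; this string would serve as the ES _id
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(hex);
    }
}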


Re: Strange behavior with PutSQL

2018-08-15 Thread Juan Pablo Gardella
Probably the connection pool is exhausted.
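If so, the DBCPConnectionPool controller service is where to look; a rough
sketch (the values shown are the defaults, as far as I recall):

  Max Wait Time         : 500 millis  (how long PutSQL blocks waiting for a
                                       free connection)
  Max Total Connections : 8           (raise this if many concurrent tasks
                                       share the pool)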

On Wed, 15 Aug 2018 at 11:44 Lone Pine Account  wrote:

> I have a simple flow that takes the output of a ReplaceText processor and
> sends it to PutSQL.
>
> This has been working in the past with a "toy" configuration.  Now that
> I'm testing it on a larger input set, it is not working but I can't figure
> out why.
>
> Attached is my setup.  The ReplaceText processor fills up the success
> queue, and rather than processing these it appears that the PutSQL just
> keeps filling up its "In" port with more copies of them - the number of
> "In" for PutSQL keeps going up, but nothing comes off the success queue.
>
> So I'm trying to debug:
> - I've attached LogAttribute processors to the ReplaceText processor, and
> verified that the SQL commands are correct.
> - I've attached LogAttribute processors to all of the PutSQL
> relationships, and none receive output.
> - I've looked through the app logs and there is nothing coming up for my
> PutSQL processor.
>
> Where can I look to figure out why the PutSQL processor is no longer
> writing to the database?
>
> Thanks
>


Re: Simple CSV to Parquet without Hadoop

2018-08-14 Thread Juan Pablo Gardella
It's a warning. You can ignore that.
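That said, the last line ("No FileSystem for scheme") is a real failure and
usually means the core-site.xml was not picked up, so double-check the
processor's Hadoop Configuration Resources property. To silence the winutils
errors on Windows, a common workaround (paths hypothetical) is:

  1. Obtain a winutils.exe matching your Hadoop client version and place it
     at C:\hadoop\bin\winutils.exe
  2. Set the environment variable HADOOP_HOME=C:\hadoop
  3. Restart Nifi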

On Tue, 14 Aug 2018 at 18:53 Bryan Bende  wrote:

> Scott,
>
> Sorry I did not realize the Hadoop client would be looking for this
> winutils.exe when running on Windows.
>
> On linux and MacOS you don't need anything external installed outside
> of NiFi so I wasn't expecting this.
>
> Not sure if there is any other good option here regarding Parquet.
>
> Thanks,
>
> Bryan
>
>
> On Tue, Aug 14, 2018 at 5:31 PM, scott  wrote:
> > Hi Bryan,
> > I'm fine if I have to trick the API, but don't I still need Hadoop
> installed
> > somewhere? After creating the core-site.xml as you described, I get the
> > following errors:
> >
> > Failed to locate the winutils binary in the hadoop binary path
> > IOException: Could not locate executable null\bin\winutils.exe in the
> Hadoop
> > binaries
> > Unable to load native-hadoop library for your platform... using
> builtin-java
> > classes where applicable
> > Failed to write due to java.io.IOException: No FileSystem for scheme
> >
> > BTW, I'm using NiFi version 1.5
> >
> > Thanks,
> > Scott
> >
> >
> > On Tue, Aug 14, 2018 at 12:44 PM, Bryan Bende  wrote:
> >>
> >> Scott,
> >>
> >> Unfortunately the Parquet API itself is tied to the Hadoop Filesystem
> >> object which is why NiFi can't read and write Parquet directly to flow
> >> files (i.e. they don't provide a way to read/write to/from Java input
> >> and output streams).
> >>
> >> The best you can do is trick the Hadoop API into using the local
> >> file-system by creating a core-site.xml with the following:
> >>
> >> 
> >> 
> >> fs.defaultFS
> >> file:///
> >> 
> >> 
> >>
> >> That will make PutParquet or FetchParquet work with your local
> >> file-system.
> >>
> >> Thanks,
> >>
> >> Bryan
> >>
> >>
> >> On Tue, Aug 14, 2018 at 3:22 PM, scott  wrote:
> >> > Hello NiFi community,
> >> > Is there a simple way to read CSV files and write them out as Parquet
> >> > files
> >> > without Hadoop? I run NiFi on Windows and don't have access to a
> Hadoop
> >> > environment. I'm trying to write the output of my ETL in a compressed
> >> > and
> >> > still query-able format. Is there something I should be using instead
> of
> >> > Parquet?
> >> >
> >> > Thanks for your time,
> >> > Scott
> >
> >
>


Re: Weird behavior with the latest rev of MySQL and NiFi 1.7.1

2018-08-13 Thread Juan Pablo Gardella
Did you check
https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html ?
I had a similar issue with Postgres. MySQL identifiers can be case sensitive.
Try all lower or all upper case to test whether that's the fix.
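A quick way to check, assuming a hypothetical table name MyTable:

  -- How this server folds identifiers (0 = names stored and compared
  -- case-sensitively on disk)
  SHOW VARIABLES LIKE 'lower_case_table_names';

  -- What the server actually has in information_schema
  SELECT table_schema, table_name
  FROM information_schema.tables
  WHERE LOWER(table_name) = LOWER('MyTable');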

On Mon, 13 Aug 2018 at 13:26 Rick  wrote:

> Hi Everybody!
>
> I am seeing some really unusual and unexpected MySQL error messages
> written into the NiFi log file by the latest MySQL version (5.7.23) when I
> try to load simple records from a CSV file with a very simple
> PutDatabaseRecord application.
>
> So, before I expend a lot of effort having the drains up, I wondered if
> anyone else has seen this sort of behavior with this NiFi / MySQL version
> combo, or alternatively, if y'all have this combo working ok, then I know
> that I need to look for something local on my system.
>
> Stripping my NiFi flow down to the bare minimum necessary to demonstrate
> the issue, it is:  ListFile >> FetchFile >> PutDatabaseRecord, and it is
> the latter that triggers these weird error messages when it attempts to
> INSERT the records into the target SQL table. Shorn of the irrelevant
> verbiage, the specific error that I am now getting in the NiFi log file is
> "PutDatabaseRecord failed to process StandardFlowFIle Due to Unknown Table 
> <*name
> of the table that I am trying to write to*> in information_schema".
>
> Behind the scenes are also a DBCPConnectionPool controller and a
> CSVReader. These are enabled and running; the former is obviously
> connecting to the DB as it can 'see' an information schema in the first
> place (separately, I have checked and the DBConnectionURL is the right
> one), and the latter is clearly working since if I inspect the flowfile
> contents (payload) in the queue before it gets to PutDbRecord then it is
> what I expect (half a dozen simple records that I am using for testing with
> two char strings and an int value in each record). Nothing odd. No weird
> hex / control characters or any such in there anywhere. Just simple ascii
> text values.
>
> This dataflow should work, but it isn't, and I don't understand why nor why
> the odd SQL error message is given. If I use the SQL CLI then I can
> read/write to/change (etc) the table as normal. If I run a bit of equally
> simple Java, the insert works just fine, but with the NiFi, it fails
> repeatably.
>
> So, I admit to being baffled, and therefore any help / suggestions /
> insight etc is welcome.  Given how simple the application causing the
> problem is, I am actually starting to wonder about there being a bug
> somewhere either in NiFi or MySQL, but it's going to be one of those that
> are a bugger to track down (no pun intended!) and so I thought that I would
> ask the community (y'all!) for any comments and help first.
>
> I've been working with SQL (and MySQL) for many years, and NiFi for a fair
> while now, so I have done all the obvious already - changing db and table
> names, creating new NiFi CSVReaders and so forth but they make no
> difference. My dev environment here is also as simple as you can get - a
> single machine running Ubuntu 18.04.1 (the latest LTS version), MySQL
> 5.7.23, and NiFi 1.7.1. There are no clustering, Hadoop, or Kafka type
> things to cause problems anywhere.
>
> All help gratefully received. Regards and thanks to all.
>
>
>
> Rick
>
>


Re: Attributes vs JOLTTransformJSON

2018-07-18 Thread Juan Pablo Gardella
The best docs are the Jolt javadocs. I suggest checking out the code and
reading from there; it also has examples.
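For what it's worth, a possible explanation for the question below: in a
shift spec the left-hand side matches input keys and the right-hand side is
the output path, and Nifi evaluates expression language on the spec first.
So "docId": "${docId}" becomes "docId": "1001", i.e. "shift the input field
docId to the output key 1001", which matches the observed result. To keep
the key name, either use the identity shift "docId": "docId", or inject the
attribute value with a default operation; a sketch (untested, and assuming
your version evaluates expression language in the spec property):

[{
  "operation": "shift",
  "spec": {
    "companyId": "&",
    "companyName": "&"
  }
}, {
  "operation": "default",
  "spec": {
    "docId": "${docId}"
  }
}]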

On Wed, 18 Jul 2018 at 18:41 Jean-Sebastien Vachon 
wrote:

> Hi all,
>
>
>
> I’m using a JOLT transformation at the very end of my flow to filter out
> some attributes that I don’t want to send to ElasticSearch for Indexing. So
> far, it is working great but I’d like to include the value of an attribute
> (docId) into the transformation as well.
>
>
>
> My JOLT specs are:
>
> [{
>
> "operation": "shift",
>
> "spec": {
>
> "companyId": "&",
>
> "companyName": "&",
>
> "s3Key": "&",
>
> "runId": "&",
>
> "urls": "&",
>
> "urlId": "&",
>
> "urlLevel": "&",
>
> "urlAddress": "&",
>
>  "docId": "${docId}"
>
> }
>
> }]
>
>
>
> When I run my flow through this processor, the result is (check the last
> field):
>
>
>
> {
>
>   "companyId" : 1,
>
>   "companyName" : "some company",
>
>   "s3Key" : "1.9fe1cf4d384cd0a4cec3d97f54ae5a8d.json",
>
>   "runId" : 1,
>
>   "urls" : [ {
>
> "url" : "http://www.somecompany.com";,
>
> "id" : 0,
>
> "filter_status" : "ok"
>
>   }, {
>
> "url" : "http://www. somecompany.com/contact",
>
> "id" : 0,
>
> "filter_status" : "ok"
>
>   }, {
>
> "url" : "http://www. somecompany.com/#nav",
>
> "id" : 0,
>
> "filter_status" : "ok"
>
>   }, {
>
> "url" : "http://www. somecompany.com#top",
>
> "id" : 0,
>
> "filter_status" : "ok"
>
>   } ],
>
>   "urlId" : 1,
>
>   "urlLevel" : 0,
>
>   "urlAddress" : "http://www. somecompany.com",
>
>   "1001" : "1001"
>
> }
>
>
>
> I was expecting the last field to read like “docId”: “1001”…
>
> Now, I’m pretty sure this is obvious to someone experienced with JOLT but
> I googled a bit and could not find good documentation about JOLT’s syntax.
>
>
>
> Thanks
>
> --
>
> Jean-Sébastien Vachon
>
> vacho...@gmail.com 
>
> jsvac...@brizodata.com
>
> www.brizodata.com
>
>
>


Re: Unable to create HiveConnectionPool with kerberos.

2018-03-26 Thread Juan Pablo Gardella
Sorry, the issue happens when an HA configuration is used.

On Mon, 26 Mar 2018 at 13:03 Juan Pablo Gardella <
gardellajuanpa...@gmail.com> wrote:

> See https://issues.apache.org/jira/browse/NIFI-2575; the driver does not
> support that. I've put some workarounds in the ticket.
>
> On Mon, 26 Mar 2018 at 13:03  wrote:
>
>> Hi,
>>
>>
>>
>> I am getting the following warning when I use HiveConnection pool with
>> Kerberos :
>>
>>
>>
>> HiveConnectionPool[id=6e60258b-9e00-3bac-85ba-0dac8e22142f] Configuration
>> does not have security enabled, Keytab and Principal will be ignored
>>
>>
>>
>> It also throws the following bulletin in my PutHiveQl processor:
>>
>> PutHiveQL[id=55f4ac1b-ecf9-3db3-b898-7a9d145a5382] 
>> org.apache.nifi.processors.hive.PutHiveQL$$Lambda$663/2042832677@40267000 
>> failed to process due to 
>> org.apache.nifi.processor.exception.ProcessException: 
>> org.apache.commons.dbcp.SQLNestedException: Cannot create 
>> PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
>> jdbc:hive2://**:1/nifi_test1: Peer indicated failure: Unsupported 
>> mechanism type PLAIN); rolling back session: 
>> org.apache.commons.dbcp.SQLNestedException: Cannot create 
>> PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
>> jdbc:hive2://**:1/nifi_test1: Peer indicated failure: Unsupported 
>> mechanism type PLAIN)
>>
>>
>>
>> Hive Configuration Resources:-
>> /etc/hive/conf/hive-site.xml,/etc/hadoop/conf/core-site.xml
>>
>> I have set hive.security.authentication and hadoop.security.authentication
>> to Kerberos.
>>
>>
>>
>> Please let me know if I’m doing anything wrong.
>>
>>
>>
>> Regards,
>>
>> Mohit
>>
>


Re: Unable to create HiveConnectionPool with kerberos.

2018-03-26 Thread Juan Pablo Gardella
See https://issues.apache.org/jira/browse/NIFI-2575; the driver does not
support that. I've put some workarounds in the ticket.

On Mon, 26 Mar 2018 at 13:03  wrote:

> Hi,
>
>
>
> I am getting the following warning when I use HiveConnection pool with
> Kerberos :
>
>
>
> HiveConnectionPool[id=6e60258b-9e00-3bac-85ba-0dac8e22142f] Configuration
> does not have security enabled, Keytab and Principal will be ignored
>
>
>
> It also throws the following bulletin in my PutHiveQl processor:
>
> PutHiveQL[id=55f4ac1b-ecf9-3db3-b898-7a9d145a5382] 
> org.apache.nifi.processors.hive.PutHiveQL$$Lambda$663/2042832677@40267000 
> failed to process due to 
> org.apache.nifi.processor.exception.ProcessException: 
> org.apache.commons.dbcp.SQLNestedException: Cannot create 
> PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
> jdbc:hive2://**:1/nifi_test1: Peer indicated failure: Unsupported 
> mechanism type PLAIN); rolling back session: 
> org.apache.commons.dbcp.SQLNestedException: Cannot create 
> PoolableConnectionFactory (Could not open client transport with JDBC Uri: 
> jdbc:hive2://**:1/nifi_test1: Peer indicated failure: Unsupported 
> mechanism type PLAIN)
>
>
>
> Hive Configuration Resources:-
> /etc/hive/conf/hive-site.xml,/etc/hadoop/conf/core-site.xml
>
> I have set hive.security.authentication and hadoop.security.authentication
> to Kerberos.
>
>
>
> Please let me know if I’m doing anything wrong.
>
>
>
> Regards,
>
> Mohit
>


Re: QueryDatabaseAdapter does not work with Phoenix's TIMESTAMP columns

2018-03-14 Thread Juan Pablo Gardella
Awesome, I will file a JIRA and do something similar for Phoenix.

Thanks

On Wed, 14 Mar 2018 at 15:50 Matt Burgess  wrote:

> Juan,
>
> We've had to do similar things for Oracle [1], so there is precedence,
> please feel free to create a JIRA to fix it, thanks!
>
> Regards,
> Matt
>
> [1] https://issues.apache.org/jira/browse/NIFI-2323
>
>
> On Wed, Mar 14, 2018 at 2:43 PM, Juan Pablo Gardella
>  wrote:
> > Hello team,
> >
> > I'm testing QueryDatabaseAdapter against Phoenix DB and it cannot convert
> > TIMESTAMP. The error is described below:
> >
> >
> https://stackoverflow.com/questions/45989678/convert-varchar-to-timestamp-in-hbase
> >
> > Basically, it's required to use TO_TIMESTAMP(MAX_COLUMN) to make it
> > work. Do you think it's worth creating a JIRA to fix it? Or is there a
> > workaround for this?
> >
> > Juan
>


QueryDatabaseAdapter does not work with Phoenix's TIMESTAMP columns

2018-03-14 Thread Juan Pablo Gardella
Hello team,

I'm testing QueryDatabaseAdapter against Phoenix DB and it cannot convert
TIMESTAMP. The error is described below:

https://stackoverflow.com/questions/45989678/convert-varchar-to-timestamp-in-hbase

Basically, it's required to use TO_TIMESTAMP(MAX_COLUMN) to make it work.
Do you think it's worth creating a JIRA to fix it? Or is there a workaround
for this?

Juan
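For context, the Phoenix-side workaround looks roughly like this (table and
column names are hypothetical, and the date format may need adjusting):

  SELECT * FROM events
  WHERE last_updated > TO_TIMESTAMP('2018-03-01 00:00:00');

i.e. the literal the processor builds for the max-value column would have to
be wrapped in TO_TIMESTAMP(...) for Phoenix to parse it.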


Add Max Rows Per Flow File into ExecuteSQL

2018-03-08 Thread Juan Pablo Gardella
Hello team,

I would like to add "Max Rows Per Flow File" to the ExecuteSQL processor. I
can create a JIRA and spend some time on it, but before doing that I would
like to know if anyone on the team sees a problem with it, or if the omission
is intentional.

I found that option useful in some use cases.

Thanks,
Juan


Re: Cannot convert to Record a valid Avro schema

2018-02-27 Thread Juan Pablo Gardella
Mark,

I've attached a simple project to reproduce the bugs; it fails with the
default set to either 0 or [0]:

Tests in error:
  testIssue1(com.foo.TestIssueDefaultValues): Cannot set the default value
for field [listOfInt] to [0] because that is not a valid value for Data
Type [ARRAY[INT]]
  testIssue2(com.foo.TestIssueDefaultValues): Cannot set the default value
for field [listOfInt] to [[0]] because that is not a valid value for Data
Type [ARRAY[INT]]

So when you said *then it works properly, giving us the default value of a
1-element array with the value 0 as the only element*: at least with Nifi
1.5.0, it does not work.

Regarding:
*But what you expect to happen is for this case to be treated the same as if
the schema had said:*

*"type": {"type": "array", "items": "int" }, "default": []*
>>
>>
*So that if the field is not specified, you get an empty array for the
value.*

*Is that accurate?* -> *YES*. Basically, I think Nifi should mimic Avro (when
it creates a RecordSchema from an Avro schema) in this case in order to
support those scenarios.

Thanks a lot,
Juan
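For anyone following along, a well-formed version of the field for reference
(toy schema; the empty-array default is the unambiguous spelling):

{
  "type": "record",
  "name": "Example",
  "fields": [
    {
      "name": "listOfInt",
      "type": { "type": "array", "items": "int" },
      "default": []
    }
  ]
}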

On Tue, 27 Feb 2018 at 11:25 Mark Payne  wrote:

> Juan,
>
> OK, thanks. I was trying to understand what the intent was with that
> schema. So to make sure that everyone
> is on the same page:
>
> If I use the following field in the schema:
>
> "type": {"type": "array", "items": "int" }, "default": [ 0 ]
>>>
>>>
> Then it works properly, giving us the default value of a 1-element array
> with the value 0 as the only element.
>
> But if I use:
>
> "type": {"type": "array", "items": "int" }, "default": 0
>>>
>>>
> Then currently it throws an Exception because the default value is not an
> array. But what you expect to happen
> is for this case to be treated the same as if the schema had said:
>
> "type": {"type": "array", "items": "int" }, "default": []
>>>
>>>
> So that if the field is not specified, you get an empty array for the
> value.
>
> Is that accurate?
>
> Thanks
> -Mark
>
>
> On Feb 27, 2018, at 9:18 AM, Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
> Thanks Mike, agreed, but Avro does not complain and allows using it. The
> schema is used in production and I cannot change it for now.
>
> On Tue, 27 Feb 2018 at 11:17 Mike Thomsen  wrote:
>
>> That doesn't look like the right way to specify an empty array. This SO
>> example fits about what I'd expect:
>>
>> https://stackoverflow.com/a/42140165/284538
>>
>> So it should be default:[0]
>>
>> On Tue, Feb 27, 2018 at 8:56 AM, Mark Payne  wrote:
>>
>>> Juan,
>>>
>>> So the scenario that you laid out in the NIFI-4893 is not one that I've
>>> personally
>>> encountered. What does it mean exactly to have an Avro schema with an
>>> "array" type
>>> that has a value? In the example that you laid out, the field has:
>>>
>>> "type": {"type": "array", "items": "int" }, "default": 0
>>>
>>>
>>> In this case, what should be the value of this field if it's not
>>> specified? A single-element array with the value of 0?
>>> From looking at the PR that was submitted, it appears to set the
>>> defaultValue to a new (empty) ArrayList.
>>> I would think that maybe it should set defaultValue to
>>>
>>> new Integer[] {0};
>>>
>>> But I am not certain of the semantics here.
>>>
>>> Thanks
>>> -Mark
>>>
>>>
>>> On Feb 27, 2018, at 7:30 AM, Juan Pablo Gardella <
>>> gardellajuanpa...@gmail.com> wrote:
>>>
>>> Hello team,
>>>
>>> I could not fix the issue. I did a patch that solves it, but I believe it
>>> is not the correct solution. Can anyone who knows the Record framework
>>> help me with it?
>>>
>>> Thanks in advance,
>>> Juan
>>>
>>> On Mon, 19 Feb 2018 at 22:33 Juan Pablo Gardella <
>>> gardellajuanpa...@gmail.com> wrote:
>>>
>>>> I saw an issue in a test :(. I will continue looking into the current
>>>> approach.
>>>>
>>>> On Mon, 19 Feb 2018 at 22:23 Juan Pablo Gardella <
>>>> gardellajuanpa...@gmail.com> wrote:
>>>>
>>>>> Hello team,
>>>>>
>>>>> I filed an issue at https://issues.apache.org/jira/browse/NIFI-4893.
>>>>> I discovered it using a complex Avro schema. I've isolated the issue and
>>>>> also did a patch; at least it solves the issue, but I don't know the
>>>>> implications of that solution well.
>>>>>
>>>>> Please let me know what you think. I have another issue related to
>>>>> Avro and Record; I will file it tomorrow.
>>>>>
>>>>> Thanks,
>>>>> Juan
>>>>>
>>>>
>>>
>>
>


Re: Cannot convert to Record a valid Avro schema

2018-02-27 Thread Juan Pablo Gardella
Thanks Mike, agreed, but Avro does not complain and allows using it. The
schema is used in production and I cannot change it for now.

On Tue, 27 Feb 2018 at 11:17 Mike Thomsen  wrote:

> That doesn't look like the right way to specify an empty array. This SO
> example fits about what I'd expect:
>
> https://stackoverflow.com/a/42140165/284538
>
> So it should be default:[0]
>
> On Tue, Feb 27, 2018 at 8:56 AM, Mark Payne  wrote:
>
>> Juan,
>>
>> So the scenario that you laid out in the NIFI-4893 is not one that I've
>> personally
>> encountered. What does it mean exactly to have an Avro schema with an
>> "array" type
>> that has a value? In the example that you laid out, the field has:
>>
>> "type": {"type": "array", "items": "int" }, "default": 0
>>
>>
>> In this case, what should be the value of this field if it's not
>> specified? A single-element array with the value of 0?
>> From looking at the PR that was submitted, it appears to set the
>> defaultValue to a new (empty) ArrayList.
>> I would think that maybe it should set defaultValue to
>>
>> new Integer[] {0};
>>
>> But I am not certain of the semantics here.
>>
>> Thanks
>> -Mark
>>
>>
>> On Feb 27, 2018, at 7:30 AM, Juan Pablo Gardella <
>> gardellajuanpa...@gmail.com> wrote:
>>
>> Hello team,
>>
>> I could not fix the issue. I did a patch that solves it, but I believe it
>> is not the correct solution. Can anyone who knows the Record framework
>> help me with it?
>>
>> Thanks in advance,
>> Juan
>>
>> On Mon, 19 Feb 2018 at 22:33 Juan Pablo Gardella <
>> gardellajuanpa...@gmail.com> wrote:
>>
>>> I saw an issue in a test :(. I will continue looking into the current
>>> approach.
>>>
>>> On Mon, 19 Feb 2018 at 22:23 Juan Pablo Gardella <
>>> gardellajuanpa...@gmail.com> wrote:
>>>
>>>> Hello team,
>>>>
>>>> I filed an issue at https://issues.apache.org/jira/browse/NIFI-4893. I
>>>> discovered it using a complex Avro schema. I've isolated the issue and
>>>> also did a patch; at least it solves the issue, but I don't know the
>>>> implications of that solution well.
>>>>
>>>> Please let me know what you think. I have another issue related to
>>>> Avro and Record; I will file it tomorrow.
>>>>
>>>> Thanks,
>>>> Juan
>>>>
>>>
>>
>


Re: Cannot convert to Record a valid Avro schema

2018-02-27 Thread Juan Pablo Gardella
Hi Mark,

Thank you for taking some time on the issue. Regarding your questions:

* What does it mean exactly to have an Avro schema with an "array" type that
has a value? -> A default value, I mean.
* What should be the value of this field if it's not specified? A
single-element array with the value of 0? -> Do you mean without a default?
It fails with the error: org.apache.avro.AvroRuntimeException: Field
listOfInt type:ARRAY pos:0 not set and has no default value

I agree that the default value is not the best example (I took it from a real
scenario), but Avro does not complain and defaults it to an empty array. I
also added a reproducible scenario with the default value set to [0], and it
also fails.

This issue prevents using ConvertRecord because it is not possible to create
a RecordSchema from that Avro schema. I believe Nifi should mimic Avro in
this case: if the type is an array and the default value is not an array,
default to an empty array.

Thanks a lot; please let me know if you need anything else, I will be glad to
help. I'm interested in getting this issue fixed.

Juan

On Tue, 27 Feb 2018 at 10:56 Mark Payne  wrote:

> Juan,
>
> So the scenario that you laid out in the NIFI-4893 is not one that I've
> personally
> encountered. What does it mean exactly to have an Avro schema with an
> "array" type
> that has a value? In the example that you laid out, the field has:
>
> "type": {"type": "array", "items": "int" }, "default": 0
>
>
> In this case, what should be the value of this field if it's not
> specified? A single-element array with the value of 0?
> From looking at the PR that was submitted, it appears to set the
> defaultValue to a new (empty) ArrayList.
> I would think that maybe it should set defaultValue to
>
> new Integer[] {0};
>
> But I am not certain of the semantics here.
>
> Thanks
> -Mark
>
>
> On Feb 27, 2018, at 7:30 AM, Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
> Hello team,
>
> I could not fix the issue. I did a patch that solves it, but I believe it
> is not the correct solution. Can anyone who knows the Record framework help
> me with it?
>
> Thanks in advance,
> Juan
>
> On Mon, 19 Feb 2018 at 22:33 Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
>> I saw an issue in a test :(. I will continue looking into the current
>> approach.
>>
>> On Mon, 19 Feb 2018 at 22:23 Juan Pablo Gardella <
>> gardellajuanpa...@gmail.com> wrote:
>>
>>> Hello team,
>>>
>>> I filed an issue at https://issues.apache.org/jira/browse/NIFI-4893. I
>>> discovered it using a complex Avro schema. I've isolated the issue and
>>> also did a patch; at least it solves the issue, but I don't know the
>>> implications of that solution well.
>>>
>>> Please let me know what you think. I have another issue related to
>>> Avro and Record; I will file it tomorrow.
>>>
>>> Thanks,
>>> Juan
>>>
>>
>


Re: Cannot convert to Record a valid Avro schema

2018-02-27 Thread Juan Pablo Gardella
Hello team,

I could not fix the issue. I did a patch that solves it, but I believe it is
not the correct solution. Can anyone who knows the Record framework help me
with it?

Thanks in advance,
Juan

On Mon, 19 Feb 2018 at 22:33 Juan Pablo Gardella <
gardellajuanpa...@gmail.com> wrote:

> I saw an issue in a test :(. I will continue looking into the current approach.
>
> On Mon, 19 Feb 2018 at 22:23 Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
>
>> Hello team,
>>
>> I filed an issue at https://issues.apache.org/jira/browse/NIFI-4893. I
>> discovered it using a complex Avro schema. I've isolated the issue and
>> also did a patch; at least it solves the issue, but I don't know the
>> implications of that solution well.
>>
>> Please let me know what you think. I have another issue related to
>> Avro and Record; I will file it tomorrow.
>>
>> Thanks,
>> Juan
>>
>


Re: Cannot convert to Record a valid Avro schema

2018-02-19 Thread Juan Pablo Gardella
I saw an issue in a test :(. I will continue looking into the current approach.

On Mon, 19 Feb 2018 at 22:23 Juan Pablo Gardella <
gardellajuanpa...@gmail.com> wrote:

> Hello team,
>
> I filed an issue at https://issues.apache.org/jira/browse/NIFI-4893. I
> discovered it using a complex Avro schema. I've isolated the issue and also
> did a patch; at least it solves the issue, but I don't know the implications
> of that solution well.
>
> Please let me know what you think. I have another issue related to Avro
> and Record; I will file it tomorrow.
>
> Thanks,
> Juan
>


Cannot convert to Record a valid Avro schema

2018-02-19 Thread Juan Pablo Gardella
Hello team,

I filed an issue at https://issues.apache.org/jira/browse/NIFI-4893. I
discovered it using a complex Avro schema. I've isolated the issue and also
did a patch; at least it solves the issue, but I don't know the implications
of that solution well.

Please let me know what you think. I have another issue related to Avro and
Record; I will file it tomorrow.

Thanks,
Juan


Re: PutParquet fails to convert avro logical decimal type.

2018-02-16 Thread Juan Pablo Gardella
Probably related to https://issues.apache.org/jira/browse/NIFI-4846


On Fri, 16 Feb 2018 at 05:24  wrote:

> I  tried it with Nifi 1.5.0, still I am facing the same issue.
>
>
>
> *From:* mohit.j...@open-insights.co.in [mailto:
> mohit.j...@open-insights.co.in]
> *Sent:* 16 February 2018 12:12
> *To:* users@nifi.apache.org
> *Subject:* RE: PutParquet fails to convert avro logical decimal type.
>
>
>
> Hi Juan,
>
>
>
> I’m using Nifi 1.4.0.
>
>
>
>
>
> *From:* Juan Pablo Gardella [mailto:gardellajuanpa...@gmail.com
> ]
> *Sent:* 16 February 2018 12:10
> *To:* users@nifi.apache.org
> *Subject:* Re: PutParquet fails to convert avro logical decimal type.
>
>
>
> Are you using Nifi 1.5.0? If not, try with it first. There are bugs in
> older versions related to Record/Avro.
>
>
>
> On Fri, 16 Feb 2018 at 02:46  wrote:
>
> Hi all,
>
>
>
> I am using QueryDatabaseTable to extract records from mysql. I have set
> Logical Data Type to true. I am using the PutParquet processor to write to
> HDFS. It is not able to convert the logical decimal type.
>
>
>
> It throws an exception :-
>
> 2018-02-15 17:59:05,189 ERROR [Timer-Driven Process Thread-10]
> o.a.nifi.processors.parquet.PutParquet
> PutParquet[id=01611011-e4a8-106a-f933-eb66d923cfd1] Failed to write due to
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException:
> Cannot convert value 1234567.2 of type class java.lang.Double because no
> compatible types exist in the UNION for field dectype: {}
>
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException:
> Cannot convert value 1234567.2 of type class java.lang.Double because no
> compatible types exist in the UNION for field dectype
>
>at
> org.apache.nifi.avro.AvroTypeUtil.convertUnionFieldValue(AvroTypeUtil.java:667)
>
>at
> org.apache.nifi.avro.AvroTypeUtil.convertToAvroObject(AvroTypeUtil.java:572)
>
>at
> org.apache.nifi.avro.AvroTypeUtil.createAvroRecord(AvroTypeUtil.java:432)
>
>at
> org.apache.nifi.processors.parquet.record.AvroParquetHDFSRecordWriter.write(AvroParquetHDFSRecordWriter.java:43)
>
>at
> org.apache.nifi.processors.hadoop.record.HDFSRecordWriter.write(HDFSRecordWriter.java:48)
>
>at
> org.apache.nifi.processors.hadoop.AbstractPutHDFSRecord.lambda$null$0(AbstractPutHDFSRecord.java:324)
>
>at
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2174)
>
>at
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2144)
>
>at
> org.apache.nifi.processors.hadoop.AbstractPutHDFSRecord.lambda$onTrigger$1(AbstractPutHDFSRecord.java:305)
>
>at java.security.AccessController.doPrivileged(Native
> Method)
>
>at javax.security.auth.Subject.doAs(Subject.java:360)
>
>at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
>
>at
> org.apache.nifi.processors.hadoop.AbstractPutHDFSRecord.onTrigger(AbstractPutHDFSRecord.java:272)
>
>at
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>
>at
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
>
>at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>
>at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>
>at
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
>
>at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>
>at
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>
>at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>
>at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>
>at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>
>at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>
>at java.lang.Thread.run(Thread.java:748)
>
>
>
>
>
> Please let me know if I’m doing anything wrong.
>
>
>
> Regards,
>
> Mohit Jain
>
>


Re: PutParquet fails to convert avro logical decimal type.

2018-02-15 Thread Juan Pablo Gardella
Are you using Nifi 1.5.0? If not, try with it first. There are bugs in
older versions related to Record/Avro.

On Fri, 16 Feb 2018 at 02:46  wrote:

> Hi all,
>
>
>
> I am using QueryDatabaseTable to extract records from mysql. I have set
> Logical Data Type to true. I am using the PutParquet processor to write to
> HDFS. It is not able to convert the logical decimal type.
>
>
>
> It throws an exception :-
>
> 2018-02-15 17:59:05,189 ERROR [Timer-Driven Process Thread-10]
> o.a.nifi.processors.parquet.PutParquet
> PutParquet[id=01611011-e4a8-106a-f933-eb66d923cfd1] Failed to write due to
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException:
> Cannot convert value 1234567.2 of type class java.lang.Double because no
> compatible types exist in the UNION for field dectype: {}
>
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException:
> Cannot convert value 1234567.2 of type class java.lang.Double because no
> compatible types exist in the UNION for field dectype
>
>at
> org.apache.nifi.avro.AvroTypeUtil.convertUnionFieldValue(AvroTypeUtil.java:667)
>
>at
> org.apache.nifi.avro.AvroTypeUtil.convertToAvroObject(AvroTypeUtil.java:572)
>
>at
> org.apache.nifi.avro.AvroTypeUtil.createAvroRecord(AvroTypeUtil.java:432)
>
>at
> org.apache.nifi.processors.parquet.record.AvroParquetHDFSRecordWriter.write(AvroParquetHDFSRecordWriter.java:43)
>
>at
> org.apache.nifi.processors.hadoop.record.HDFSRecordWriter.write(HDFSRecordWriter.java:48)
>
>at
> org.apache.nifi.processors.hadoop.AbstractPutHDFSRecord.lambda$null$0(AbstractPutHDFSRecord.java:324)
>
>at
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2174)
>
>at
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2144)
>
>at
> org.apache.nifi.processors.hadoop.AbstractPutHDFSRecord.lambda$onTrigger$1(AbstractPutHDFSRecord.java:305)
>
>at java.security.AccessController.doPrivileged(Native
> Method)
>
>at javax.security.auth.Subject.doAs(Subject.java:360)
>
>at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
>
>at
> org.apache.nifi.processors.hadoop.AbstractPutHDFSRecord.onTrigger(AbstractPutHDFSRecord.java:272)
>
>at
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>
>at
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
>
>at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>
>at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>
>at
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
>
>at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>
>at
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>
>at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>
>at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>
>at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>
>at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>
>at java.lang.Thread.run(Thread.java:748)
>
>
>
>
>
> Please let me know if I’m doing anything wrong.
>
>
>
> Regards,
>
> Mohit Jain
>


Re: [EXTERNAL EMAIL]Re: Kerberos hive failure to renew tickets

2018-01-10 Thread Juan Pablo Gardella
Did you try disabling ticket cache usage?

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="xxx""
  principal=""
  storeKey=true
  useTicketCache=false
  serviceName="zookeeper";
};

On Wed, 10 Jan 2018 at 12:57 Schneider, Jonathan  wrote:

> For reference, the specific error I get is:
>
> 2018-01-10 09:55:55,988 ERROR [Timer-Driven Process Thread-10]
> o.apache.nifi.processors.hive.PutHiveQL
> PutHiveQL[id=3a4f82fd-015f-1000--5aa22fb2] Failed to update Hive
> for
> StandardFlowFileRecord[uuid=7ba71cdb-7557-4eab-bd2d-bd89add1c73f,claim=StandardContentClaim
> [resourceClaim=StandardResourceClaim[id=1515205062419-12378,
> container=default, section=90], offset=342160,
> length=247],offset=0,name=vp_employmentstat.orc,size=247] due to
> java.sql.SQLException: org.apache.thrift.transport.TTransportException:
> org.apache.http.client.ClientProtocolException; it is possible that
> retrying the operation will succeed, so routing to retry:
> java.sql.SQLException: org.apache.thrift.transport.TTransportException:
> org.apache.http.client.ClientProtocolException
> java.sql.SQLException: org.apache.thrift.transport.TTransportException:
> org.apache.http.client.ClientProtocolException
> at
> org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:308)
> at
> org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:241)
> at
> org.apache.hive.jdbc.HivePreparedStatement.execute(HivePreparedStatement.java:98)
> at
> org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
> at
> org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
> at
> org.apache.nifi.processors.hive.PutHiveQL.lambda$null$3(PutHiveQL.java:218)
> at
> org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
> at
> org.apache.nifi.processors.hive.PutHiveQL.lambda$new$4(PutHiveQL.java:199)
> at
> org.apache.nifi.processor.util.pattern.Put.putFlowFiles(Put.java:59)
> at
> org.apache.nifi.processor.util.pattern.Put.onTrigger(Put.java:101)
> at
> org.apache.nifi.processors.hive.PutHiveQL.lambda$onTrigger$6(PutHiveQL.java:255)
> at
> org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
> at
> org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
> at
> org.apache.nifi.processors.hive.PutHiveQL.onTrigger(PutHiveQL.java:255)
> at
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1118)
> at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
> at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> at
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.thrift.transport.TTransportException:
> org.apache.http.client.ClientProtocolException
> at
> org.apache.thrift.transport.THttpClient.flushUsingHttpClient(THttpClient.java:297)
> at
> org.apache.thrift.transport.THttpClient.flush(THttpClient.java:313)
> at
> org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
> at
> org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
> at
> org.apache.hive.service.cli.thrift.TCLIService$Client.send_ExecuteStatement(TCLIService.java:223)
> at
> org.apache.hive.service.cli.thrift.TCLIService$Client.ExecuteStatement(TCLIService.java:215)
> at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1374)
> at com.sun.proxy.$Proxy174.ExecuteStatement(Unknown Source)
> at
> org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:299)
> ... 24 common frames omitted
> Caused by: org.apache.http.client.ClientProtocolEx

Re: NULL String

2017-12-19 Thread Juan Pablo Gardella
Which Nifi version are you using? 1.3.0 and 1.4.0 have some bugs with null.
Check NIFI-4069 and NIFI-4671.

Juan

On Tue, 19 Dec 2017 at 15:57 Bryan Bende  wrote:

> Hello,
>
> What does your schema look like?
>
> You would need the schema to indicate that the field is nullable like this:
>
> "type": ["string", "null"] }
>
> If you only have "type": ["string"] } then it will produce an error when
> reading a null value.
>
> -Bryan
>
>
> On Tue, Dec 19, 2017 at 1:46 PM, Aruna Sankaralingam <
> aruna.sankaralin...@cormac-corp.com> wrote:
>
>> I have a CSV file in which some of the fields have NULL values. I am
>> getting this error that “Field cannot be null”. How do I let Nifi know to
>> accept NULL values?
>>
>>
>>
>> I see this property in the CSV reader called “NULL String”. Is there
>> anything that I can give here?
>>
>>
>>
>>
>
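To make Bryan's point concrete, a toy schema for a CSV with a nullable column
might look like this (field names are hypothetical; null is listed first in
the union so a null default is valid):

{
  "type": "record",
  "name": "CsvRow",
  "fields": [
    { "name": "id",   "type": "string" },
    { "name": "city", "type": ["null", "string"], "default": null }
  ]
}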


Re: Install of a Custom Processor

2017-12-19 Thread Juan Pablo Gardella
No, only the NAR needs to go into the nifi/lib folder. Remember to add the
META-INF/services file.
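Concretely, the services file lives inside the processor jar and maps the
extension type to your implementation; a sketch with a hypothetical class
name:

  # File: META-INF/services/org.apache.nifi.processor.Processor
  com.example.nifi.MyCustomProcessor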

On Tue, 19 Dec 2017 at 09:00 James McMahon  wrote:

> Good morning. I have a question about installation of a custom processor.
>
> Under my -bundle directory I have two subdirs. One is a -nar subdir,
> within which I find a nar file after my mvn install. The second of which is
> a -processors subdir, within which I find a jar file after my mvn install.
>
> When I install my custom processor, I understand that the nar file should
> be placed under the nifi lib subdir where I find all the other nifi nar
> files that come "out of the box". Does the jar need to be installed
> somewhere too, prior to stopping/restarting my nifi service?
>
> I am running a 0.7.x nifi instance because of some legacy constraints.
>
> Thank you in advance for your help. -Jim
>


Re: ValidateRecord1.4.0 vs ConvertJsonToAvro1.4.0 regarding required field in nested object

2017-12-06 Thread Juan Pablo Gardella
Could you share a reproducible repo or files?

On Wed, Dec 6, 2017 at 07:00 Martin Mucha 
wrote:

> Hi,
>
> I have JSON like:
>
> {
>   "a": {
> "b": "1"
>   }
> }
>
> and corresponding avro schema (written for the sake of this e-mail, so it
> may not be 100% accurate)
>
> {
>   "name": "aRecord",
>   "type": "record",
>   "namespace": "a",
>   "fields": [
> {
>   "name": "a",
>   "type": {
> "name": "bRecord",
> "type":"record",
> "fields": [
>   { "name": "b", "type": "string"}
> ]
>   }
> }
>
>   ]
> }
>
> In the ConvertJsonToAvro processor, a json missing field "b":
>
> {"a":{}}
>
> will be rejected, while in ValidateRecord it will be accepted as valid
> (even though it is not valid according to the schema). Is there anything I
> can do about it? Is it a bug?
>
> thanks,
> Martin.
>


Re: package org.apache.nifi.serialization

2017-12-02 Thread Juan Pablo Gardella
Check at
https://github.com/apache/nifi/tree/master/nifi-nar-bundles/nifi-extension-utils/nifi-record-utils
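If it's the Maven coordinates you're after, the serialization classes are
pulled in with dependencies along these lines (the exact artifact split is
from memory; match the version to your Nifi release):

<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-record</artifactId>
    <version>1.4.0</version>
</dependency>
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-record-serialization-service-api</artifactId>
    <version>1.4.0</version>
</dependency>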

On Sat, Dec 2, 2017 at 18:46 Uwe Geercken 
wrote:

> Hello,
>
> I wanted to update my processor. But I can't seem to find the maven
> archetype which provides the serialization classes.
>
> Can anybody help, please?
>
> rgds,
>
> Uwe
>


Re: Extract Record field to an attribute

2017-11-21 Thread Juan Pablo Gardella
Awesome! Thanks Mark, I hadn't thought about multi-record flow files; your
suggestions make a lot of sense.

Thanks,
Juan

On Tue, 21 Nov 2017 at 11:29 Mark Payne  wrote:

> Juan,
>
> Working with attributes is a little bit trickier in this case because the
> record-oriented processors
> are generally intended to work on 'streams' of records. I.e., each
> FlowFile can have 1 record or
> it can be made up of thousands or more records. So if you have many
> records in a FlowFile, it's
> a little more difficult to extract a field value into an attribute.
>
> So what we do is a little bit different here. We need to group together
> 'like records' into separate
> FlowFiles. For example, if we have 5 records in a FlowFile, and
> /person/name is 'Juan' for the first 2
> and is 'Mark' for the last 3, then we can use PartitionRecord to separate
> our FlowFile into two separate
> FlowFiles, the first containing those records where /person/name is 'Juan'
> and the second FlowFile
> containing those records where /person/name is 'Mark'. Once we have done
> that, it now makes more sense
> to extract the name 'Juan' and the name 'Mark' into FlowFile attributes.
> And that's just what PartitionRecord
> does. Each outbound FlowFile will have an attribute that is equal to the
> value of the field specified.
>
> So for example, if you add a single property to PartitionRecord named
> 'person' with a value of /person/name
> and then send in that example FlowFile mentioned above, then you'd get out
> 2 FlowFiles. The first would
> have an attribute 'person' (the name of the property you added is the name
> of the attribute) with a value of
> 'Juan' and the second would have an attribute 'person' with a value of
> 'Mark'.
>
> Also of note - the PartitionRecord processor takes a Record Reader and
> Writer, so this allows you to read
> the data in as JSON and then write it out as Avro. Essentially, it allows
> for an implicit record conversion, so
> you will no longer need your ConvertRecord processor in your flow.
>
> The documentation for PartitionRecord can be found here [1]. If you click
> the 'Additional Details...' link at the
> end of the first paragraph, it will provide quite a bit more documentation
> with examples. Hopefully this all
> makes sense, but if you have further questions, I am happy to elaborate if
> there is something that's not clear.
>
> Thanks!
> -Mark
>
> [1]
> http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PartitionRecord/index.html
>
>
>
> > On Nov 21, 2017, at 7:10 AM, Juan Pablo Gardella <
> gardellajuanpa...@gmail.com> wrote:
> >
> > Hello all,
> >
> > I'm working with Nifi records. Currently I have JSON converted to an Avro
> > object using the ConvertRecord processor. I would like to know if it is
> possible to use something similar to UpdateAttribute to add an attribute
> which evaluates a record expression. Something similar to:
> >
> > attribute1 -> ${/person/name}
> >
> > Juan
>
>


Extract Record field to an attribute

2017-11-21 Thread Juan Pablo Gardella
Hello all,

I'm working with Nifi records. Currently I have JSON converted to an Avro
object using the ConvertRecord processor. I would like to know if it is
possible to use something similar to UpdateAttribute to add an attribute
which evaluates a record expression. Something similar to:

attribute1 -> ${/person/name}

Juan


Re: putdatabaserecord

2017-11-15 Thread Juan Pablo Gardella
Could you share the logs? Remember, PostgreSQL does not follow the SQL
standard here:

*"Quoting an identifier also makes it case-sensitive, whereas unquoted
names are always folded to lower case. For example, the
identifiers FOO, foo, and "foo" are considered the same by PostgreSQL,
but "Foo" and "FOO" are different from these three and each other. (The
folding of unquoted names to lower case in PostgreSQL is incompatible with
the SQL standard, which says that unquoted names should be folded to upper
case. Thus, foo should be equivalent to "FOO" not "foo" according to the
standard. If you want to write portable applications you are advised to
always quote a particular name or never quote it.)"*

Maybe you need to use quoting.
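To illustrate the folding rule with a hypothetical table:

  CREATE TABLE "MixedCase" (id int);   -- quoted: stored exactly as MixedCase
  SELECT * FROM MixedCase;             -- fails: unquoted folds to mixedcase
  SELECT * FROM "MixedCase";           -- works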


On Wed, 15 Nov 2017 at 11:59 Austin Duncan  wrote:

> All,
>
> I am using putdatabaserecord to insert into a postgres table. The input is
> a flat json record that is being read by a jsonpath reader. The data does
> get inserted, but when I look at the table it appears that all of the data
> is null. I also tried using a json tree reader and that didn't work either.
> Any ideas?
>
>
> --
> ​Austin Duncan
> *​Researcher​*
>
> PYA Analytics
> 2220 Sutherland Avenue
> 
> Knoxville, TN 37919
> 
> 423-260-4172
>
> 
>


Re: Cannot convert to timestamp error using putsql processor NiFi 1.3.0

2017-10-16 Thread Juan Pablo Gardella
Adjust the date format properly. I see the value's format is different from
the specified format.
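For example, if the incoming value looks like 2017/10/16 08:45:00 but the
column expects yyyy-MM-dd HH:mm:ss, an UpdateAttribute rule along these
lines should work (the attribute name and formats are hypothetical):

  created_ts -> ${created_ts:toDate('yyyy/MM/dd HH:mm:ss'):format('yyyy-MM-dd HH:mm:ss')}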

On Mon, 16 Oct 2017 at 08:50 rakesh  wrote:

> Hi Juan,
>
> As per your suggestion I tried your date format in update attribute
> processor.
> <
> http://apache-nifi-users-list.2361937.n4.nabble.com/file/t310/nifihelp1.png
> >
> <
> http://apache-nifi-users-list.2361937.n4.nabble.com/file/t310/NifiHelp.png
> >
> <
> http://apache-nifi-users-list.2361937.n4.nabble.com/file/t310/nifierror.png
> >
>
>
> My flow is
> EvaluateJsonPath>updateAttribute->ConvertJsonToSql>PutSql.
>
>
>
> --
> Sent from: http://apache-nifi-users-list.2361937.n4.nabble.com/
>

