Re: Bug with Redis Distributed Map Cache in NiFi 1.8.0?
Looks like this is already a JIRA :)

https://issues.apache.org/jira/browse/NIFI-5795

best,
KT

On Fri, Nov 9, 2018 at 00:30, Bryan Bende wrote:
> Hello,
>
> Thanks for bringing this to our attention.
>
> Most likely this happened as a result of upgrading the version of
> spring-data-redis used by the redis bundle [1].
>
> Looks like it should be an easy fix, but unfortunately wouldn't be
> available in a release for a little while.
>
> Thanks,
>
> Bryan
>
> [1] https://issues.apache.org/jira/browse/NIFI-4811
Re: Problem of connection in Remote Process Group
Hi Jean,

If you haven't, please take a look at this documentation. There are a few example configurations and deployment diagrams you can refer to:

https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#site_to_site_reverse_proxy_properties

Also, here are some Nginx configurations that I used to develop and test the routing feature:

https://github.com/ijokarumawak/nifi-reverseproxy/tree/master/nginx

The S2S routing capability was introduced in NiFi 1.7.0. Hope this helps.

Thanks,
Koji

On Fri, Nov 9, 2018 at 6:38 AM GASCHLER, Jean wrote:
> Hi
>
> I have some difficulties to put in place a working infrastructure with at
> least two secured NiFi(s): the first one calling a remoteProcessGroup
> linked with the second one, which is behind an Nginx reverse proxy.
>
> What should I change in the NiFi configuration of site3 if I want this
> other infrastructure?
>
> Thanks a lot
>
> --Jean Gaschler
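For orientation, here is a minimal sketch of what a front-end route for NiFi HTTP Site-to-Site can look like, based on the administration-guide section linked above. The host names and ports are illustrative assumptions, and the exact header names and the matching `nifi.remote.route.http.*` properties on the NiFi side should be verified against the guide and the example repository rather than taken from this sketch:

```nginx
# Hypothetical reverse-proxy front end for NiFi HTTP Site-to-Site.
# nifi.example.com and nifi0:8443 are example names, not from the thread.
server {
    listen 443 ssl;
    server_name nifi.example.com;        # external host name clients connect to

    location / {
        proxy_pass https://nifi0:8443;   # internal NiFi node behind the proxy
        # Headers the NiFi docs describe for building correct S2S peer URLs
        # when NiFi sits behind a proxy (verify against the admin guide):
        proxy_set_header X-ProxyScheme https;
        proxy_set_header X-ProxyHost nifi.example.com;
        proxy_set_header X-ProxyPort 443;
        proxy_set_header X-ProxyContextPath /;
    }
}
```

The proxied NiFi instance also needs the corresponding site-to-site routing properties in nifi.properties, as documented in the linked guide section.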
Re: Bug with Redis Distributed Map Cache in NiFi 1.8.0?
Hello,

Thanks for bringing this to our attention.

Most likely this happened as a result of upgrading the version of spring-data-redis used by the redis bundle [1].

Looks like it should be an easy fix, but unfortunately wouldn't be available in a release for a little while.

Thanks,

Bryan

[1] https://issues.apache.org/jira/browse/NIFI-4811

On Thu, Nov 8, 2018 at 2:52 PM Ken Tore Tallakstad wrote:
> Hi,
>
> We just upgraded from NiFi 1.7.1 to 1.8.0, and discovered that Processor
> PutDistributedMapCache 1.8.0, configured with "Cache update strategy=Replace
> if present", now fails to update data in Redis (our Redis version is 4.0.10).
> When the update strategy is set to "Keep original" everything works fine.
Re: Tasks / Time extremely high measure of tasks for LogAttribute
On Thu, Nov 8, 2018 at 3:24 PM Colin Williams wrote:
> [image: airflow_tasks_time.png]
Re: Tasks / Time extremely high measure of tasks for LogAttribute
[image: airflow_tasks_time.png]

On Thu, Nov 8, 2018 at 3:23 PM Colin Williams wrote:
> I have a LogAttribute processor connected to the failure of an
> S3PutObject processor. Then I noticed when the Put fails, the
> LogAttribute Tasks/Time shot to 42,045,444 / 00:01:31.664 for In 14
> (109KB) and Out 14 (109KB).
Tasks / Time extremely high measure of tasks for LogAttribute
I have a LogAttribute processor connected to the failure relationship of an S3PutObject processor. I noticed that when the Put fails, the LogAttribute Tasks/Time shoots up to 42,045,444 / 00:01:31.664 for In 14 (109 KB) and Out 14 (109 KB).

I'm not sure about HTML email or attachments, so I will attach the screenshot separately. Just curious what this high task value means.
Re: [EXT] ExecuteSQL: convertToAvroStream failure with SQLite integer
What version of NiFi are you using? An error like this comes up every now and then; one was just fixed in NiFi 1.8.0, but it was related to JDBC drivers that return Long for unsigned ints. 1.8.0 also improved the error message so that it shows the type of the object that was passed into the unresolvable union.

https://github.com/apache/nifi/pull/3032

From: l vic
Date: Thursday, November 8, 2018 at 5:43 PM
To: "users@nifi.apache.org"
Subject: [EXT] ExecuteSQL: convertToAvroStream failure with SQLite integer
ExecuteSQL: convertToAvroStream failure with SQLite integer
Hi,

I am trying to use ExecuteSQL to get an "epoch time" value from a SQLite table:

select start_date from sched

where start_time is defined as INTEGER. If start_date = 1536548297955, I see the following exception:

failed to process due to org.apache.avro.file.DataFileWriter$AppendWriteException: org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 1536548297955; rolling back session: {}
org.apache.avro.file.DataFileWriter$AppendWriteException: org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 1536548297955
  at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
Caused by: org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 1536548297955
  at org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709)

This is obviously an Avro conversion issue, as this works from the sqlite3 CLI. If I try to define it as BIGINT, I get:

org.apache.avro.UnresolvedUnionException: Not in union ["null","long"]: 1536548297955

Any idea how I can resolve this?

Thanks,
-V
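For context on the first error, a quick arithmetic check (not NiFi code) shows why a `["null","int"]` union can never hold this value: Avro's `int` is a 32-bit signed integer, and the reported epoch-milliseconds value exceeds its range, while Avro `long` (64-bit) holds it comfortably.

```python
# Avro "int" is a 32-bit signed integer; Avro "long" is 64-bit.
value = 1536548297955          # epoch-milliseconds value from the report
int32_max = 2**31 - 1          # 2147483647
int64_max = 2**63 - 1

print(value > int32_max)       # True  -> cannot be written as Avro "int"
print(value <= int64_max)      # True  -> fits in Avro "long"
```

That the `["null","long"]` attempt also failed suggests the JDBC driver handed the writer an object of some other type; the improved error message in 1.8.0 mentioned in the reply should reveal which.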
Re: Nulls in input data throwing exceptions when using QueryRecord
Hi Mandeep,

Thanks for reporting this issue! Koji filed the JIRA [1] and submitted a PR for it [2]. I just merged it into master and it will be released with NiFi 1.9.0. You can also build the standard processors NAR from the master branch if you need the fix quickly.

[1] https://issues.apache.org/jira/browse/NIFI-5802
[2] https://github.com/apache/nifi/pull/3158

Pierre

On Wed, Nov 7, 2018 at 12:54, Mandeep Gill wrote:
> Hi,
>
> We're hitting a couple of issues working with nulls when using QueryRecord
> on both NiFi 1.7.1 and 1.8.0.
>
> Things work as expected for strings. However, when using other primitive
> types as defined by the Avro schema, such as boolean, long, and double,
> null values in the input data aren't converted to NULLs within the SQL
> engine / Calcite. Instead they appear to remain as Java null values and
> throw NPEs when attempting to use them within a query, or are simply
> returned as the output.
>
> To give some examples, given the following record data and schema (tested
> using both JSON and Avro record readers/writers):
>
> [ { "str_test" : "hello1", "bool_test" : true }, { "str_test" : null, "bool_test" : null } ]
>
> {
>   "type": "record",
>   "name": "schema",
>   "fields": [
>     { "name": "str_test", "type": [ "string", "null" ], "default": null },
>     { "name": "bool_test", "type": [ "boolean", "null" ], "default": null }
>   ]
> }
>
> The following queries return the empty resultset:
>
> select 'res' as res from FLOWFILE where bool_test IS NULL
> select 'res' as res from FLOWFILE where bool_test IS UNKNOWN
>
> and the query below returns a resultset of count 2:
>
> select 'res' from FLOWFILE where bool_test IS NOT NULL
>
> The query below works as expected, suggesting things work fine for strings:
>
> select 'res' as res from FLOWFILE where str_test IS NULL
>
> However, the following query throws a NullPointerException (see [1]) on
> trying to convert the null to a boolean within the output writer:
>
> select * from FLOWFILE where bool_test IS NOT NULL
>
> The null values for these types seem to be treated as distinct from the
> NULLs within the SQL engine, as the following query returns the empty
> resultset:
>
> select 'res' as res from FLOWFILE where CAST(NULL as boolean) IS DISTINCT FROM bool_test
>
> and the following query gives a RuntimeException (see [2]):
>
> select (COALESCE(bool_test, TRUE)) as res from flowfile
>
> Given all this we're unable to make use of datasets with nulls. Are nulls
> only supported for strings, or is there perhaps something we're doing
> wrong here in our setup/config? One thing we've noticed: running a simple
> "SELECT * from FLOWFILE" returns a nullable type for strings in the
> output Avro schema but not for other primitives, even if they were
> nullable in the input schema - which could be related.
>
> Cheers,
> Mandeep
>
> [1] org.apache.nifi.processor.exception.ProcessException: IOException
> thrown from QueryRecord[id=43ee29ff-0166-1000-28bd-06dd07c1425d]:
> java.io.IOException: org.apache.avro.file.DataFileWriter$AppendWriteException:
> java.lang.NullPointerException: null of boolean in field bool_test of
> org.apache.nifi.nifiRecord
>   at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2667)
>   at org.apache.nifi.processors.standard.QueryRecord.onTrigger(QueryRecord.java:309)
>   at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException:
> org.apache.avro.file.DataFileWriter$AppendWriteException:
> java.lang.NullPointerException: null of boolean in field bool_test of
> org.apache.nifi.nifiRecord
>   at org.apache.nifi.processors.standard.QueryRecord$1.process(QueryRecord.java:327)
>   at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2648)
>   ... 12 common frames omitted
> Caused by: org.apache.avro.file.DataFileWriter$AppendWriteException:
> java.lang.NullPointerException: null of boolean in field bool_test of
> org.apache.nifi.nifiRecord
Problem of connection in Remote Process Group
Hi

I am having some difficulty putting in place a working infrastructure with at least two secured NiFi instances: the first one calling a Remote Process Group linked to the second one, which is behind an Nginx reverse proxy. I am looking for someone who has experience with such a configuration, because the NiFi documentation is not clear about this.

[cid:image001.png@01D4778C.272ACC30]

What should I change in the NiFi configuration of site3 if I want this other infrastructure?

[cid:image002.png@01D4778C.272ACC30]

Thanks a lot

--Jean Gaschler

This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.
Re: Nulls in input data throwing exceptions when using QueryRecord
Hi Mandeep,

Thanks for reporting the issue and the detailed explanation. That's very helpful! I was able to reproduce the issue and found a possible solution. Filed a JIRA; a PR will be submitted shortly to fix it.

https://issues.apache.org/jira/browse/NIFI-5802

Thanks,
Koji

On Wed, Nov 7, 2018 at 8:54 PM Mandeep Gill wrote:
> Hi,
>
> We're hitting a couple of issues working with nulls when using QueryRecord
> using both NiFi 1.7.1 and 1.8.0.
>
> Things work as expected for strings, however when using other primitive
> types as defined by the avro schema, such as boolean, long, and double,
> null values in the input data aren't converted to NULLs within the SQL
> engine / Calcite.
Bug with Redis Distributed Map Cache in NiFi 1.8.0?
Hi,

We just upgraded from NiFi 1.7.1 to 1.8.0, and discovered that the processor PutDistributedMapCache 1.8.0, configured with "Cache update strategy=Replace if present", now fails to update data in Redis (our Redis version is 4.0.10). When the update strategy is set to "Keep original" everything works fine. The error thrown is:

2018-11-08 12:59:03,421 ERROR [Timer-Driven Process Thread-4] o.a.n.p.standard.PutDistributedMapCache PutDistributedMapCache[id=cc1839c5-4d17-15ff-937a-532b1b2025d1] PutDistributedMapCache[id=cc1839c5-4d17-15ff-937a-532b1b2025d1] failed to process session due to java.lang.IllegalArgumentException: Option must not be null!; Processor Administratively Yielded for 1 sec: java.lang.IllegalArgumentException: Option must not be null!
java.lang.IllegalArgumentException: Option must not be null!
  at org.springframework.util.Assert.notNull(Assert.java:198)
  at org.springframework.data.redis.connection.jedis.JedisStringCommands.set(JedisStringCommands.java:159)
  at org.springframework.data.redis.connection.DefaultedRedisConnection.set(DefaultedRedisConnection.java:281)
  at org.apache.nifi.redis.service.RedisDistributedMapCacheClientService.lambda$put$3(RedisDistributedMapCacheClientService.java:191)
  at org.apache.nifi.redis.service.RedisDistributedMapCacheClientService.withConnection(RedisDistributedMapCacheClientService.java:344)
  at org.apache.nifi.redis.service.RedisDistributedMapCacheClientService.put(RedisDistributedMapCacheClientService.java:189)
  at sun.reflect.GeneratedMethodAccessor759.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
  at com.sun.proxy.$Proxy146.put(Unknown Source)
  at org.apache.nifi.processors.standard.PutDistributedMapCache.onTrigger(PutDistributedMapCache.java:202)
  at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
  at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
  at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
  at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)

We have reproduced this with NiFi standalone and clustered, and with Redis Sentinel and standalone. In 1.7.1 it works fine. It's a bit of a downer for us, since our flow depends heavily on Redis cache overwrite updates. Anyone else encountered this or able to reproduce on 1.8?

best,

KT :)
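For readers unfamiliar with the two settings, the intended semantics of the cache-update strategies named above can be sketched over a plain dict. This is illustrative only, not NiFi code; the function name and signature are made up for the example:

```python
# Illustrative semantics of PutDistributedMapCache's update strategies
# ("Replace if present" vs "Keep original"), sketched over a plain dict.
def put(cache, key, value, replace_if_present=True):
    if not replace_if_present and key in cache:
        return False          # "Keep original": existing entry is left alone
    cache[key] = value        # "Replace if present": unconditional overwrite
    return True

cache = {"k": "old"}
put(cache, "k", "new", replace_if_present=False)
print(cache["k"])             # old  -> existing value kept
put(cache, "k", "new", replace_if_present=True)
print(cache["k"])             # new  -> existing value overwritten
```

The bug reported in this thread is that the overwrite path (the first branch not taken) fails against Redis in 1.8.0, while the keep-original path still works.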