[jira] [Updated] (NIFI-1624) ExtractText - Add option to Throw Failure if Text is greater than Capture Group

2019-06-25 Thread Randy Bovay (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Bovay updated NIFI-1624:
--
Affects Version/s: 1.5.0
   1.6.0
   1.7.0
   1.8.0
   1.9.0

> ExtractText - Add option to Throw Failure if Text is greater than Capture 
> Group
> ---
>
> Key: NIFI-1624
> URL: https://issues.apache.org/jira/browse/NIFI-1624
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 0.4.1, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0
>Reporter: Randy Bovay
>Priority: Major
>
> ExtractText allows us to specify the "Maximum Capture Group Length"
> Occasionally I will get a text string that is greater than what I've 
> specified.
> In these unexpected situations, I would LIKE to route this to something that 
> can handle it, or throw an error so I can look into why this is larger than 
> expected.
> The ask is to make the behavior configurable via a LengthExceededBehavior 
> property, with options of 'truncate' or 'failure' (sketched below).
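
A minimal sketch of the requested behavior, assuming a hypothetical 
LengthExceededBehavior setting with 'truncate' and 'failure' values; the names 
and structure below are illustrative only, not NiFi's actual ExtractText code.

{code:java}
// Illustrative sketch only -- LengthExceededBehavior is the property proposed in this ticket.
enum LengthExceededBehavior { TRUNCATE, FAILURE }

final class CaptureLengthSketch {
    /**
     * Returns the text to evaluate against the regex, or null to signal that the
     * flow file should instead be routed to a 'failure' relationship.
     */
    static String prepareInput(final String input, final int maxLength,
                               final LengthExceededBehavior behavior) {
        if (input.length() <= maxLength) {
            return input;                          // within the configured limit
        }
        if (behavior == LengthExceededBehavior.TRUNCATE) {
            return input.substring(0, maxLength);  // today's behavior: silently truncate
        }
        return null;                               // FAILURE: caller routes it and investigates
    }
}
{code}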



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5117) AMQP Consumer: Error during creation of Flow File results in lost message

2019-01-29 Thread Randy Bovay (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755218#comment-16755218
 ] 

Randy Bovay commented on NIFI-5117:
---

This story is fixed by the 1.7.1 changes that make the ConsumeAMQP processor 
act as a consumer by default.

In fact, we've tested this: in backpressure situations where NiFi is not able 
to move the message to the queue, a nice error or warning message is created 
and the message is tagged as unacknowledged in RabbitMQ (confirmed by our 
Rabbit support team), then pulled back in on subsequent pulls.


We DO have 'Auto-Acknowledge messages' set to False.

> AMQP Consumer: Error during creation of Flow File results in lost message
> -
>
> Key: NIFI-5117
> URL: https://issues.apache.org/jira/browse/NIFI-5117
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0, 1.4.0, 1.5.0, 1.6.0
>Reporter: Edward Armes
>Priority: Major
>
> The AMQP Consumer performs a "basicGet()". The way this basicGet is called 
> results in the message being dequeued from the AMQP queue.
> If a processor instance fails to submit a flow file to the output (for example 
> because of an error in "session.write()"), or the processor is unexpectedly 
> halted before the flow file is created and persisted, the message consumed 
> from the AMQP queue is lost and can't be recovered.
> Reference: 
> [https://rabbitmq.github.io/rabbitmq-java-client/api/current/com/rabbitmq/client/Channel.html#basicGet-java.lang.String-boolean-:]
> A potential fix here would be to:
>  # AMQPConsumer.java: Change the call "basicGet(this.queueName, true)" -> 
> "basicGet(this.queueName, false)"
>  # AMQPConsumer.java: Add a new method that wraps the basicAck() and 
> basicNack() methods, taking a long (the delivery tag) and a boolean (success); 
> if success is true, basicAck() is called, otherwise basicNack() with requeue 
> is called
>  # ConsumerAMQP.java: Add call(s) to "consumer" that invoke the new method as 
> needed on success and on error (see the sketch below).
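
A rough sketch of the fix proposed above, using only RabbitMQ Java client calls 
from the linked Channel javadoc (basicGet, basicAck, basicNack). The class and 
method names are illustrative and this is not the actual AMQPConsumer.java code.

{code:java}
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.GetResponse;

import java.io.IOException;

// Sketch of the proposal above -- not NiFi's actual AMQPConsumer.java.
class ManualAckConsumerSketch {
    private final Channel channel;
    private final String queueName;

    ManualAckConsumerSketch(final Channel channel, final String queueName) {
        this.channel = channel;
        this.queueName = queueName;
    }

    /** Step 1: fetch without auto-ack so the broker keeps the message until we confirm it. */
    GetResponse consume() throws IOException {
        return channel.basicGet(queueName, false); // was basicGet(this.queueName, true)
    }

    /** Step 2: acknowledge on success, or nack with requeue on failure. */
    void acknowledge(final long deliveryTag, final boolean success) throws IOException {
        if (success) {
            channel.basicAck(deliveryTag, false);        // single message, not multiple
        } else {
            channel.basicNack(deliveryTag, false, true); // requeue so the message is not lost
        }
    }
}
{code}

Per step 3, ConsumerAMQP.java would then call acknowledge() once the flow file 
has been created and persisted, or with success = false on error.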



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5710) Minor bugs introduced into 1.8.0

2018-10-29 Thread Randy Bovay (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668067#comment-16668067
 ] 

Randy Bovay commented on NIFI-5710:
---

I just noticed this issue, and have a question / concern around the third 
bullet point.

" - When load balancing a connection, the same UUID is assigned to the flowfile 
on the receiving side. It should be a unique UUID on the receiving side and 
just reference the UUID of the sending side in the Source System UUID field of 
the RECEIVE provenance event."

 

We actually leverage this UUID value to be able to trace a flowfile throughout 
the system.  Does this imply that we will no longer be able to do that?

 

> Minor bugs introduced into 1.8.0
> 
>
> Key: NIFI-5710
> URL: https://issues.apache.org/jira/browse/NIFI-5710
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> I've encountered a couple of minor bugs in 1.8.0. Since they can be fixed 
> before 1.8.0 is released and they are quite minor, I am creating a single 
> Jira to encompass them all:
>  * When obtaining processor diagnostics, the garbage collection info is 
> sorted in the wrong order in a cluster and not sorted at all in standalone 
> mode.
>  * When obtaining processor diagnostics, a NullPointerException is thrown if 
> the Processor allows the user to reference a controller service but no 
> controller service is currently configured.
>  * When load balancing a connection, the same UUID is assigned to the 
> flowfile on the receiving side. It should be a unique UUID on the receiving 
> side and just reference the UUID of the sending side in the Source System 
> UUID field of the RECEIVE provenance event.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5456) PutKinesisStream - Fails to work with AWS Private Link endpoint

2018-08-15 Thread Randy Bovay (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16581506#comment-16581506
 ] 

Randy Bovay commented on NIFI-5456:
---

[~sivaprasanna], [~godiarie]
I'm not sure how the Credentials Provider factors into building the URL for 
Private Link.  Is that logic / code NOT in the original deprecated class?

I do see how actually making a changeover to a new class may be a bigger story 
/ new feature, but this current issue is a 'bug', which could go in prior to 
2.0.  I'll have to defer to the Committee on that.  If it's possible to fix 
this using the current code base though, that seems reasonable.

> PutKinesisStream - Fails to work with AWS Private Link endpoint
> ---
>
> Key: NIFI-5456
> URL: https://issues.apache.org/jira/browse/NIFI-5456
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0, 1.7.1
> Environment: RedHat 6
>Reporter: Ariel Godinez
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>  Labels: easyfix
>
> NiFi version: 1.6.0
> PutKinesisStream fails to put due to invalid signing information when using 
> an AWS Private Link as the endpoint override URL. The endpoint override URL 
> pattern for private links is shown below, along with the error that NiFi 
> outputs when we attempt to use this type of URL as the 'Endpoint Override 
> URL' property value.
> Endpoint Override URL: 
> [https://vpce-|https://vpce-/].kinesis.us-east-2.vpce.amazonaws.com
> ERROR [Timer-Driven Process Thread-11] "o.a.n.p.a.k.stream.PutKinesisStream" 
> PutKinesisStream[id=4c314e25-0164-1000--9bd79c77] Failed to publish 
> due to exception com.amazonaws.services.kinesis.model.AmazonKinesisException: 
> Credential should be scoped to a valid region, not 'vpce'.  (Service: 
> AmazonKinesis; Status Code: 400; Error Code: InvalidSignatureException; 
> Request ID: 6330b83c-a64e-4acf-b892-a505621cf78e) flowfiles 
> [StandardFlowFileRecord[uuid=ba299cec-7cbf-4750-a766-c348b5cd9c73,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1532469012962-1, 
> container=content002, section=1], offset=2159750, 
> length=534625],offset=0,name=900966573101260,size=534625]]
>  
> It looks like 'vpce' is being extracted from the url as the region name when 
> it should be getting 'us-east-2'. We were able to get this processor to work 
> correctly by explicitly passing in the region and service using 
> 'setEndpoint(String endpoint, String serviceName, String regionId)' instead 
> of 'setEndpoint(String endpoint)' in 
> 'nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/AbstractAWSProcessor.java'
>  line 289
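
A sketch of the workaround described in the last paragraph, using the 
(deprecated) AWS SDK v1 setEndpoint overloads; the VPC endpoint hostname below 
is a placeholder, since the real one is redacted in the report.

{code:java}
import com.amazonaws.services.kinesis.AmazonKinesisClient;

// Illustrative only -- not the actual AbstractAWSProcessor change.
@SuppressWarnings("deprecation")
class KinesisEndpointSketch {
    void configure(final AmazonKinesisClient client) {
        final String vpceEndpoint = "https://vpce-EXAMPLE.kinesis.us-east-2.vpce.amazonaws.com";

        // Current behavior: the SDK parses the region out of the hostname and
        // ends up with 'vpce', which breaks request signing.
        // client.setEndpoint(vpceEndpoint);

        // Workaround: pass the service name and region explicitly so credentials
        // are scoped to us-east-2 instead of 'vpce'.
        client.setEndpoint(vpceEndpoint, "kinesis", "us-east-2");
    }
}
{code}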



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5456) PutKinesisStream - Fails to work with AWS Private Link endpoint

2018-08-09 Thread Randy Bovay (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575311#comment-16575311
 ] 

Randy Bovay commented on NIFI-5456:
---

Any updates?

 

> PutKinesisStream - Fails to work with AWS Private Link endpoint
> ---
>
> Key: NIFI-5456
> URL: https://issues.apache.org/jira/browse/NIFI-5456
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0, 1.7.1
> Environment: RedHat 6
>Reporter: Ariel Godinez
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>  Labels: easyfix
>
> NiFi version: 1.6.0
> PutKinesisStream fails to put due to invalid signing information when using 
> an AWS Private Link as the endpoint override URL. The endpoint override URL 
> pattern for private links is like below along with the error that NiFi 
> outputs when we attempt to use this type of URL as the 'Endpoint Override 
> URL' property value.
> Endpoint Override URL: 
> [https://vpce-|https://vpce-/].kinesis.us-east-2.vpce.amazonaws.com
> ERROR [Timer-Driven Process Thread-11] "o.a.n.p.a.k.stream.PutKinesisStream" 
> PutKinesisStream[id=4c314e25-0164-1000--9bd79c77] Failed to publish 
> due to exception com.amazonaws.services.kinesis.model.AmazonKinesisException: 
> Credential should be scoped to a valid region, not 'vpce'.  (Service: 
> AmazonKinesis; Status Code: 400; Error Code: InvalidSignatureException; 
> Request ID: 6330b83c-a64e-4acf-b892-a505621cf78e) flowfiles 
> [StandardFlowFileRecord[uuid=ba299cec-7cbf-4750-a766-c348b5cd9c73,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1532469012962-1, 
> container=content002, section=1], offset=2159750, 
> length=534625],offset=0,name=900966573101260,size=534625]]
>  
> It looks like 'vpce' is being extracted from the url as the region name when 
> it should be getting 'us-east-2'. We were able to get this processor to work 
> correctly by explicitly passing in the region and service using 
> 'setEndpoint(String endpoint, String serviceName, String regionId)' instead 
> of 'setEndpoint(String endpoint)' in 
> 'nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/AbstractAWSProcessor.java'
>  line 289



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5478) PutS3Object support for new Storage Classes for Infrequent Access

2018-08-01 Thread Randy Bovay (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Bovay updated NIFI-5478:
--
Description: 
The PutS3Object processor currently supports a StorageClass of 'Standard' and 
'ReducedRedundancy', but not the two additional storage classes that have since 
been released and cost less.

This processor should also provide these new storage classes as an option.
    S3 Standard-Infrequent Access (S3 Standard-IA) Storage
    S3 One Zone-Infrequent Access (S3 One Zone-IA) Storage

[https://aws.amazon.com/s3/storage-classes/]



The Reduced Redundancy class still seems to be available, although it appears 
to be deprecated.  I would recommend no change there, as it's not fully retired 
and AWS appears to be letting it die on the vine.

 

  was:
The PutS3Object processor currently supports StorageClass of 'Standard' and 
'ReducedRedundancy', but not the 2 addtional Storage classes which have been 
released which cost less.


This processor should now provide these storage classes as an option.
   S3 Standard-Infrequent Access (S3 Standard-IA) Storage
   S3 One Zone-Infrequent Access (S3 One Zone-IA) Storage

 

The Reduced Redundancy Class seems to be still available, although appears to 
be deprecated.  I would recommend no change to that as it's not fully retired 
and AWS appears to be letting this die on the vine.

 


> PutS3Object support for new Storage Classes for Infrequent Access
> -
>
> Key: NIFI-5478
> URL: https://issues.apache.org/jira/browse/NIFI-5478
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.0, 1.7.1
>Reporter: Randy Bovay
>Priority: Minor
>
> The PutS3Object processor currently supports StorageClass of 'Standard' and 
> 'ReducedRedundancy', but not the 2 additional Storage classes which have been 
> released which cost less.
> This processor should also provide these new storage classes as an option.
>     S3 Standard-Infrequent Access (S3 Standard-IA) Storage
>     S3 One Zone-Infrequent Access (S3 One Zone-IA) Storage
> [https://aws.amazon.com/s3/storage-classes/]
> The Reduced Redundancy Class seems to be still available, although appears to 
> be deprecated.  I would recommend no change to that as it's not fully retired 
> and AWS appears to be letting this die on the vine.
>  
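
For reference, a hedged sketch of what the requested classes look like with the 
AWS SDK v1 S3 model, which already defines enum constants for both Infrequent 
Access classes; this is illustrative only, not PutS3Object's actual code.

{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.StorageClass;

import java.io.InputStream;

// Illustrative only -- not PutS3Object's actual implementation.
class S3StorageClassSketch {
    void put(final AmazonS3 s3, final String bucket, final String key,
             final InputStream content, final ObjectMetadata metadata) {
        final PutObjectRequest request = new PutObjectRequest(bucket, key, content, metadata)
                // The SDK already exposes the two classes requested here:
                //   StorageClass.StandardInfrequentAccess  (S3 Standard-IA)
                //   StorageClass.OneZoneInfrequentAccess   (S3 One Zone-IA)
                .withStorageClass(StorageClass.StandardInfrequentAccess);
        s3.putObject(request);
    }
}
{code}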



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5478) PutS3Object support for new Storage Classes for Infrequent Access

2018-08-01 Thread Randy Bovay (JIRA)
Randy Bovay created NIFI-5478:
-

 Summary: PutS3Object support for new Storage Classes for 
Infrequent Access
 Key: NIFI-5478
 URL: https://issues.apache.org/jira/browse/NIFI-5478
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 1.7.1, 1.7.0
Reporter: Randy Bovay


The PutS3Object processor currently supports StorageClass of 'Standard' and 
'ReducedRedundancy', but not the 2 additional Storage classes which have been 
released which cost less.


This processor should now provide these storage classes as an option.
   S3 Standard-Infrequent Access (S3 Standard-IA) Storage
   S3 One Zone-Infrequent Access (S3 One Zone-IA) Storage

 

The Reduced Redundancy Class seems to be still available, although appears to 
be deprecated.  I would recommend no change to that as it's not fully retired 
and AWS appears to be letting this die on the vine.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-5456) PutKinesisStream - Fails to work with AWS Private Link endpoint

2018-07-27 Thread Randy Bovay (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559997#comment-16559997
 ] 

Randy Bovay edited comment on NIFI-5456 at 7/27/18 6:44 PM:


We believe this will apply to all AWS processors, not just putKinesisStream.
 So, if NiFi can't use PrivateLink for any processors, that may change priority.


was (Author: randy_b):
[~joewitt], We believe this will apply to all AWS processors, not just 
putKinesisStream.
So, if NiFi can't use PrivateLink for any processors, that may change priority.

> PutKinesisStream - Fails to work with AWS Private Link endpoint
> ---
>
> Key: NIFI-5456
> URL: https://issues.apache.org/jira/browse/NIFI-5456
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0, 1.7.1
> Environment: RedHat 6
>Reporter: Ariel Godinez
>Priority: Major
>  Labels: easyfix
>
> NiFi version: 1.6.0
> PutKinesisStream fails to put due to invalid signing information when using 
> an AWS Private Link as the endpoint override URL. The endpoint override URL 
> pattern for private links is like below along with the error that NiFi 
> outputs when we attempt to use this type of URL as the 'Endpoint Override 
> URL' property value.
> Endpoint Override URL: 
> [https://vpce-|https://vpce-/].kinesis.us-east-2.vpce.amazonaws.com
> ERROR [Timer-Driven Process Thread-11] "o.a.n.p.a.k.stream.PutKinesisStream" 
> PutKinesisStream[id=4c314e25-0164-1000--9bd79c77] Failed to publish 
> due to exception com.amazonaws.services.kinesis.model.AmazonKinesisException: 
> Credential should be scoped to a valid region, not 'vpce'.  (Service: 
> AmazonKinesis; Status Code: 400; Error Code: InvalidSignatureException; 
> Request ID: 6330b83c-a64e-4acf-b892-a505621cf78e) flowfiles 
> [StandardFlowFileRecord[uuid=ba299cec-7cbf-4750-a766-c348b5cd9c73,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1532469012962-1, 
> container=content002, section=1], offset=2159750, 
> length=534625],offset=0,name=900966573101260,size=534625]]
>  
> It looks like 'vpce' is being extracted from the url as the region name when 
> it should be getting 'us-east-2'. We were able to get this processor to work 
> correctly by explicitly passing in the region and service using 
> 'setEndpoint(String endpoint, String serviceName, String regionId)' instead 
> of 'setEndpoint(String endpoint)' in 
> 'nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/AbstractAWSProcessor.java'
>  line 289



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5456) PutKinesisStream - Fails to work with AWS Private Link endpoint

2018-07-27 Thread Randy Bovay (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559997#comment-16559997
 ] 

Randy Bovay commented on NIFI-5456:
---

[~joewitt], We believe this will apply to all AWS processors, not just 
putKinesisStream.
So, if NiFi can't use PrivateLink for any processors, that may change priority.

> PutKinesisStream - Fails to work with AWS Private Link endpoint
> ---
>
> Key: NIFI-5456
> URL: https://issues.apache.org/jira/browse/NIFI-5456
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0, 1.7.1
> Environment: RedHat 6
>Reporter: Ariel Godinez
>Priority: Major
>  Labels: easyfix
>
> NiFi version: 1.6.0
> PutKinesisStream fails to put due to invalid signing information when using 
> an AWS Private Link as the endpoint override URL. The endpoint override URL 
> pattern for private links is like below along with the error that NiFi 
> outputs when we attempt to use this type of URL as the 'Endpoint Override 
> URL' property value.
> Endpoint Override URL: 
> [https://vpce-|https://vpce-/].kinesis.us-east-2.vpce.amazonaws.com
> ERROR [Timer-Driven Process Thread-11] "o.a.n.p.a.k.stream.PutKinesisStream" 
> PutKinesisStream[id=4c314e25-0164-1000--9bd79c77] Failed to publish 
> due to exception com.amazonaws.services.kinesis.model.AmazonKinesisException: 
> Credential should be scoped to a valid region, not 'vpce'.  (Service: 
> AmazonKinesis; Status Code: 400; Error Code: InvalidSignatureException; 
> Request ID: 6330b83c-a64e-4acf-b892-a505621cf78e) flowfiles 
> [StandardFlowFileRecord[uuid=ba299cec-7cbf-4750-a766-c348b5cd9c73,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1532469012962-1, 
> container=content002, section=1], offset=2159750, 
> length=534625],offset=0,name=900966573101260,size=534625]]
>  
> It looks like 'vpce' is being extracted from the url as the region name when 
> it should be getting 'us-east-2'. We were able to get this processor to work 
> correctly by explicitly passing in the region and service using 
> 'setEndpoint(String endpoint, String serviceName, String regionId)' instead 
> of 'setEndpoint(String endpoint)' in 
> 'nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/AbstractAWSProcessor.java'
>  line 289



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5456) PutKinesisStream fails to put due to invalid signing information

2018-07-25 Thread Randy Bovay (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16556294#comment-16556294
 ] 

Randy Bovay commented on NIFI-5456:
---

While we have only tested with 1.6.0 and 1.7.1, I anticipate this will fail on 
all versions of NiFi.

May want to title this one as 'PutKinesisStream - Fails to work with Private 
Link'
I would also bump the Priority higher as we cannot use this processor w/o 
re-compiling it ourselves to make this work.

> PutKinesisStream fails to put due to invalid signing information
> 
>
> Key: NIFI-5456
> URL: https://issues.apache.org/jira/browse/NIFI-5456
> Project: Apache NiFi
>  Issue Type: Bug
> Environment: RedHat 6
>Reporter: Ariel Godinez
>Priority: Minor
>  Labels: easyfix
>
> NiFi version: 1.6.0
> PutKinesisStream fails to put due to invalid signing information when using 
> an AWS Private Link as the endpoint override URL. The endpoint override URL 
> pattern for private links is like below along with the error that NiFi 
> outputs when we attempt to use this type of URL as the 'Endpoint Override 
> URL' property value.
> Endpoint Override URL: 
> [https://vpce-|https://vpce-/].kinesis.us-east-2.vpce.amazonaws.com
> ERROR [Timer-Driven Process Thread-11] "o.a.n.p.a.k.stream.PutKinesisStream" 
> PutKinesisStream[id=4c314e25-0164-1000--9bd79c77] Failed to publish 
> due to exception com.amazonaws.services.kinesis.model.AmazonKinesisException: 
> Credential should be scoped to a valid region, not 'vpce'.  (Service: 
> AmazonKinesis; Status Code: 400; Error Code: InvalidSignatureException; 
> Request ID: 6330b83c-a64e-4acf-b892-a505621cf78e) flowfiles 
> [StandardFlowFileRecord[uuid=ba299cec-7cbf-4750-a766-c348b5cd9c73,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1532469012962-1, 
> container=content002, section=1], offset=2159750, 
> length=534625],offset=0,name=900966573101260,size=534625]]
>  
> It looks like 'vpce' is being extracted from the url as the region name when 
> it should be getting 'us-east-2'. We were able to get this processor to work 
> correctly by explicitly passing in the region and service using 
> 'setEndpoint(String endpoint, String serviceName, String regionId)' instead 
> of 'setEndpoint(String endpoint)' in 
> 'nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/AbstractAWSProcessor.java'
>  line 289



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5205) NiFi SysAdmin Guide max.appendable.size Default Value

2018-05-17 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479471#comment-16479471
 ] 

Randy Bovay commented on NIFI-5205:
---

It appears this was 10MB before, so it's been decreased to 1MB.  Interesting.

I think the 'bug' is that the website is not up to date with the 1.6 
properties.  I don't believe that's restricted to just this 1 property either.  

> NiFi SysAdmin Guide max.appendable.size Default Value
> -
>
> Key: NIFI-5205
> URL: https://issues.apache.org/jira/browse/NIFI-5205
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.6.0
>Reporter: Michael
>Assignee: Andrew Lim
>Priority: Minor
> Fix For: 1.6.0
>
> Attachments: Screen Shot 2018-05-16 at 3.56.48 PM.png
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Documentation for nifi.content.claim.max.appendable.size does not match 
> default value of 1 MB in nifi.properties for 1.6.0.
> Keep in mind that I am making the assumption that NiFi defaults to this same 
> value of 1 MB when it starts up. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4508) AMQP Processor that uses basicConsume

2018-05-17 Thread Randy Bovay (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Bovay updated NIFI-4508:
--
Affects Version/s: 1.7.0
   1.4.0
   1.5.0
   1.6.0

> AMQP Processor that uses basicConsume
> -
>
> Key: NIFI-4508
> URL: https://issues.apache.org/jira/browse/NIFI-4508
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0
>Reporter: Randy Bovay
>Priority: Major
>
> Due to poor performance of the AMQP Processor, we need to be able to have a 
> basicConsume-based interface to RabbitMQ (sketched below).
> https://community.hortonworks.com/questions/66799/consumeamqp-performance-issue-less-than-50-msgs-se.html
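
A minimal sketch of the push-based basicConsume subscription being asked for, 
using the RabbitMQ Java client; it is illustrative only and not NiFi code.

{code:java}
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

import java.io.IOException;

// Push-based subscription (basicConsume) instead of polling with basicGet.
class BasicConsumeSketch {
    void subscribe(final Channel channel, final String queueName) throws IOException {
        final boolean autoAck = false; // ack manually once the message has been handled
        channel.basicConsume(queueName, autoAck, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(final String consumerTag, final Envelope envelope,
                                       final AMQP.BasicProperties properties,
                                       final byte[] body) throws IOException {
                // Hand the message off (e.g. to an internal queue), then confirm it.
                channel.basicAck(envelope.getDeliveryTag(), false);
            }
        });
    }
}
{code}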



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-1741) Help Guide should Say what RELEASE capabilities are available for.

2018-05-17 Thread Randy Bovay (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Bovay updated NIFI-1741:
--
Affects Version/s: 1.7.0
   1.6.0

> Help Guide should Say what RELEASE capabilities are available for.
> --
>
> Key: NIFI-1741
> URL: https://issues.apache.org/jira/browse/NIFI-1741
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Documentation & Website
>Affects Versions: 0.6.0, 0.5.1, 0.6.1, 1.6.0, 1.7.0
>Reporter: Randy Bovay
>Priority: Minor
>
> As new capabilities are released, and people are using older versions, it 
> would be good to see whether the version of NiFi being used supports a 
> particular function.
> e.g., getDelimitedFields is not available until 0.6.  The website doesn't 
> reflect that, so someone may start to develop against it without realizing 
> they are on 0.4 or 0.5.
> https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4508) AMQP Processor that uses basicConsume

2018-02-19 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369556#comment-16369556
 ] 

Randy Bovay commented on NIFI-4508:
---

bump!  Would be good to get [~msclarke] to think about this maybe.

> AMQP Processor that uses basicConsume
> -
>
> Key: NIFI-4508
> URL: https://issues.apache.org/jira/browse/NIFI-4508
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Randy Bovay
>Priority: Major
>
> Due to poor performance of the AMQP Processor, we need to be able to have a 
> basicConsume based interface to RabbitMQ.
> https://community.hortonworks.com/questions/66799/consumeamqp-performance-issue-less-than-50-msgs-se.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-1625) ExtractText - Description of Capture Group is not clear

2017-10-19 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212122#comment-16212122
 ] 

Randy Bovay commented on NIFI-1625:
---

Rekha,  I would agree, let's close this out

> ExtractText - Description of Capture Group is not clear
> ---
>
> Key: NIFI-1625
> URL: https://issues.apache.org/jira/browse/NIFI-1625
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 0.4.1
>Reporter: Randy Bovay
>Priority: Trivial
>
> ExtractText ONLY captures the first 1024 (default) characters.
> The help text says this applies to the capture group values.  It's not clear 
> that this is on the 'input', but leads one to believe it's on the actual new 
> properties that are being captured.
>  
> Better wording should be 
> "Specifies the Maximum length of the input record that will be evaluated for 
> the capture.  The input record will only be evaluated Up TO this length, and 
> the rest will be ignored"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3348) G1 Values Not staying in correct column on refresh. In Cluster UI JVM Tab.

2017-10-19 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212110#comment-16212110
 ] 

Randy Bovay commented on NIFI-3348:
---

Pierre,
Yes, that looks fine now.  Can close this out as fixed.

> G1 Values Not staying in correct column on refresh. In Cluster UI JVM Tab.
> --
>
> Key: NIFI-3348
> URL: https://issues.apache.org/jira/browse/NIFI-3348
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.1
>Reporter: Randy Bovay
>Priority: Minor
>
> The Values in the G1 Old Generation and G1 Young Generation will flip back 
> and forth in each column for the node as you hit refresh from the UI.
> These are in the Cluster UI, JVM Tab.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4508) AMQP Processor that uses basicConsume

2017-10-19 Thread Randy Bovay (JIRA)
Randy Bovay created NIFI-4508:
-

 Summary: AMQP Processor that uses basicConsume
 Key: NIFI-4508
 URL: https://issues.apache.org/jira/browse/NIFI-4508
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.3.0
Reporter: Randy Bovay


Due to poor performance of the AMQP Processor, we need to be able to have a 
basicConsume based interface to RabbitMQ.

https://community.hortonworks.com/questions/66799/consumeamqp-performance-issue-less-than-50-msgs-se.html




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4507) consumeAMQP Expression Language support

2017-10-19 Thread Randy Bovay (JIRA)
Randy Bovay created NIFI-4507:
-

 Summary: consumeAMQP Expression Language support
 Key: NIFI-4507
 URL: https://issues.apache.org/jira/browse/NIFI-4507
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.3.0
Reporter: Randy Bovay


The ConsumeAMQP processor does not have Expression Language support.
Would like to add this so that I can consume from multiple queues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4427) Default for FlowFile's filename should be the FlowFile's UUID

2017-09-28 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184338#comment-16184338
 ] 

Randy Bovay commented on NIFI-4427:
---

!Multiple Duplicate Filenames.png!

[2017-09-28 11:39:28,426] ERROR [Timer-Driven Process Thread-31] 
"o.a.nifi.processors.script.ExecuteScript" 
ExecuteScript[id=23b6e4e6-015c-1000-bf56-01369e55cfd2] Failed to process 
session due to org.apache.nifi.processor.exception.ProcessException: 
javax.script.ScriptException: 
org.apache.nifi.controller.repository.ContentNotFoundException: 
org.apache.nifi.controller.repository.ContentNotFoundException: Could not find 
content for StandardContentClaim 
[resourceClaim=StandardResourceClaim[id=1506450019018-35994, 
container=content001, section=154], offset=1007892, length=1114] in 

[jira] [Updated] (NIFI-4427) Default for FlowFile's filename should be the FlowFile's UUID

2017-09-28 Thread Randy Bovay (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Bovay updated NIFI-4427:
--
Attachment: Multiple Duplicate Filenames.png

> Default for FlowFile's filename should be the FlowFile's UUID
> -
>
> Key: NIFI-4427
> URL: https://issues.apache.org/jira/browse/NIFI-4427
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
> Attachments: Multiple Duplicate Filenames.png
>
>
> Currently, when a new FlowFile is created without any parents, the filename 
> is set to System.nanoTime(). This is likely to result in filename collisions 
> when operating at a high rate and the extra System call could be avoided. 
> Since we are already generating a unique UUID we should just use that as the 
> filename.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4427) Default for FlowFile's filename should be the FlowFile's UUID

2017-09-27 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182804#comment-16182804
 ] 

Randy Bovay commented on NIFI-4427:
---

How would we identify that we are having filename collisions?
I do have a high volume system, so would like to be able to identify that.

> Default for FlowFile's filename should be the FlowFile's UUID
> -
>
> Key: NIFI-4427
> URL: https://issues.apache.org/jira/browse/NIFI-4427
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>
> Currently, when a new FlowFile is created without any parents, the filename 
> is set to System.nanoTime(). This is likely to result in filename collisions 
> when operating at a high rate and the extra System call could be avoided. 
> Since we are already generating a unique UUID we should just use that as the 
> filename.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4092) ClassCastException Warning during cluster sync

2017-08-31 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150042#comment-16150042
 ] 

Randy Bovay commented on NIFI-4092:
---

[~msclarke] May want to see this.

> ClassCastException Warning during cluster sync
> --
>
> Key: NIFI-4092
> URL: https://issues.apache.org/jira/browse/NIFI-4092
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Joseph Gresock
>
> This is the stack trace I receive, though I'm not sure it affects anything, 
> since the cluster is eventually able to connect.
> 2017-06-20 13:46:44,680 WARN [Reconnect ip-172-31-55-36.ec2.internal:8443] 
> o.a.n.c.c.node.NodeClusterCoordinator Problem encountered issuing 
> reconnection request to node ip-172-31-55-36.ec2.internal:8443
> java.io.IOException: 
> org.apache.nifi.controller.serialization.FlowSerializationException: 
> java.lang.ClassCastException: 
> org.apache.nifi.web.api.dto.TemplateDTO$JaxbAccessorM_getDescription_setDescription_java_lang_String
>  cannot be cast to com.sun.xml.internal.bind.v2.runtime.reflect.Accessor
> at 
> org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.save(StandardXMLFlowConfigurationDAO.java:143)
> at 
> org.apache.nifi.controller.StandardFlowService.createDataFlowFromController(StandardFlowService.java:607)
> at 
> org.apache.nifi.controller.StandardFlowService.createDataFlowFromController(StandardFlowService.java:100)
> at 
> org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator$2.run(NodeClusterCoordinator.java:706)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: 
> org.apache.nifi.controller.serialization.FlowSerializationException: 
> java.lang.ClassCastException: 
> org.apache.nifi.web.api.dto.TemplateDTO$JaxbAccessorM_getDescription_setDescription_java_lang_String
>  cannot be cast to com.sun.xml.internal.bind.v2.runtime.reflect.Accessor
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.addTemplate(StandardFlowSerializer.java:546)
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.addProcessGroup(StandardFlowSerializer.java:203)
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.addProcessGroup(StandardFlowSerializer.java:187)
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.addProcessGroup(StandardFlowSerializer.java:187)
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.serialize(StandardFlowSerializer.java:97)
> at 
> org.apache.nifi.controller.FlowController.serialize(FlowController.java:1544)
> at 
> org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.save(StandardXMLFlowConfigurationDAO.java:141)
> ... 4 common frames omitted
> Caused by: java.lang.ClassCastException: 
> org.apache.nifi.web.api.dto.TemplateDTO$JaxbAccessorM_getDescription_setDescription_java_lang_String
>  cannot be cast to com.sun.xml.internal.bind.v2.runtime.reflect.Accessor
> at 
> com.sun.xml.internal.bind.v2.runtime.reflect.opt.OptimizedAccessorFactory.instanciate(OptimizedAccessorFactory.java:190)
> at 
> com.sun.xml.internal.bind.v2.runtime.reflect.opt.OptimizedAccessorFactory.get(OptimizedAccessorFactory.java:129)
> at 
> com.sun.xml.internal.bind.v2.runtime.reflect.Accessor$GetterSetterReflection.optimize(Accessor.java:388)
> at 
> com.sun.xml.internal.bind.v2.runtime.property.SingleElementLeafProperty.<init>(SingleElementLeafProperty.java:77)
> at sun.reflect.GeneratedConstructorAccessor435.newInstance(Unknown 
> Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> com.sun.xml.internal.bind.v2.runtime.property.PropertyFactory.create(PropertyFactory.java:113)
> at 
> com.sun.xml.internal.bind.v2.runtime.ClassBeanInfoImpl.<init>(ClassBeanInfoImpl.java:166)
> at 
> com.sun.xml.internal.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:488)
> at 
> com.sun.xml.internal.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:305)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4023) WriteAheadProvenanceRepository indexing and query failure under high rate stress testing

2017-08-10 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122855#comment-16122855
 ] 

Randy Bovay commented on NIFI-4023:
---

We are experiencing this as well.  Good to see there is already an issue 
tracking this.  
Happy to test with you more to help find a solution to this.

> WriteAheadProvenanceRepository indexing and query failure under high rate 
> stress testing
> 
>
> Key: NIFI-4023
> URL: https://issues.apache.org/jira/browse/NIFI-4023
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Joseph Witt
>
> 2017-06-06 00:32:35,995 INFO [pool-10-thread-1] 
> org.wali.MinimalLockingWriteAheadLog 
> org.wali.MinimalLockingWriteAheadLog@5ce7ab6f checkpointed with 5737 Records 
> and 0 Swap Files in 467 milliseconds (Stop-the-world time = 172 milliseconds, 
> Clear Edit Logs time = 137 millis), max Transaction ID 5739
> 2017-06-06 00:32:35,996 INFO [pool-10-thread-1] 
> o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile 
> Repository with 5737 records in 467 milliseconds
> 2017-06-06 00:33:35,418 ERROR [Index Provenance Events-2] 
> o.a.n.p.index.lucene.EventIndexTask Failed to index Provenance Events
> org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
> NativeFSLock@/Users/jwitt/build-verify/nifi-1.3.0/nifi-assembly/target/nifi-1.3.0-bin/nifi-1.3.0/provenance_repository/index-1496723454612/write.lock
>   at org.apache.lucene.store.Lock.obtain(Lock.java:89)
>   at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:755)
>   at 
> org.apache.nifi.provenance.lucene.SimpleIndexManager.createWriter(SimpleIndexManager.java:198)
>   at 
> org.apache.nifi.provenance.lucene.SimpleIndexManager.borrowIndexWriter(SimpleIndexManager.java:227)
>   at 
> org.apache.nifi.provenance.index.lucene.EventIndexTask.index(EventIndexTask.java:184)
>   at 
> org.apache.nifi.provenance.index.lucene.EventIndexTask.run(EventIndexTask.java:104)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> 2017-06-06 00:33:36,420 ERROR [Index Provenance Events-1] 
> o.a.n.p.index.lucene.EventIndexTask Failed to index Provenance Events
> org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
> NativeFSLock@/Users/jwitt/build-verify/nifi-1.3.0/nifi-assembly/target/nifi-1.3.0-bin/nifi-1.3.0/provenance_repository/index-1496723454612/write.lock
>   at org.apache.lucene.store.Lock.obtain(Lock.java:89)
>   at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:755)
>   at 
> org.apache.nifi.provenance.lucene.SimpleIndexManager.createWriter(SimpleIndexManager.java:198)
>   at 
> org.apache.nifi.provenance.lucene.SimpleIndexManager.borrowIndexWriter(SimpleIndexManager.java:227)
>   at 
> org.apache.nifi.provenance.index.lucene.EventIndexTask.index(EventIndexTask.java:184)
>   at 
> org.apache.nifi.provenance.index.lucene.EventIndexTask.run(EventIndexTask.java:104)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> 2017-06-06 00:33:37,425 ERROR [Index Provenance Events-2] 
> o.a.n.p.index.lucene.EventIndexTask Failed to index Provenance Events
> org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
> NativeFSLock@/Users/jwitt/build-verify/nifi-1.3.0/nifi-assembly/target/nifi-1.3.0-bin/nifi-1.3.0/provenance_repository/index-1496723454612/write.lock
>   at org.apache.lucene.store.Lock.obtain(Lock.java:89)
>   at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:755)
>   at 
> org.apache.nifi.provenance.lucene.SimpleIndexManager.createWriter(SimpleIndexManager.java:198)
>   at 
> org.apache.nifi.provenance.lucene.SimpleIndexManager.borrowIndexWriter(SimpleIndexManager.java:227)
>   at 
> org.apache.nifi.provenance.index.lucene.EventIndexTask.index(EventIndexTask.java:184)
>   at 
> org.apache.nifi.provenance.index.lucene.EventIndexTask.run(EventIndexTask.java:104)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at 

[jira] [Created] (NIFI-3474) GeoEnrich - stop using exceptions for mainline flow control.

2017-02-13 Thread Randy Bovay (JIRA)
Randy Bovay created NIFI-3474:
-

 Summary: GeoEnrich - stop using exceptions for mainline flow 
control.
 Key: NIFI-3474
 URL: https://issues.apache.org/jira/browse/NIFI-3474
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Randy Bovay
Priority: Minor


Modification to the code to stop using exceptions for mainline flow control. 
Specifically, we don't want to throw an exception simply because an address was 
not found (see the sketch below).
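
A generic illustration of the requested pattern: signal "address not found" 
through the return value instead of throwing an exception. The lookup map below 
is a stand-in for the GeoIP database; this is not the actual GeoEnrichIP or 
MaxMind reader code.

{code:java}
import java.util.Map;
import java.util.Optional;

// Illustrative only -- a not-found address is an ordinary result, not an exception.
class GeoLookupSketch {
    private final Map<String, String> database; // stand-in for the GeoIP database

    GeoLookupSketch(final Map<String, String> database) {
        this.database = database;
    }

    Optional<String> lookup(final String ipAddress) {
        return Optional.ofNullable(database.get(ipAddress));
    }

    String route(final String ipAddress) {
        final Optional<String> city = lookup(ipAddress);
        // Mainline flow control via the return value: route, don't throw.
        return city.isPresent() ? "found: " + city.get() : "not found";
    }
}
{code}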



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-2661) Create enrichment processor supporting GeoLite ASN

2017-02-13 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864516#comment-15864516
 ] 

Randy Bovay commented on NIFI-2661:
---

Agreed.  And Accuracy Radius.
https://www.maxmind.com/en/geoip2-precision-city-service

> Create enrichment processor supporting GeoLite ASN
> --
>
> Key: NIFI-2661
> URL: https://issues.apache.org/jira/browse/NIFI-2661
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre
>Assignee: Andre
>
> Current EnrichGeoIP does not support MaxMind's GeoLite ASN API and database.
> It would be great to have a processor capable of doing so.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-2008) GeoEnrichIP - Routing IPs with Null attributes as Found

2017-02-13 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864506#comment-15864506
 ] 

Randy Bovay commented on NIFI-2008:
---

Joe,
If you can get to an older MaxMind data set, this IP will show you the example: 
216.151.180.197
It's not in the online MaxMind DB version now, but since we still have the 
MaxMind version from 2Q2016, this example works.

Now, in the latest release notes 
(https://dev.maxmind.com/geoip/geoip2/release-notes/), MaxMind mentions 
centering the Lat/Lon in the middle of the lowest resolution that comes back.  
That may negate this issue, but I've yet to test it out.
If it does, then I'd hope to be able to use their accuracyRadius as well.
-
Note: The GeoLite2 City database now includes Accuracy Radius data. For the 
GeoLite2 City CSV database, a new accuracy_radius column will be appended to 
the IPv4 and IPv6 blocks files. Please test your integration to ensure 
compatibility before updating the GeoLite2 City CSV database. 
Learn more.
https://www.maxmind.com/en/geoip2-precision-city-service


> GeoEnrichIP - Routing IPs with Null attributes as Found
> ---
>
> Key: NIFI-2008
> URL: https://issues.apache.org/jira/browse/NIFI-2008
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 0.6.1
>Reporter: Randy Bovay
>
> I have an IP that is being found in the MaxMind database, but has no values 
> returned. (See below to check yourself)
> NiFi is routing this to 'found', but since there are no Attributes populated, 
> I cannot work with the return values.
> IP: 216.151.180.197
> https://www.maxmind.com/en/geoip-demo



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-2008) GeoEnrichIP - Routing IPs with Null attributes as Found

2017-02-13 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864394#comment-15864394
 ] 

Randy Bovay commented on NIFI-2008:
---

Pierre, I suspect I could eke out a way to check whether each attribute is 
populated before I use it.
I see that at times the City/Country/ISO_Code attributes are created but not 
populated, whereas the Latitude and Longitude aren't created at all.
So, I would need a few extra steps to create those.

I'd like to hear your feedback though.

> GeoEnrichIP - Routing IPs with Null attributes as Found
> ---
>
> Key: NIFI-2008
> URL: https://issues.apache.org/jira/browse/NIFI-2008
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 0.6.1
>Reporter: Randy Bovay
>
> I have an IP that is being found in the MaxMind database, but has no values 
> returned. (See below to check yourself)
> NiFi is routing this to 'found', but since there are no Attributes populated, 
> I cannot work with the return values.
> IP: 216.151.180.197
> https://www.maxmind.com/en/geoip-demo



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3095) PutElasticSearch (2.x Bulk API) - Add expression language support to 'Elasticsearch Hosts' and 'Cluster Name' fields

2016-11-23 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15691221#comment-15691221
 ] 

Randy Bovay commented on NIFI-3095:
---

I've updated the title to reflect adding this to the existing 2.x processor.

> PutElasticSearch (2.x Bulk API) - Add expression language support to 
> 'Elasticsearch Hosts' and 'Cluster Name' fields
> 
>
> Key: NIFI-3095
> URL: https://issues.apache.org/jira/browse/NIFI-3095
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Randy Bovay
>
> Being able to use Expression Language in the Hosts and ClusterName 
> fields.  
> This way we can provide a variable input to the service, and have it differ 
> in our different data centers and environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3095) PutElasticSearch (2.x Bulk API) - Add expression language support to 'Elasticsearch Hosts' and 'Cluster Name' fields

2016-11-23 Thread Randy Bovay (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Bovay updated NIFI-3095:
--
Summary: PutElasticSearch (2.x Bulk API) - Add expression language support 
to 'Elasticsearch Hosts' and 'Cluster Name' fields  (was: Bulk API - Add 
expression language support to 'Elasticsearch Hosts' and 'Cluster Name' fields)

> PutElasticSearch (2.x Bulk API) - Add expression language support to 
> 'Elasticsearch Hosts' and 'Cluster Name' fields
> 
>
> Key: NIFI-3095
> URL: https://issues.apache.org/jira/browse/NIFI-3095
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Randy Bovay
>
> Being able to use Expression Language in the Hosts and ClusterName 
> fields.  
> This way we can provide a variable input to the service, and have it differ 
> in our different data centers and environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3094) ElasticSearch processor - Bulk API Should accept a list of Hostnames

2016-11-23 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15691160#comment-15691160
 ] 

Randy Bovay commented on NIFI-3094:
---

Matt, I retested and see that it's working that way.

There is good behavior for unavailable nodes too.  If a node isn't reachable, 
the processor still enables but doesn't add it to the list, and if all are 
unavailable, it gives a good error message but still enables.

Thanks, let's close this out.

> ElasticSearch processor - Bulk API Should accept a list of Hostnames
> 
>
> Key: NIFI-3094
> URL: https://issues.apache.org/jira/browse/NIFI-3094
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Randy Bovay
>
> Transport clients should be able to pass in a list of hosts.
> This allows us to have a primary and a list of backup servers in that list, 
> effectively reducing our need for a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-23 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15691030#comment-15691030
 ] 

Randy Bovay commented on NIFI-3011:
---

Joe,
Thanks for adjusting those items to Improvements.
NIFI-3094 | Bulk API Should accept a list of Hostnames
NIFI-3095 | Bulk API - Add expression language support to 'Elasticsearch Hosts' 
and 'Cluster Name' fields

> Support Elasticsearch 5.0 for Put/FetchElasticsearch processors
> ---
>
> Key: NIFI-3011
> URL: https://issues.apache.org/jira/browse/NIFI-3011
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
> Fix For: 1.1.0
>
>
> Now that Elastic has released a new major version (5.0) of Elasticsearch, the 
> Put/FetchElasticsearch processors would need to be upgraded (or duplicated) 
> as the major version of the transport client needs to match the major version 
> of the Elasticsearch cluster.
> If upgrade is selected, then Put/FetchES will no longer work with 
> Elasticsearch 2.x clusters, so in that case users would want to switch to the 
> Http versions of those processors. However this might not be desirable (due 
> to performance concerns with the HTTP API vs the transport API), so care must 
> be taken when deciding whether to upgrade the existing processors or create 
> new ones.
> Creating new versions of these processors (to use the 5.0 transport client) 
> will also take some consideration, as it is unlikely the different versions 
> can coexist in the same NAR due to classloading issues (multiple versions of 
> JARs containing the same class names, e.g.). It may be necessary to create an 
> "elasticsearch-5.0" version of the NAR, containing only the new versions of 
> these processors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-23 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15690965#comment-15690965
 ] 

Randy Bovay commented on NIFI-3011:
---

There are 2 enhancements that would improve this processor.
1) We'd like to have Expression Language support for the Elasticsearch Hosts and 
Cluster Name fields. This is good for developers and allows us to vary our 
targets per data center and environment; otherwise we would have to deploy code 
and then manually update it.
2) The bulk transport clients should also allow configuring multiple hosts in 
the Elasticsearch Hosts field. This lets us have a primary and several secondary 
servers, so that if one goes offline the processor will connect to a backup to 
retrieve the routing table information.
This link shows how to set multiple addresses (see the sketch below): 
https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/transport-client.html
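
A sketch of registering several hosts with the Elasticsearch 5.x transport 
client from the linked documentation; the host names, port, and cluster name 
below are placeholders.

{code:java}
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative only -- not the NiFi processor code.
class MultiHostTransportClientSketch {
    TransportClient build() throws UnknownHostException {
        final Settings settings = Settings.builder()
                .put("cluster.name", "my-cluster") // would come from the Cluster Name property
                .build();
        // Primary plus backups: the client keeps working as long as one host is reachable.
        return new PreBuiltTransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("es-primary"), 9300))
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("es-backup-1"), 9300))
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("es-backup-2"), 9300));
    }
}
{code}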

> Support Elasticsearch 5.0 for Put/FetchElasticsearch processors
> ---
>
> Key: NIFI-3011
> URL: https://issues.apache.org/jira/browse/NIFI-3011
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
> Fix For: 1.1.0
>
>
> Now that Elastic has released a new major version (5.0) of Elasticsearch, the 
> Put/FetchElasticsearch processors would need to be upgraded (or duplicated) 
> as the major version of the transport client needs to match the major version 
> of the Elasticsearch cluster.
> If upgrade is selected, then Put/FetchES will no longer work with 
> Elasticsearch 2.x clusters, so in that case users would want to switch to the 
> Http versions of those processors. However this might not be desirable (due 
> to performance concerns with the HTTP API vs the transport API), so care must 
> be taken when deciding whether to upgrade the existing processors or create 
> new ones.
> Creating new versions of these processors (to use the 5.0 transport client) 
> will also take some consideration, as it is unlikely the different versions 
> can coexist in the same NAR due to classloading issues (multiple versions of 
> JARs containing the same class names, e.g.). It may be necessary to create an 
> "elasticsearch-5.0" version of the NAR, containing only the new versions of 
> these processors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1594) Add option to bulk using Index or Update to PutElasticsearch

2016-11-23 Thread Randy Bovay (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15690963#comment-15690963
 ] 

Randy Bovay commented on NIFI-1594:
---

There are 2 enhancements that would improve this processor.
1) We'd like to have Expression Language support for the Elasticsearch Hosts and 
Cluster Name fields.  This is good for developers and allows us to vary our 
targets per data center and environment; otherwise we would have to deploy 
code and then manually update it.

2) The bulk transport clients should also allow configuring multiple hosts in 
the Elasticsearch Hosts field.  This lets us have a primary and several 
secondary servers, so that if one goes offline the processor will connect to a 
backup to retrieve the routing table information.
This link shows how to set multiple addresses:  
https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/transport-client.html

> Add option to bulk using Index or Update to PutElasticsearch
> 
>
> Key: NIFI-1594
> URL: https://issues.apache.org/jira/browse/NIFI-1594
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: João Henrique Ferreira de Freitas
>Priority: Minor
> Fix For: 1.0.0, 0.7.0, 1.0.0-Beta
>
>
> I have a use case where two flowfiles needs to be write using 
> PutElasticsearch. Both will write to the same document but in different 
> properties. 
> The proposal is to let the user choice if an Update operation or Index is 
> needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3095) Bulk API - Add expression language support to 'Elasticsearch Hosts' and 'Cluster Name' fields

2016-11-23 Thread Randy Bovay (JIRA)
Randy Bovay created NIFI-3095:
-

 Summary: Bulk API - Add expression language support to 
'Elasticsearch Hosts' and 'Cluster Name' fields
 Key: NIFI-3095
 URL: https://issues.apache.org/jira/browse/NIFI-3095
 Project: Apache NiFi
  Issue Type: Sub-task
Reporter: Randy Bovay


Being able to use Expression Language in the Hosts and ClusterName fields.  
This way we can provide a variable input to the service, and have it differ in 
our different data centers and environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3094) Bulk API Should accept a list of Hostnames

2016-11-23 Thread Randy Bovay (JIRA)
Randy Bovay created NIFI-3094:
-

 Summary: Bulk API Should accept a list of Hostnames
 Key: NIFI-3094
 URL: https://issues.apache.org/jira/browse/NIFI-3094
 Project: Apache NiFi
  Issue Type: Sub-task
Reporter: Randy Bovay


Transport clients should be able to pass in a list of hosts.
This allows us to have a primary and a list of backup servers in that list, 
effectively reducing our need for a load balancer.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)