[GitHub] nifi issue #2180: Added GetMongoAggregation to support running Mongo aggrega...

2018-02-01 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2180
  
There's a conflict now; can you rebase? I will try to take a look as soon 
as I can afterwards...


---


[jira] [Commented] (NIFI-4731) BigQuery processors

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349759#comment-16349759
 ] 

ASF GitHub Bot commented on NIFI-4731:
--

Github user nologic commented on the issue:

https://github.com/apache/nifi/pull/2420
  
@MikeThomsen seems like a lot of violations, what's the best way to clear 
these up?


> BigQuery processors
> ---
>
> Key: NIFI-4731
> URL: https://issues.apache.org/jira/browse/NIFI-4731
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mikhail Sosonkin
>Priority: Major
>
> NiFi should have processors for putting data into BigQuery (Streaming and 
> Batch).
> Initial working processors can be found in this repository: 
> https://github.com/nologic/nifi/tree/NIFI-4731/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/bigquery
> I'd like to get them into NiFi proper.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2420: NIFI-4731

2018-02-01 Thread nologic
Github user nologic commented on the issue:

https://github.com/apache/nifi/pull/2420
  
@MikeThomsen seems like a lot of violations, what's the best way to clear 
these up?


---


[jira] [Commented] (NIFI-2630) Allow PublishJMS processor to create TextMessages

2018-02-01 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349370#comment-16349370
 ] 

Michael Moser commented on NIFI-2630:
-

This patch no longer applies cleanly, so I took the liberty of adding a commit 
on top of the patch to clean it up.  I will submit a PR after NIFI-4834 is 
complete, because that one touches the same files as this one.

> Allow PublishJMS processor to create TextMessages
> -
>
> Key: NIFI-2630
> URL: https://issues.apache.org/jira/browse/NIFI-2630
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.0
>Reporter: James Anderson
>Assignee: Michael Moser
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch
>
>
> Create a new configuration option for PublishJMS that allows the processor to 
> be configured to emit instances of TextMessage as well as BytesMessage.
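The choice this option introduces can be sketched roughly as follows. This is a minimal illustration, not the actual PublishJMS code: the class, enum, and `bodyFor` helper are hypothetical names modeling how a processor property would select between a JMS TextMessage body (a String) and a BytesMessage body (raw bytes).

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch only: models choosing between a JMS TextMessage
// (String body) and a BytesMessage (byte[] body) from a processor
// property. Not the actual PublishJMS implementation.
public class MessageBodySketch {
    enum MessageBodyType { TEXT, BYTES }

    // Returns the body a TextMessage would carry (a String decoded from
    // the FlowFile content) or the raw bytes a BytesMessage would carry.
    static Object bodyFor(byte[] flowFileContent, MessageBodyType type) {
        return type == MessageBodyType.TEXT
                ? new String(flowFileContent, StandardCharsets.UTF_8)
                : flowFileContent;
    }

    public static void main(String[] args) {
        byte[] content = "hello".getBytes(StandardCharsets.UTF_8);
        System.out.println(bodyFor(content, MessageBodyType.TEXT));
        System.out.println(bodyFor(content, MessageBodyType.BYTES) instanceof byte[]);
    }
}
```

In the real processor the TEXT branch would feed `Session.createTextMessage(...)` and the BYTES branch `Session.createBytesMessage(...)`; the sketch only shows the payload-shaping decision.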





[jira] [Assigned] (NIFI-2630) Allow PublishJMS processor to create TextMessages

2018-02-01 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser reassigned NIFI-2630:
---

Assignee: Michael Moser

> Allow PublishJMS processor to create TextMessages
> -
>
> Key: NIFI-2630
> URL: https://issues.apache.org/jira/browse/NIFI-2630
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.0
>Reporter: James Anderson
>Assignee: Michael Moser
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch
>
>
> Create a new configuration option for PublishJMS that allows the processor to 
> be configured to emit instances of TextMessage as well as BytesMessage.





[jira] [Updated] (NIFI-2630) Allow PublishJMS processor to create TextMessages

2018-02-01 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-2630:

Fix Version/s: (was: 0.6.0)

> Allow PublishJMS processor to create TextMessages
> -
>
> Key: NIFI-2630
> URL: https://issues.apache.org/jira/browse/NIFI-2630
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.0
>Reporter: James Anderson
>Assignee: Michael Moser
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch
>
>
> Create a new configuration option for PublishJMS that allows the processor to 
> be configured to emit instances of TextMessage as well as BytesMessage.





[GitHub] nifi pull request #2181: NIFI-4428: - Implement PutDruid Processor and Contr...

2018-02-01 Thread vakshorton
Github user vakshorton closed the pull request at:

https://github.com/apache/nifi/pull/2181


---


[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349316#comment-16349316
 ] 

ASF GitHub Bot commented on NIFI-4428:
--

Github user vakshorton closed the pull request at:

https://github.com/apache/nifi/pull/2181


> Implement PutDruid Processor and Controller
> ---
>
> Key: NIFI-4428
> URL: https://issues.apache.org/jira/browse/NIFI-4428
> Project: Apache NiFi
>  Issue Type: New Feature
>Affects Versions: 1.3.0
>Reporter: Vadim Vaks
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> Implement a PutDruid Processor and Controller using the Tranquility API. This 
> will enable NiFi to index the contents of flow files in Druid. The implementation 
> should also be able to handle late-arriving data (the event timestamp points to a 
> Druid indexing task that has closed, and the segment granularity and grace window 
> period have expired). Late-arriving data is typically dropped. NiFi should allow 
> late-arriving data to be diverted to a FAILED or DROPPED relationship. That 
> would allow late-arriving data to be stored on HDFS or S3 until a re-indexing 
> task can merge it into the correct segment in deep storage.
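The late-data routing described above could look roughly like this. This is a hedged sketch under assumed names: `routeFor`, the window arithmetic, and the relationship strings are illustrative, not the actual PutDruid/Tranquility code.

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative sketch: decide where an event goes based on whether its
// timestamp still falls inside the open indexing window (segment
// granularity plus grace period). All names here are hypothetical.
public class LateDataSketch {
    static String routeFor(Instant eventTime, Instant now,
                           Duration segmentGranularity, Duration gracePeriod) {
        // Events older than (now - granularity - grace) would be rejected by
        // the already-closed Druid indexing task, so divert them to a
        // dropped-style relationship instead of silently discarding them.
        Instant oldestAccepted = now.minus(segmentGranularity).minus(gracePeriod);
        return eventTime.isBefore(oldestAccepted) ? "DROPPED" : "SUCCESS";
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2018-02-01T12:00:00Z");
        Duration granularity = Duration.ofHours(1);
        Duration grace = Duration.ofMinutes(10);
        // In-window event vs. an event well past the grace window.
        System.out.println(routeFor(Instant.parse("2018-02-01T11:00:00Z"), now, granularity, grace)); // SUCCESS
        System.out.println(routeFor(Instant.parse("2018-02-01T10:00:00Z"), now, granularity, grace)); // DROPPED
    }
}
```

A downstream flow could then bind the DROPPED relationship to a PutHDFS or PutS3Object processor to park the events until re-indexing, as the ticket suggests.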





[jira] [Commented] (NIFIREG-135) BucketFlowsApi.create_flow_version returns an incorrect version count

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349280#comment-16349280
 ] 

ASF GitHub Bot commented on NIFIREG-135:


Github user Chaffelson commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/98#discussion_r165489168
  
--- Diff: nifi-registry-framework/src/main/java/org/apache/nifi/registry/service/RegistryService.java ---
@@ -589,8 +589,15 @@ public VersionedFlowSnapshot createFlowSnapshot(final VersionedFlowSnapshot flow
         // update the modified date on the flow
         metadataService.updateFlow(existingFlow);
 
+        // get the updated flow; we need to use "with counts" here so we can return this as a part of the response
+        final FlowEntity updatedFlow = metadataService.getFlowByIdWithSnapshotCounts(snapshotMetadata.getFlowIdentifier());
+        if (updatedFlow == null) {
+            throw new ResourceNotFoundException("Versioned flow does not exist for identifier " + snapshotMetadata.getFlowIdentifier());
+        }
+        final VersionedFlow updatedVersionedFlow = DataModelMapper.map(existingBucket, updatedFlow);
+
--- End diff --

Sounds good.
Now if only I had a maven builder to Docker for NiFi-Registry...


> BucketFlowsApi.create_flow_version returns an incorrect version count
> -
>
> Key: NIFIREG-135
> URL: https://issues.apache.org/jira/browse/NIFIREG-135
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
> Fix For: 0.2.0
>
>
> Using the Python version of the Swagger defined client at:
> [https://github.com/Chaffelson/nipyapi/blob/master/nipyapi/registry/apis/bucket_flows_api.py]
> The VersionedFlow sub-object in the VersionedFlowSnapshot returned by the 
> create_flow_version call does not correctly increment the version_count 
> variable.
> If the Flow is subsequently retrieved using get_flow_version, the 
> version_count is correct.





[GitHub] nifi-registry pull request #98: NIFIREG-135 Fix versionCount for createFlowV...

2018-02-01 Thread Chaffelson
Github user Chaffelson commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/98#discussion_r165489168
  
--- Diff: nifi-registry-framework/src/main/java/org/apache/nifi/registry/service/RegistryService.java ---
@@ -589,8 +589,15 @@ public VersionedFlowSnapshot createFlowSnapshot(final VersionedFlowSnapshot flow
         // update the modified date on the flow
         metadataService.updateFlow(existingFlow);
 
+        // get the updated flow; we need to use "with counts" here so we can return this as a part of the response
+        final FlowEntity updatedFlow = metadataService.getFlowByIdWithSnapshotCounts(snapshotMetadata.getFlowIdentifier());
+        if (updatedFlow == null) {
+            throw new ResourceNotFoundException("Versioned flow does not exist for identifier " + snapshotMetadata.getFlowIdentifier());
+        }
+        final VersionedFlow updatedVersionedFlow = DataModelMapper.map(existingBucket, updatedFlow);
+
--- End diff --

Sounds good.
Now if only I had a maven builder to Docker for NiFi-Registry...


---


[jira] [Commented] (NIFIREG-135) BucketFlowsApi.create_flow_version returns an incorrect version count

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349271#comment-16349271
 ] 

ASF GitHub Bot commented on NIFIREG-135:


Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/98#discussion_r165487223
  
--- Diff: nifi-registry-framework/src/main/java/org/apache/nifi/registry/service/RegistryService.java ---
@@ -589,8 +589,15 @@ public VersionedFlowSnapshot createFlowSnapshot(final VersionedFlowSnapshot flow
         // update the modified date on the flow
         metadataService.updateFlow(existingFlow);
 
+        // get the updated flow; we need to use "with counts" here so we can return this as a part of the response
+        final FlowEntity updatedFlow = metadataService.getFlowByIdWithSnapshotCounts(snapshotMetadata.getFlowIdentifier());
+        if (updatedFlow == null) {
+            throw new ResourceNotFoundException("Versioned flow does not exist for identifier " + snapshotMetadata.getFlowIdentifier());
+        }
+        final VersionedFlow updatedVersionedFlow = DataModelMapper.map(existingBucket, updatedFlow);
+
--- End diff --

For the above block, I considered just modifying the in-scope versionedFlow 
object before returning, but thought it would be better (though more expensive) 
to just re-retrieve the flow, in case we add other fields to flow in the future 
that would be modified upon saving a new flow version.


> BucketFlowsApi.create_flow_version returns an incorrect version count
> -
>
> Key: NIFIREG-135
> URL: https://issues.apache.org/jira/browse/NIFIREG-135
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
> Fix For: 0.2.0
>
>
> Using the Python version of the Swagger defined client at:
> [https://github.com/Chaffelson/nipyapi/blob/master/nipyapi/registry/apis/bucket_flows_api.py]
> The VersionedFlow sub-object in the VersionedFlowSnapshot returned by the 
> create_flow_version call does not correctly increment the version_count 
> variable.
> If the Flow is subsequently retrieved using get_flow_version, the 
> version_count is correct.





[GitHub] nifi-registry pull request #98: NIFIREG-135 Fix versionCount for createFlowV...

2018-02-01 Thread kevdoran
Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/98#discussion_r165487223
  
--- Diff: nifi-registry-framework/src/main/java/org/apache/nifi/registry/service/RegistryService.java ---
@@ -589,8 +589,15 @@ public VersionedFlowSnapshot createFlowSnapshot(final VersionedFlowSnapshot flow
         // update the modified date on the flow
         metadataService.updateFlow(existingFlow);
 
+        // get the updated flow; we need to use "with counts" here so we can return this as a part of the response
+        final FlowEntity updatedFlow = metadataService.getFlowByIdWithSnapshotCounts(snapshotMetadata.getFlowIdentifier());
+        if (updatedFlow == null) {
+            throw new ResourceNotFoundException("Versioned flow does not exist for identifier " + snapshotMetadata.getFlowIdentifier());
+        }
+        final VersionedFlow updatedVersionedFlow = DataModelMapper.map(existingBucket, updatedFlow);
+
--- End diff --

For the above block, I considered just modifying the in-scope versionedFlow 
object before returning, but thought it would be better (though more expensive) 
to just re-retrieve the flow, in case we add other fields to flow in the future 
that would be modified upon saving a new flow version.


---


[jira] [Commented] (NIFIREG-135) BucketFlowsApi.create_flow_version returns an incorrect version count

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349269#comment-16349269
 ] 

ASF GitHub Bot commented on NIFIREG-135:


GitHub user kevdoran opened a pull request:

https://github.com/apache/nifi-registry/pull/98

NIFIREG-135 Fix versionCount for createFlowVersion result



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevdoran/nifi-registry NIFIREG-135

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/98.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #98


commit 9a399d4c36f641337e2002700e970b9fc136cb45
Author: Kevin Doran 
Date:   2018-02-01T21:00:20Z

NIFIREG-135 Fix versionCount for createFlowVersion result




> BucketFlowsApi.create_flow_version returns an incorrect version count
> -
>
> Key: NIFIREG-135
> URL: https://issues.apache.org/jira/browse/NIFIREG-135
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
> Fix For: 0.2.0
>
>
> Using the Python version of the Swagger defined client at:
> [https://github.com/Chaffelson/nipyapi/blob/master/nipyapi/registry/apis/bucket_flows_api.py]
> The VersionedFlow sub-object in the VersionedFlowSnapshot returned by the 
> create_flow_version call does not correctly increment the version_count 
> variable.
> If the Flow is subsequently retrieved using get_flow_version, the 
> version_count is correct.





[jira] [Updated] (NIFIREG-135) BucketFlowsApi.create_flow_version returns an incorrect version count

2018-02-01 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFIREG-135:

Fix Version/s: 0.2.0

> BucketFlowsApi.create_flow_version returns an incorrect version count
> -
>
> Key: NIFIREG-135
> URL: https://issues.apache.org/jira/browse/NIFIREG-135
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
> Fix For: 0.2.0
>
>
> Using the Python version of the Swagger defined client at:
> [https://github.com/Chaffelson/nipyapi/blob/master/nipyapi/registry/apis/bucket_flows_api.py]
> The VersionedFlow sub-object in the VersionedFlowSnapshot returned by the 
> create_flow_version call does not correctly increment the version_count 
> variable.
> If the Flow is subsequently retrieved using get_flow_version, the 
> version_count is correct.





[GitHub] nifi-registry pull request #98: NIFIREG-135 Fix versionCount for createFlowV...

2018-02-01 Thread kevdoran
GitHub user kevdoran opened a pull request:

https://github.com/apache/nifi-registry/pull/98

NIFIREG-135 Fix versionCount for createFlowVersion result



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevdoran/nifi-registry NIFIREG-135

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/98.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #98


commit 9a399d4c36f641337e2002700e970b9fc136cb45
Author: Kevin Doran 
Date:   2018-02-01T21:00:20Z

NIFIREG-135 Fix versionCount for createFlowVersion result




---


[jira] [Updated] (NIFIREG-135) BucketFlowsApi.create_flow_version returns an incorrect version count

2018-02-01 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFIREG-135:

Affects Version/s: (was: 0.2.0)

> BucketFlowsApi.create_flow_version returns an incorrect version count
> -
>
> Key: NIFIREG-135
> URL: https://issues.apache.org/jira/browse/NIFIREG-135
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
>
> Using the Python version of the Swagger defined client at:
> [https://github.com/Chaffelson/nipyapi/blob/master/nipyapi/registry/apis/bucket_flows_api.py]
> The VersionedFlow sub-object in the VersionedFlowSnapshot returned by the 
> create_flow_version call does not correctly increment the version_count 
> variable.
> If the Flow is subsequently retrieved using get_flow_version, the 
> version_count is correct.





[jira] [Commented] (NIFIREG-135) BucketFlowsApi.create_flow_version returns an incorrect version count

2018-02-01 Thread Daniel Chaffelson (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349209#comment-16349209
 ] 

Daniel Chaffelson commented on NIFIREG-135:
---

I was hoping it'd be in the swagger spec so we can quickly fix it, but as it's 
in the backend I'll mark the test as pending NIFIREG-135 and move on.
Thanks [~kdoran]

> BucketFlowsApi.create_flow_version returns an incorrect version count
> -
>
> Key: NIFIREG-135
> URL: https://issues.apache.org/jira/browse/NIFIREG-135
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0, 0.2.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
>
> Using the Python version of the Swagger defined client at:
> [https://github.com/Chaffelson/nipyapi/blob/master/nipyapi/registry/apis/bucket_flows_api.py]
> The VersionedFlow sub-object in the VersionedFlowSnapshot returned by the 
> create_flow_version call does not correctly increment the version_count 
> variable.
> If the Flow is subsequently retrieved using get_flow_version, the 
> version_count is correct.





[jira] [Commented] (NIFIREG-135) BucketFlowsApi.create_flow_version returns an incorrect version count

2018-02-01 Thread Kevin Doran (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349205#comment-16349205
 ] 

Kevin Doran commented on NIFIREG-135:
-

I reproduced this with the REST API. When calling 
{{BucketFlowResource.createFlowVersion(...)}}, the returned 
VersionedFlowSnapshot.flow.versionCount is not updated to reflect the new flow 
version that was just created. 

The root cause is in {{RegistryService.createFlowSnapshot(...)}}: the 
existing flow is retrieved before the snapshot is persisted, so the returned 
count is stale.
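The stale read can be simulated with a toy in-memory store. This is a hedged sketch of the bug pattern only, not the RegistryService code; the map-based `versionCounts` bookkeeping and method names below are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Toy reproduction of the bug pattern: read the flow's version count
// before persisting the new snapshot, and the value returned to the
// caller is stale. Re-reading after the write yields the right count.
public class StaleCountSketch {
    static final Map<String, Integer> versionCounts = new HashMap<>();

    static int createFlowVersion(String flowId, boolean refetchAfterWrite) {
        int countBeforeWrite = versionCounts.getOrDefault(flowId, 0);
        // persist the new snapshot (increments the stored count)
        versionCounts.merge(flowId, 1, Integer::sum);
        // buggy path returns the value read before the write
        return refetchAfterWrite ? versionCounts.get(flowId) : countBeforeWrite;
    }

    public static void main(String[] args) {
        System.out.println(createFlowVersion("flow-1", false)); // 0 -- stale
        versionCounts.clear();
        System.out.println(createFlowVersion("flow-1", true));  // 1 -- correct
    }
}
```

The fix in the PR follows the second path: after persisting, re-fetch the flow "with counts" and map that into the response.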

> BucketFlowsApi.create_flow_version returns an incorrect version count
> -
>
> Key: NIFIREG-135
> URL: https://issues.apache.org/jira/browse/NIFIREG-135
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0, 0.2.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
>
> Using the Python version of the Swagger defined client at:
> [https://github.com/Chaffelson/nipyapi/blob/master/nipyapi/registry/apis/bucket_flows_api.py]
> The VersionedFlow sub-object in the VersionedFlowSnapshot returned by the 
> create_flow_version call does not correctly increment the version_count 
> variable.
> If the Flow is subsequently retrieved using get_flow_version, the 
> version_count is correct.





[jira] [Assigned] (NIFIREG-135) BucketFlowsApi.create_flow_version returns an incorrect version count

2018-02-01 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran reassigned NIFIREG-135:
---

Assignee: Kevin Doran

> BucketFlowsApi.create_flow_version returns an incorrect version count
> -
>
> Key: NIFIREG-135
> URL: https://issues.apache.org/jira/browse/NIFIREG-135
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0, 0.2.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
>
> Using the Python version of the Swagger defined client at:
> [https://github.com/Chaffelson/nipyapi/blob/master/nipyapi/registry/apis/bucket_flows_api.py]
> The VersionedFlow sub-object in the VersionedFlowSnapshot returned by the 
> create_flow_version call does not correctly increment the version_count 
> variable.
> If the Flow is subsequently retrieved using get_flow_version, the 
> version_count is correct.





[jira] [Created] (NIFIREG-135) BucketFlowsApi.create_flow_version returns an incorrect version count

2018-02-01 Thread Daniel Chaffelson (JIRA)
Daniel Chaffelson created NIFIREG-135:
-

 Summary: BucketFlowsApi.create_flow_version returns an incorrect 
version count
 Key: NIFIREG-135
 URL: https://issues.apache.org/jira/browse/NIFIREG-135
 Project: NiFi Registry
  Issue Type: Bug
Affects Versions: 0.1.0, 0.2.0
Reporter: Daniel Chaffelson


Using the Python version of the Swagger defined client at:
[https://github.com/Chaffelson/nipyapi/blob/master/nipyapi/registry/apis/bucket_flows_api.py]

The VersionedFlow sub-object in the VersionedFlowSnapshot returned by the 
create_flow_version call does not correctly increment the version_count 
variable.

If the Flow is subsequently retrieved using get_flow_version, the version_count 
is correct.





[jira] [Commented] (MINIFICPP-388) DockerBuild.sh missing controller/ and LibExample/ dirs

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349125#comment-16349125
 ] 

ASF GitHub Bot commented on MINIFICPP-388:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/253


> DockerBuild.sh missing controller/ and LibExample/ dirs
> ---
>
> Key: MINIFICPP-388
> URL: https://issues.apache.org/jira/browse/MINIFICPP-388
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> The controller/ and LibExample/ dirs are not copied to docker/minificppsource 
> during make docker, leading to a swift build failure.





[jira] [Commented] (MINIFICPP-391) MiNiFi/NiFi versions are incorrect in DockerVerify & int test code

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349128#comment-16349128
 ] 

ASF GitHub Bot commented on MINIFICPP-391:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/256


> MiNiFi/NiFi versions are incorrect in DockerVerify & int test code
> --
>
> Key: MINIFICPP-391
> URL: https://issues.apache.org/jira/browse/MINIFICPP-391
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> MiNiFi/NiFi versions are incorrect in DockerVerify & int test code causing 
> test failures.





[jira] [Commented] (MINIFICPP-387) Allow TF path to be manually specified via env var

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349124#comment-16349124
 ] 

ASF GitHub Bot commented on MINIFICPP-387:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/252


> Allow TF path to be manually specified via env var
> --
>
> Key: MINIFICPP-387
> URL: https://issues.apache.org/jira/browse/MINIFICPP-387
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Minor
>
> If TensorFlow is installed in a custom or unusual location, we should allow 
> the user to specify the path via environment variable as part of the CMake 
> build.





[jira] [Commented] (MINIFICPP-390) EL generated sources from host are copied to docker build

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349127#comment-16349127
 ] 

ASF GitHub Bot commented on MINIFICPP-390:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/255


> EL generated sources from host are copied to docker build
> -
>
> Key: MINIFICPP-390
> URL: https://issues.apache.org/jira/browse/MINIFICPP-390
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> EL source files generated for the host environment are copied to the docker 
> environment, causing build failure. These files should instead be generated 
> as part of the docker build process.





[GitHub] nifi-minifi-cpp pull request #256: MINIFICPP-391 Corrected MiNiFi/NiFi versi...

2018-02-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/256


---


[jira] [Commented] (MINIFICPP-389) FlexLexer.h missing, causing broken Alpine docker build

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349126#comment-16349126
 ] 

ASF GitHub Bot commented on MINIFICPP-389:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/254


> FlexLexer.h missing, causing broken Alpine docker build
> ---
>
> Key: MINIFICPP-389
> URL: https://issues.apache.org/jira/browse/MINIFICPP-389
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> Needs flex-dev in the docker build deps.





[GitHub] nifi-minifi-cpp pull request #255: MINIFICPP-390 Remove files generated from...

2018-02-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/255


---


[GitHub] nifi-minifi-cpp pull request #252: MINIFICPP-387 Allow TF path to be specifi...

2018-02-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/252


---


[GitHub] nifi-minifi-cpp pull request #253: MINIFICPP-388 Added missing controller/ a...

2018-02-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/253


---


[GitHub] nifi-minifi-cpp pull request #254: MINIFICPP-389 Added missing flex-dev depe...

2018-02-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/254


---


[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349091#comment-16349091
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165449601
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,564 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.hbase.io.JsonFullRowSerializer;
+import org.apache.nifi.hbase.io.JsonQualifierAndValueRowSerializer;
+import org.apache.nifi.hbase.io.RowSerializer;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.Tuple;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hbase", "scan", "fetch", "get"})
+@CapabilityDescription("Scans and fetches rows from an HBase table. This processor may be used to fetch rows from an HBase table by specifying a range of rowkey values (start and/or end), "
++ "by time range, by filter expression, or any combination of them. \n"
++ "The order of records can be controlled by the Reversed property. "
++ "The number of rows retrieved by the processor can be limited.")
+@WritesAttributes({
+@WritesAttribute(attribute = "hbase.table", description = "The name of the HBase table that the row was fetched from"),
+@WritesAttribute(attribute = "hbase.resultset", description = "One or more JSON documents representing the row(s). This attribute is only written when a Destination of flowfile-attributes is selected."),
+@WritesAttribute(attribute = "mime.type", description = "Set to application/json when using a Destination of flowfile-content; not set or modified otherwise"),
+@WritesAttribute(attribute = "hbase.rows.count", description = "Number of rows in the content of the given flow file"),
+@WritesAttribute(attribute = "scanhbase.results.found", description = "Indicates whether at least one row has been found in the given HBase table with the provided conditions. Could be null (not present) if transferred to FAILURE")
+})
+public class ScanHBase extends 

[jira] [Commented] (MINIFICPP-391) MiNiFi/NiFi versions are incorrect in DockerVerify & int test code

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349107#comment-16349107
 ] 

ASF GitHub Bot commented on MINIFICPP-391:
--

Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/256
  
Merging and verifying all docker fixes now. 


> MiNiFi/NiFi versions are incorrect in DockerVerify & int test code
> --
>
> Key: MINIFICPP-391
> URL: https://issues.apache.org/jira/browse/MINIFICPP-391
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> MiNiFi/NiFi versions are incorrect in DockerVerify & int test code causing 
> test failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp issue #256: MINIFICPP-391 Corrected MiNiFi/NiFi versions for...

2018-02-01 Thread phrocker
Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/256
  
Merging and verifying all docker fixes now. 


---


[jira] [Updated] (MINIFICPP-392) DockerBuild should copy dirs based on a black list vs a white list to avoid missing copied directories

2018-02-01 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo updated MINIFICPP-392:
-
Priority: Minor  (was: Major)

> DockerBuild should copy dirs based on a black list vs a white list to avoid 
> missing copied directories
> --
>
> Key: MINIFICPP-392
> URL: https://issues.apache.org/jira/browse/MINIFICPP-392
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: marco polo
>Assignee: Andrew Christianson
>Priority: Minor
> Fix For: 0.5.0
>
>
> See Convo from [https://github.com/apache/nifi-minifi-cpp/pull/253] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-392) DockerBuild should copy dirs based on a black list vs a white list to avoid missing copied directories

2018-02-01 Thread marco polo (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349097#comment-16349097
 ] 

marco polo commented on MINIFICPP-392:
--

We can move the version if there isn't time for 0.5.0 since the priority is 
pretty low on this one. 

> DockerBuild should copy dirs based on a black list vs a white list to avoid 
> missing copied directories
> --
>
> Key: MINIFICPP-392
> URL: https://issues.apache.org/jira/browse/MINIFICPP-392
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: marco polo
>Assignee: Andrew Christianson
>Priority: Minor
> Fix For: 0.5.0
>
>
> See Convo from [https://github.com/apache/nifi-minifi-cpp/pull/253] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-392) DockerBuild should copy dirs based on a black list vs a white list to avoid missing copied directories

2018-02-01 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo updated MINIFICPP-392:
-
Fix Version/s: 0.5.0

> DockerBuild should copy dirs based on a black list vs a white list to avoid 
> missing copied directories
> --
>
> Key: MINIFICPP-392
> URL: https://issues.apache.org/jira/browse/MINIFICPP-392
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: marco polo
>Assignee: Andrew Christianson
>Priority: Major
> Fix For: 0.5.0
>
>
> See Convo from [https://github.com/apache/nifi-minifi-cpp/pull/253] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2446: NIFI-4833 Add ScanHBase processor

2018-02-01 Thread bdesert
Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165449601
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,564 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.hbase.io.JsonFullRowSerializer;
+import org.apache.nifi.hbase.io.JsonQualifierAndValueRowSerializer;
+import org.apache.nifi.hbase.io.RowSerializer;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.Tuple;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hbase", "scan", "fetch", "get"})
+@CapabilityDescription("Scans and fetches rows from an HBase table. This processor may be used to fetch rows from an HBase table by specifying a range of rowkey values (start and/or end), "
++ "by time range, by filter expression, or any combination of them. \n"
++ "The order of records can be controlled by the Reversed property. "
++ "The number of rows retrieved by the processor can be limited.")
+@WritesAttributes({
+@WritesAttribute(attribute = "hbase.table", description = "The name of the HBase table that the row was fetched from"),
+@WritesAttribute(attribute = "hbase.resultset", description = "One or more JSON documents representing the row(s). This attribute is only written when a Destination of flowfile-attributes is selected."),
+@WritesAttribute(attribute = "mime.type", description = "Set to application/json when using a Destination of flowfile-content; not set or modified otherwise"),
+@WritesAttribute(attribute = "hbase.rows.count", description = "Number of rows in the content of the given flow file"),
+@WritesAttribute(attribute = "scanhbase.results.found", description = "Indicates whether at least one row has been found in the given HBase table with the provided conditions. Could be null (not present) if transferred to FAILURE")
+})
+public class ScanHBase extends AbstractProcessor {
+   //enhanced regex for columns to allow "-" in column qualifier names
+static final Pattern COLUMNS_PATTERN = Pattern.compile("\\w+(:(\\w|-)+)?(?:,\\w+(:(\\w|-)+)?)*");
+static final byte[] nl = 
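The COLUMNS_PATTERN regex quoted in this diff (commented as allowing "-" in column qualifier names) can be checked in isolation. A minimal standalone sketch follows; the class name and the sample column specs are illustrative, not taken from the PR:

```java
import java.util.regex.Pattern;

// Minimal harness for the COLUMNS_PATTERN regex from the ScanHBase diff above:
// comma-separated "family" or "family:qualifier" specs, with "-" permitted in qualifiers.
public class ColumnsPatternDemo {
    static final Pattern COLUMNS_PATTERN =
            Pattern.compile("\\w+(:(\\w|-)+)?(?:,\\w+(:(\\w|-)+)?)*");

    public static void main(String[] args) {
        // Family plus a qualifier containing "-", followed by a bare family.
        System.out.println(COLUMNS_PATTERN.matcher("cf1:user-name,cf2").matches()); // true
        // A trailing colon with no qualifier does not match.
        System.out.println(COLUMNS_PATTERN.matcher("cf1:").matches());              // false
    }
}
```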

[GitHub] nifi pull request #2446: NIFI-4833 Add ScanHBase processor

2018-02-01 Thread bdesert
Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165449486
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---

[jira] [Created] (MINIFICPP-392) DockerBuild should copy dirs based on a black list vs a white list to avoid missing copied directories

2018-02-01 Thread marco polo (JIRA)
marco polo created MINIFICPP-392:


 Summary: DockerBuild should copy dirs based on a black list vs a 
white list to avoid missing copied directories
 Key: MINIFICPP-392
 URL: https://issues.apache.org/jira/browse/MINIFICPP-392
 Project: NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: marco polo


See Convo from [https://github.com/apache/nifi-minifi-cpp/pull/253] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349079#comment-16349079
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165448494
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---

[GitHub] nifi pull request #2446: NIFI-4833 Add ScanHBase processor

2018-02-01 Thread bdesert
Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165448494
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---

[GitHub] nifi pull request #2446: NIFI-4833 Add ScanHBase processor

2018-02-01 Thread bdesert
Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165447440
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---

[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349069#comment-16349069
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165447440
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,564 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.hbase.io.JsonFullRowSerializer;
+import org.apache.nifi.hbase.io.JsonQualifierAndValueRowSerializer;
+import org.apache.nifi.hbase.io.RowSerializer;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.Tuple;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hbase", "scan", "fetch", "get"})
+@CapabilityDescription("Scans and fetches rows from an HBase table. This processor may be used to fetch rows from an HBase table "
++ "by specifying a range of rowkey values (start and/or end), by time range, by filter expression, or any combination of them. \n"
++ "Order of records can be controlled by the Reversed property. "
++ "The number of rows retrieved by the processor can be limited.")
+@WritesAttributes({
+@WritesAttribute(attribute = "hbase.table", description = "The name of the HBase table that the row was fetched from"),
+@WritesAttribute(attribute = "hbase.resultset", description = "A JSON document(s) representing the row(s). This attribute is only written when a Destination of flowfile-attributes is selected."),
+@WritesAttribute(attribute = "mime.type", description = "Set to application/json when using a Destination of flowfile-content, not set or modified otherwise"),
+@WritesAttribute(attribute = "hbase.rows.count", description = "Number of rows in the content of the given flow file"),
+@WritesAttribute(attribute = "scanhbase.results.found", description = "Indicates whether at least one row has been found in the given HBase table with the provided conditions. Could be null (not present) if transferred to FAILURE")
+})
+public class ScanHBase extends 

[jira] [Commented] (NIFI-4410) PutElasticsearchHttp needs better error handling and logging

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348997#comment-16348997
 ] 

ASF GitHub Bot commented on NIFI-4410:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2175
  
So yeah, +1 on merge.


> PutElasticsearchHttp needs better error handling and logging
> 
>
> Key: NIFI-4410
> URL: https://issues.apache.org/jira/browse/NIFI-4410
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Matt Burgess
>Priority: Major
>
> https://github.com/apache/nifi/blob/6b5015e39b4233cf230151fb45bebcb21df03730/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttp.java#L364-L366
> If it cannot extract the reason text it provides a very generic error and 
> there is nothing else logged.  You get no context as to what went wrong and 
> further the condition doesn't cause yielding or anything so there is just a 
> massive flood of errors in logs that don't advise the user of the problem.
> We need to make sure the information can be made available to help 
> troubleshoot and we need to cause yielding so that such cases do not cause 
> continuous floods of errors.
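The requested behavior can be sketched roughly as follows. This is an illustrative Python sketch of the error-handling pattern, not the NiFi Java implementation; all names (handle_response, extract_reason, yield_processor) are hypothetical:

```python
# Sketch: surface the raw response body when the reason text cannot be
# parsed, and yield so repeated failures do not flood the logs.
def handle_response(status, body, extract_reason, log_error, yield_processor):
    if 200 <= status < 300:
        return True
    reason = extract_reason(body)
    if reason:
        log_error(f"Elasticsearch returned {status}: {reason}")
    else:
        # Include the raw body so the operator has context instead of a
        # generic message with nothing else logged.
        log_error(f"Elasticsearch returned {status}; raw response: {body!r}")
    yield_processor()  # back off instead of retrying in a tight loop
    return False
```

The key point is the pairing: a failure both logs actionable context and triggers a yield, so the processor stops hammering the endpoint.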



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2175: NIFI-4410: Improved error handling/logging in PutElasticse...

2018-02-01 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2175
  
So yeah, +1 on merge.


---


[GitHub] nifi issue #2175: NIFI-4410: Improved error handling/logging in PutElasticse...

2018-02-01 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2175
  
Ok. I just went back and tried out PutElasticSearchHttp (don't have time to 
retry the record one right now), and the behavior was as expected. It treats 
any delete that doesn't result in a server-side exception as a success. 


---


[jira] [Updated] (NIFIREG-134) Enable Spring Boot Actuator REST API endpoints

2018-02-01 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFIREG-134:

Description: 
Spring Boot comes with an optional module known as Actuator which enables 
remote administration, management, and monitoring of the application through 
the REST API:
https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#production-ready-endpoints

This task is to enable the Actuator module and expose it in such a way that 
access is gated by the standard NiFi Registry Authorization framework.

  was:
Spring Boot comes with an optional module known as Actuator which enables 
remote administration, management, and monitoring of the application through 
the REST API.

This task is to enable the Actuator module and expose it in such a way that 
access is gated by the standard NiFi Registry Authorization framework.


> Enable Spring Boot Actuator REST API endpoints
> --
>
> Key: NIFIREG-134
> URL: https://issues.apache.org/jira/browse/NIFIREG-134
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Minor
> Fix For: 0.2.0
>
>
> Spring Boot comes with an optional module known as Actuator which enables 
> remote administration, management, and monitoring of the application through 
> the REST API:
> https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#production-ready-endpoints
> This task is to enable the Actuator module and expose it in such a way that 
> access is gated by the standard NiFi Registry Authorization framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-134) Enable Spring Boot Actuator REST API endpoints

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348916#comment-16348916
 ] 

ASF GitHub Bot commented on NIFIREG-134:


GitHub user kevdoran opened a pull request:

https://github.com/apache/nifi-registry/pull/97

NIFIREG-134 Enable SpringBoot Actuator endpoints

Contains the following changes:
- Configures Jersey as a filter (previously was a servlet) that
  forwards requests to /actuator/* so they can be handled by Actuator
- Adds a ResourceAuthorizationFilter that performs authorization in
  the filter chain, and configures it to gate access to /actuator/*
- Adds test cases for ResourceAuthorizationFilter

For reviewers:
- Verify /nifi-registry-api/actuator is accessible in unsecured mode. It 
will return a listing of other endpoints (/nifi-registry-api/actuator/*) that 
you can also try.
- In secured mode, an initial admin should have access, as should any user 
who is granted access to the "/actuator" resource

Future enhancements (will create JIRAs if this is merged):
- Add Actuator usage instructions to Admin guide
- Add health checks that roll up into /actuator/health
- Add ActuatorClient to NiFiRegistryClient
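The authorization gating described in this pull request can be sketched conceptually as below. This is a minimal Python illustration of a resource-authorization filter guarding /actuator/* paths, not the actual Jersey/Spring filter; the is_authorized callback is hypothetical:

```python
# Sketch: a filter that authorizes requests to /actuator/* against a
# resource policy before forwarding them down the chain.
def resource_authorization_filter(path, user, is_authorized, forward):
    if path.startswith("/actuator"):
        # Gate actuator endpoints behind the "/actuator" resource,
        # mirroring how other registry resources are protected.
        if not is_authorized(user, "/actuator"):
            return 403
    return forward(path)
```

Requests outside /actuator/* pass through untouched; actuator requests succeed only for users granted the "/actuator" resource.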


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevdoran/nifi-registry NIFIREG-134

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/97.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #97


commit 44c19851f9bc5eb21ade2445422a677ca4d208ef
Author: Kevin Doran 
Date:   2018-01-25T14:14:53Z

NIFIREG-134 Enable SpringBoot Actuator endpoints

- Configures Jersey as a filter (previously was a servlet) that
  forwards requests to /actuator/* so they can be handled by Actuator
- Adds a ResourceAuthorizationFilter that performs authorization in
  the filter chain, and configures it to gate access to /actuator/*
- Adds test cases for ResourceAuthorizationFilter




> Enable Spring Boot Actuator REST API endpoints
> --
>
> Key: NIFIREG-134
> URL: https://issues.apache.org/jira/browse/NIFIREG-134
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Minor
> Fix For: 0.2.0
>
>
> Spring Boot comes with an optional module known as Actuator which enables 
> remote administration, management, and monitoring of the application through 
> the REST API.
> This task is to enable the Actuator module and expose it in such a way that 
> access is gated by the standard NiFi Registry Authorization framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #97: NIFIREG-134 Enable SpringBoot Actuator endpo...

2018-02-01 Thread kevdoran
GitHub user kevdoran opened a pull request:

https://github.com/apache/nifi-registry/pull/97

NIFIREG-134 Enable SpringBoot Actuator endpoints

Contains the following changes:
- Configures Jersey as a filter (previously was a servlet) that
  forwards requests to /actuator/* so they can be handled by Actuator
- Adds a ResourceAuthorizationFilter that performs authorization in
  the filter chain, and configures it to gate access to /actuator/*
- Adds test cases for ResourceAuthorizationFilter

For reviewers:
- Verify /nifi-registry-api/actuator is accessible in unsecured mode. It 
will return a listing of other endpoints (/nifi-registry-api/actuator/*) that 
you can also try.
- In secured mode, an initial admin should have access, as should any user 
who is granted access to the "/actuator" resource

Future enhancements (will create JIRAs if this is merged):
- Add Actuator usage instructions to Admin guide
- Add health checks that roll up into /actuator/health
- Add ActuatorClient to NiFiRegistryClient


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevdoran/nifi-registry NIFIREG-134

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/97.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #97


commit 44c19851f9bc5eb21ade2445422a677ca4d208ef
Author: Kevin Doran 
Date:   2018-01-25T14:14:53Z

NIFIREG-134 Enable SpringBoot Actuator endpoints

- Configures Jersey as a filter (previously was a servlet) that
  forwards requests to /actuator/* so they can be handled by Actuator
- Adds a ResourceAuthorizationFilter that performs authorization in
  the filter chain, and configures it to gate access to /actuator/*
- Adds test cases for ResourceAuthorizationFilter




---


[jira] [Created] (NIFIREG-134) Enable Spring Boot Actuator REST API endpoints

2018-02-01 Thread Kevin Doran (JIRA)
Kevin Doran created NIFIREG-134:
---

 Summary: Enable Spring Boot Actuator REST API endpoints
 Key: NIFIREG-134
 URL: https://issues.apache.org/jira/browse/NIFIREG-134
 Project: NiFi Registry
  Issue Type: New Feature
Reporter: Kevin Doran
Assignee: Kevin Doran
 Fix For: 0.2.0


Spring Boot comes with an optional module known as Actuator which enables 
remote administration, management, and monitoring of the application through 
the REST API.

This task is to enable the Actuator module and expose it in such a way that 
access is gated by the standard NiFi Registry Authorization framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4836) Allow QueryDatabaseTables to send out batches of flow files while result set is being processed

2018-02-01 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4836:
---
Status: Patch Available  (was: In Progress)

> Allow QueryDatabaseTables to send out batches of flow files while result set 
> is being processed
> ---
>
> Key: NIFI-4836
> URL: https://issues.apache.org/jira/browse/NIFI-4836
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently QueryDatabaseTable (QDT) will not transfer the outgoing flowfiles 
> to the downstream relationship(s) until the entire result set has been 
> processed (regardless of whether Max Rows Per Flow File is set). This is so 
> the maxvalue.* and fragment.count attributes can be set correctly for each 
> flow file.
> However for very large result sets, the initial fetch can take a long time, 
> and depending on the setting of Max Rows Per FlowFile, there could be a great 
> number of FlowFiles transferred downstream as a large burst at the end of QDT 
> execution.
> It would be nice for the user to be able to choose to have FlowFiles be 
> transferred downstream while the result set is still being processed. This 
> alleviates the "large burst at the end" by replacing it with smaller output 
> batches during processing. The tradeoff will be that if an Output Batch Size 
> is set, then the maxvalue.* and fragment.count attributes will not be set on 
> the outgoing flow files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4836) Allow QueryDatabaseTables to send out batches of flow files while result set is being processed

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348725#comment-16348725
 ] 

ASF GitHub Bot commented on NIFI-4836:
--

GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/2447

NIFI-4836: Allow output of FlowFiles during result set processing in 
QueryDatabaseTable

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [x] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-4836

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2447.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2447


commit f4d3e60b5cb0183b6e077680e26ce69e5be584ad
Author: Matthew Burgess 
Date:   2018-02-01T15:08:09Z

NIFI-4836: Allow output of FlowFiles during result set processing in 
QueryDatabaseTable




> Allow QueryDatabaseTables to send out batches of flow files while result set 
> is being processed
> ---
>
> Key: NIFI-4836
> URL: https://issues.apache.org/jira/browse/NIFI-4836
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently QueryDatabaseTable (QDT) will not transfer the outgoing flowfiles 
> to the downstream relationship(s) until the entire result set has been 
> processed (regardless of whether Max Rows Per Flow File is set). This is so 
> the maxvalue.* and fragment.count attributes can be set correctly for each 
> flow file.
> However for very large result sets, the initial fetch can take a long time, 
> and depending on the setting of Max Rows Per FlowFile, there could be a great 
> number of FlowFiles transferred downstream as a large burst at the end of QDT 
> execution.
> It would be nice for the user to be able to choose to have FlowFiles be 
> transferred downstream while the result set is still being processed. This 
> alleviates the "large burst at the end" by replacing it with smaller output 
> batches during processing. The tradeoff will be that if an Output Batch Size 
> is set, then the maxvalue.* and fragment.count attributes will not be set on 
> the outgoing flow files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2447: NIFI-4836: Allow output of FlowFiles during result ...

2018-02-01 Thread mattyb149
GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/2447

NIFI-4836: Allow output of FlowFiles during result set processing in 
QueryDatabaseTable

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [x] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-4836

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2447.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2447


commit f4d3e60b5cb0183b6e077680e26ce69e5be584ad
Author: Matthew Burgess 
Date:   2018-02-01T15:08:09Z

NIFI-4836: Allow output of FlowFiles during result set processing in 
QueryDatabaseTable




---


[jira] [Assigned] (NIFI-3502) Upgrade D3 to the latest 4.x version

2018-02-01 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reassigned NIFI-3502:
-

Assignee: Matt Gilman

> Upgrade D3 to the latest 4.x version
> 
>
> Key: NIFI-3502
> URL: https://issues.apache.org/jira/browse/NIFI-3502
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Scott Aslan
>Assignee: Matt Gilman
>Priority: Major
>
> The NIFI canvas web application is using version 3.x of the D3 library which 
> is a major version behind and is due to be upgraded. This will be a bit of an 
> effort as the APIs have changed in 4.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-4621) Allow inputs to ListSFTP

2018-02-01 Thread Soumya Shanta Ghosh (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348705#comment-16348705
 ] 

Soumya Shanta Ghosh edited comment on NIFI-4621 at 2/1/18 3:12 PM:
---

Hi [~puspendu.baner...@gmail.com]

The description itself is my use case.
This is a real scenario where we have to list a directory and subsequently 
fetch the files in that directory using SFTP with the help of NiFi.
Now this directory is on an edge node in the company internal network and not 
external network.

I have written a custom processor extending the ListSFTP and deployed it as a 
nar file.
This custom processor accepts inputs to it, transfers the attributes (which 
contain the username/password/keypassphrase etc.) to the out going flow file.
Please let me know your views on this.
Also, let me know if there is any alternate solution to this case.

Thanks and Regards,
Soumya Shanta Ghosh


was (Author: soumya.ghosh):
Hi [~puspendu.baner...@gmail.com]

The description itself is my use case.
This is a real scenario where we have to list a directory and subsequently 
fetch the files in that directory using SFTP with the help of NiFi.
Now this directory is on an edge node in the company internal network and not 
external network.

I have written a custom processor extending the ListSFTP and deployed it as a 
nar file.
This custom processor accepts inputs to it, transfers the attributes (which 
contain the username/password/keypassphrase etc.) to the out going flow file.
Please let me know your views on this.

Thanks and Regards,
Soumya Shanta Ghosh

> Allow inputs to ListSFTP
> 
>
> Key: NIFI-4621
> URL: https://issues.apache.org/jira/browse/NIFI-4621
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Soumya Shanta Ghosh
>Assignee: Puspendu Banerjee
>Priority: Critical
>
> ListSFTP supports listing of the supplied directory (Remote Path) 
> out-of-the-box on the supplied "Hostname" using the "Username" and "Password" 
> / "Private Key Passphrase". 
> The password can change at a regular interval (depending on organization 
> policy) or the Hostname or the Remote Path can change based on some other 
> requirement.
> This is a case to allow ListSFTP to leverage the use of NiFi Expression 
> Language so that the values of Hostname, Password and/or Remote Path can be 
> set based on the attributes of an incoming flow file.
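The requested substitution can be illustrated as below. This is a hypothetical Python sketch of resolving connection properties from incoming flow-file attributes (an expression-language-like substitution), not NiFi's actual expression-language engine:

```python
import re

def resolve(template, attributes):
    # Replace ${name} references with the matching flow-file attribute,
    # leaving unknown references empty, similar in spirit to how NiFi
    # evaluates expression language against an incoming flow file.
    return re.sub(r"\$\{([^}]+)\}",
                  lambda m: attributes.get(m.group(1), ""),
                  template)
```

With this in place, a processor could evaluate Hostname, Password, or Remote Path per incoming flow file instead of relying on static configuration.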



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4080) ValidateCSV - Add support for Expression Language

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348709#comment-16348709
 ] 

ASF GitHub Bot commented on NIFI-4080:
--

Github user mgaido91 commented on the issue:

https://github.com/apache/nifi/pull/2226
  
@mattyb149 there is a test failure because I just cherry-picked my changes 
on top of your PR, so when I run the tests, I didn't run the test you added 
here (sorry, my bad). This test is no longer valid, since the schema can only be 
validated once the flow files arrive. Could you please remove 
`testValidateWithEL`? Thanks.


> ValidateCSV - Add support for Expression Language 
> --
>
> Key: NIFI-4080
> URL: https://issues.apache.org/jira/browse/NIFI-4080
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> The ValidateCSV processor could benefit if the following fields supported 
> Expression Language evaluation:
> - Schema
> - Quote character
> - Delimiter character
> - End of line symbols



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2226: NIFI-4080: Added EL support to fields in ValidateCSV

2018-02-01 Thread mgaido91
Github user mgaido91 commented on the issue:

https://github.com/apache/nifi/pull/2226
  
@mattyb149 there is a test failure because I just cherry-picked my changes 
on top of your PR, so when I run the tests, I didn't run the test you added 
here (sorry, my bad). This test is no longer valid, since the schema can only be 
validated once the flow files arrive. Could you please remove 
`testValidateWithEL`? Thanks.


---


[jira] [Commented] (NIFI-4410) PutElasticsearchHttp needs better error handling and logging

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348706#comment-16348706
 ] 

ASF GitHub Bot commented on NIFI-4410:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2175
  
This is my PR. If you give it a +1 we can have a committer merge it, thanks!


> PutElasticsearchHttp needs better error handling and logging
> 
>
> Key: NIFI-4410
> URL: https://issues.apache.org/jira/browse/NIFI-4410
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Matt Burgess
>Priority: Major
>
> https://github.com/apache/nifi/blob/6b5015e39b4233cf230151fb45bebcb21df03730/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttp.java#L364-L366
> If it cannot extract the reason text it provides a very generic error and 
> there is nothing else logged.  You get no context as to what went wrong and 
> further the condition doesn't cause yielding or anything so there is just a 
> massive flood of errors in logs that don't advise the user of the problem.
> We need to make sure the information can be made available to help 
> troubleshoot and we need to cause yielding so that such cases do not cause 
> continuous floods of errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4621) Allow inputs to ListSFTP

2018-02-01 Thread Soumya Shanta Ghosh (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348705#comment-16348705
 ] 

Soumya Shanta Ghosh commented on NIFI-4621:
---

Hi [~puspendu.baner...@gmail.com]

The description itself is my use case.
This is a real scenario where we have to list a directory and subsequently 
fetch the files in that directory using SFTP with the help of NiFi.
Now this directory is on an edge node in the company internal network and not 
external network.

I have written a custom processor extending the ListSFTP and deployed it as a 
nar file.
This custom processor accepts inputs to it, transfers the attributes (which 
contain the username/password/keypassphrase etc.) to the out going flow file.
Please let me know your views on this.

Thanks and Regards,
Soumya Shanta Ghosh

> Allow inputs to ListSFTP
> 
>
> Key: NIFI-4621
> URL: https://issues.apache.org/jira/browse/NIFI-4621
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Soumya Shanta Ghosh
>Assignee: Puspendu Banerjee
>Priority: Critical
>
> ListSFTP supports listing of the supplied directory (Remote Path) 
> out-of-the-box on the supplied "Hostname" using the "Username" and "Password" 
> / "Private Key Passphrase". 
> The password can change at a regular interval (depending on organization 
> policy) or the Hostname or the Remote Path can change based on some other 
> requirement.
> This is a case to allow ListSFTP to leverage the use of NiFi Expression 
> Language so that the values of Hostname, Password and/or Remote Path can be 
> set based on the attributes of an incoming flow file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2175: NIFI-4410: Improved error handling/logging in PutElasticse...

2018-02-01 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2175
  
This is my PR. If you give it a +1 we can have a committer merge it, thanks!


---


[jira] [Assigned] (NIFI-4836) Allow QueryDatabaseTables to send out batches of flow files while result set is being processed

2018-02-01 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-4836:
--

Assignee: Matt Burgess

> Allow QueryDatabaseTables to send out batches of flow files while result set 
> is being processed
> ---
>
> Key: NIFI-4836
> URL: https://issues.apache.org/jira/browse/NIFI-4836
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently QueryDatabaseTable (QDT) will not transfer the outgoing flowfiles 
> to the downstream relationship(s) until the entire result set has been 
> processed (regardless of whether Max Rows Per Flow File is set). This is so 
> the maxvalue.* and fragment.count attributes can be set correctly for each 
> flow file.
> However for very large result sets, the initial fetch can take a long time, 
> and depending on the setting of Max Rows Per FlowFile, there could be a great 
> number of FlowFiles transferred downstream as a large burst at the end of QDT 
> execution.
> It would be nice for the user to be able to choose to have FlowFiles be 
> transferred downstream while the result set is still being processed. This 
> alleviates the "large burst at the end" by replacing it with smaller output 
> batches during processing. The tradeoff will be that if an Output Batch Size 
> is set, then the maxvalue.* and fragment.count attributes will not be set on 
> the outgoing flow files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-4836) Allow QueryDatabaseTables to send out batches of flow files while result set is being processed

2018-02-01 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-4836:
--

 Summary: Allow QueryDatabaseTables to send out batches of flow 
files while result set is being processed
 Key: NIFI-4836
 URL: https://issues.apache.org/jira/browse/NIFI-4836
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Matt Burgess


Currently QueryDatabaseTable (QDT) will not transfer the outgoing flowfiles to 
the downstream relationship(s) until the entire result set has been processed 
(regardless of whether Max Rows Per Flow File is set). This is so the 
maxvalue.* and fragment.count attributes can be set correctly for each flow 
file.

However for very large result sets, the initial fetch can take a long time, and 
depending on the setting of Max Rows Per FlowFile, there could be a great 
number of FlowFiles transferred downstream as a large burst at the end of QDT 
execution.

It would be nice for the user to be able to choose to have FlowFiles be 
transferred downstream while the result set is still being processed. This 
alleviates the "large burst at the end" by replacing it with smaller output 
batches during processing. The tradeoff will be that if an Output Batch Size is 
set, then the maxvalue.* and fragment.count attributes will not be set on the 
outgoing flow files.
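For context, here is a minimal plain-Java sketch of the proposed Output Batch Size behavior: rows are grouped into "flow files" of Max Rows Per Flow File, and every Output Batch Size flow files are transferred downstream immediately instead of waiting for the entire result set. The names and the list-based harness are illustrative only, not the NiFi API.

```java
import java.util.*;

public class OutputBatchDemo {
    // Groups rows into flow files of maxRowsPerFlowFile and transfers
    // every outputBatchSize flow files as soon as they accumulate,
    // rather than holding everything until the result set is exhausted.
    static List<List<List<Integer>>> process(Iterator<Integer> rows,
                                             int maxRowsPerFlowFile,
                                             int outputBatchSize) {
        List<List<List<Integer>>> transfers = new ArrayList<>();
        List<List<Integer>> pending = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        while (rows.hasNext()) {
            current.add(rows.next());
            if (current.size() == maxRowsPerFlowFile) {
                pending.add(current);
                current = new ArrayList<>();
                if (pending.size() == outputBatchSize) {
                    transfers.add(pending);  // stands in for session.transfer()
                    pending = new ArrayList<>();
                }
            }
        }
        // Flush any partial flow file and partial batch at the end.
        if (!current.isEmpty()) pending.add(current);
        if (!pending.isEmpty()) transfers.add(pending);
        return transfers;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) rows.add(i);
        // 10 rows, 2 rows per flow file, transfer every 2 flow files:
        // flow files are emitted as 2 + 2 + 1 instead of 5 at once.
        List<List<List<Integer>>> t = OutputBatchDemo.process(rows.iterator(), 2, 2);
        System.out.println(t.size()); // prints 3
    }
}
```

This also makes the documented tradeoff visible: because batches leave before the scan finishes, totals such as fragment.count (and final maxvalue.*) cannot be stamped on already-transferred flow files.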





[jira] [Commented] (NIFI-4410) PutElasticsearchHttp needs better error handling and logging

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348696#comment-16348696
 ] 

ASF GitHub Bot commented on NIFI-4410:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2175
  
@mattyb149 Do you want to keep reviewing this or close it out?


> PutElasticsearchHttp needs better error handling and logging
> 
>
> Key: NIFI-4410
> URL: https://issues.apache.org/jira/browse/NIFI-4410
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Matt Burgess
>Priority: Major
>
> https://github.com/apache/nifi/blob/6b5015e39b4233cf230151fb45bebcb21df03730/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttp.java#L364-L366
> If it cannot extract the reason text it provides a very generic error and 
> there is nothing else logged.  You get no context as to what went wrong and 
> further the condition doesn't cause yielding or anything so there is just a 
> massive flood of errors in logs that dont' advise the user of the problem.
> We need to make sure the information can be made available to help 
> troubleshoot and we need to cause yielding so that such cases do not cause 
> continuous floods of errors.





[GitHub] nifi issue #2175: NIFI-4410: Improved error handling/logging in PutElasticse...

2018-02-01 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2175
  
@mattyb149 Do you want to keep reviewing this or close it out?


---


[jira] [Commented] (NIFI-4164) Realistic Time Series Processor Simulator

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348564#comment-16348564
 ] 

ASF GitHub Bot commented on NIFI-4164:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1997#discussion_r165352409
  
--- Diff: 
nifi-nar-bundles/nifi-simulator-bundle/nifi-simulator-processors/src/main/java/com/apache/nifi/processors/simulator/GenerateTimeSeriesFlowFile.java
 ---
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package com.apache.nifi.processors.simulator;
+
+import be.cetic.tsimulus.config.Configuration;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.joda.time.LocalDateTime;
+import scala.Some;
+import scala.Tuple3;
+import scala.collection.JavaConverters;
+
+import java.util.List;
+import java.util.Set;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.ArrayList;
+
+@Tags({"Simulator, Timeseries, IOT, Testing"})
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@CapabilityDescription("Generates realistic time series data using the 
TSimulus time series generator, and places the values into the flowfile in a 
CSV format.")
+public class GenerateTimeSeriesFlowFile extends AbstractProcessor {
+
+private Configuration simConfig = null;
+private boolean isTest = false;
+
+public static final PropertyDescriptor SIMULATOR_CONFIG = new 
PropertyDescriptor
+.Builder().name("SIMULATOR_CONFIG")
+.displayName("Simulator Configuration File")
+.description("The JSON configuration file to use to configure 
TSimulus")
+.required(true)
+.addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
--- End diff --

I think you should consider creating your own validator that does a 
combination of file existence check and testing whether or not you can create a 
config that the API will accept from it.
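A rough sketch of that suggestion, as a dependency-free stand-in (the real NiFi Validator interface and the TSimulus Configuration parser are not used here; the "parse" step is a placeholder check):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ConfigValidator {
    // Two-stage validation: the file must exist, and its contents must
    // parse as a config. The parse step below is a crude placeholder
    // (non-empty JSON object check); in the real processor this would be
    // an attempt to build a TSimulus Configuration from the file.
    // Returns null when valid, otherwise an explanation string.
    static String validate(String path) {
        Path p = Paths.get(path);
        if (!Files.isRegularFile(p)) {
            return "file does not exist";
        }
        try {
            String body = new String(Files.readAllBytes(p)).trim();
            if (!body.startsWith("{") || !body.endsWith("}")) {
                return "not a parseable simulator configuration";
            }
        } catch (IOException e) {
            return "unreadable: " + e.getMessage();
        }
        return null;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("sim", ".json");
        Files.write(tmp, "{\"generators\": []}".getBytes());
        System.out.println(ConfigValidator.validate(tmp.toString()) == null); // prints true
        System.out.println(ConfigValidator.validate("/no/such/file.json")); // prints file does not exist
    }
}
```

In the processor this logic would live behind NiFi's Validator interface so a bad config fails at configuration time rather than in onTrigger.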


> Realistic Time Series Processor Simulator
> -
>
> Key: NIFI-4164
> URL: https://issues.apache.org/jira/browse/NIFI-4164
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Chris Herrera
>Assignee: Chris Herrera
>Priority: Minor
>  Labels: features
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In order to validate several flows that deal with sensor data, it would be 
> good to have a built in time series simulator processor that generates data 
> and can send it out via a flow file.





[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348559#comment-16348559
 ] 

ASF GitHub Bot commented on NIFI-4428:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2181
  
@mattyb149 @joewitt PutDruid exists now. Should this PR be closed?


> Implement PutDruid Processor and Controller
> ---
>
> Key: NIFI-4428
> URL: https://issues.apache.org/jira/browse/NIFI-4428
> Project: Apache NiFi
>  Issue Type: New Feature
>Affects Versions: 1.3.0
>Reporter: Vadim Vaks
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> Implement a PutDruid Processor and Controller using Tranquility API. This 
> will enable Nifi to index contents of flow files in Druid. The implementation 
> should also be able to handle late arriving data (event timestamp points to 
> Druid indexing task that has closed, segment granularity and grace window 
> period expired). Late arriving data is typically dropped. Nifi should allow 
> late arriving data to be diverted to FAILED or DROPPED relationship. That 
> would allow late arriving data to be stored on HDFS or S3 until a re-indexing 
> task can merge it into the correct segment in deep storage.





[GitHub] nifi issue #2181: NIFI-4428: - Implement PutDruid Processor and Controller

2018-02-01 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2181
  
@mattyb149 @joewitt PutDruid exists now. Should this PR be closed?


---


[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348556#comment-16348556
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2446
  
I have a patch that I've been working on for adding support for HBase 
visibility labels to the existing processors. Might want to think about how to 
integrate that into this processor.


> NIFI-4833 Add ScanHBase processor
> -
>
> Key: NIFI-4833
> URL: https://issues.apache.org/jira/browse/NIFI-4833
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
>
> Add ScanHBase (new) processor to retrieve records from HBase tables.
> Today there are GetHBase and FetchHBaseRow. GetHBase can pull an entire table or 
> only new rows after the processor has started; it also must be scheduled and 
> doesn't support incoming connections. FetchHBaseRow can pull only rows with 
> known rowkeys.
> This processor could provide functionality similar to what can be reached with 
> the hbase shell, by defining the following properties:
> - scan based on a range of row key IDs
> - scan based on a range of timestamps
> - limit on the number of records pulled
> - use of filters
> - reversed row order
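As a rough illustration of the hbase shell analogy the description draws (table name, rowkeys, timestamps, and filter below are made up), the equivalent shell scan combining those options would look like:

```
scan 'my_table', {STARTROW => 'row-0001', STOPROW => 'row-0100',
  TIMERANGE => [1514764800000, 1517443200000],
  LIMIT => 50, FILTER => "PrefixFilter('row-00')", REVERSED => true}
```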





[GitHub] nifi issue #2446: NIFI-4833 Add ScanHBase processor

2018-02-01 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2446
  
I have a patch that I've been working on for adding support for HBase 
visibility labels to the existing processors. Might want to think about how to 
integrate that into this processor.


---


[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348553#comment-16348553
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165338502
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,564 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.hbase.io.JsonFullRowSerializer;
+import org.apache.nifi.hbase.io.JsonQualifierAndValueRowSerializer;
+import org.apache.nifi.hbase.io.RowSerializer;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.Tuple;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hbase", "scan", "fetch", "get"})
+@CapabilityDescription("Scans and fetches rows from an HBase table. This 
processor may be used to fetch rows from hbase table by specifying a range of 
rowkey values (start and/or end ),"
++ "by time range, by filter expression, or any combination of 
them. \n"
++ "Order of records can be controlled by a property 
Reversed"
++ "Number of rows retrieved by the processor can be limited.")
+@WritesAttributes({
+@WritesAttribute(attribute = "hbase.table", description = "The 
name of the HBase table that the row was fetched from"),
+@WritesAttribute(attribute = "hbase.resultset", description = "A 
JSON document/s representing the row/s. This property is only written when a 
Destination of flowfile-attributes is selected."),
+@WritesAttribute(attribute = "mime.type", description = "Set to 
application/json when using a Destination of flowfile-content, not set or 
modified otherwise"),
+@WritesAttribute(attribute = "hbase.rows.count", description = 
"Number of rows in the content of given flow file"),
+@WritesAttribute(attribute = "scanhbase.results.found", 
description = "Indicates whether at least one row has been found in given hbase 
table with provided conditions. Could be null (not present) if transfered 
to FAILURE")
+})
+public class ScanHBase extends 

[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348552#comment-16348552
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165304550
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,564 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.hbase.io.JsonFullRowSerializer;
+import org.apache.nifi.hbase.io.JsonQualifierAndValueRowSerializer;
+import org.apache.nifi.hbase.io.RowSerializer;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.Tuple;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hbase", "scan", "fetch", "get"})
+@CapabilityDescription("Scans and fetches rows from an HBase table. This 
processor may be used to fetch rows from hbase table by specifying a range of 
rowkey values (start and/or end ),"
++ "by time range, by filter expression, or any combination of 
them. \n"
++ "Order of records can be controlled by a property 
Reversed"
++ "Number of rows retrieved by the processor can be limited.")
+@WritesAttributes({
+@WritesAttribute(attribute = "hbase.table", description = "The 
name of the HBase table that the row was fetched from"),
+@WritesAttribute(attribute = "hbase.resultset", description = "A 
JSON document/s representing the row/s. This property is only written when a 
Destination of flowfile-attributes is selected."),
+@WritesAttribute(attribute = "mime.type", description = "Set to 
application/json when using a Destination of flowfile-content, not set or 
modified otherwise"),
+@WritesAttribute(attribute = "hbase.rows.count", description = 
"Number of rows in the content of given flow file"),
+@WritesAttribute(attribute = "scanhbase.results.found", 
description = "Indicates whether at least one row has been found in given hbase 
table with provided conditions. Could be null (not present) if transfered 
to FAILURE")
+})
+public class ScanHBase extends 

[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348550#comment-16348550
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165304841
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,564 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.hbase.io.JsonFullRowSerializer;
+import org.apache.nifi.hbase.io.JsonQualifierAndValueRowSerializer;
+import org.apache.nifi.hbase.io.RowSerializer;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.Tuple;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hbase", "scan", "fetch", "get"})
+@CapabilityDescription("Scans and fetches rows from an HBase table. This 
processor may be used to fetch rows from hbase table by specifying a range of 
rowkey values (start and/or end ),"
++ "by time range, by filter expression, or any combination of 
them. \n"
++ "Order of records can be controlled by a property 
Reversed"
++ "Number of rows retrieved by the processor can be limited.")
+@WritesAttributes({
+@WritesAttribute(attribute = "hbase.table", description = "The 
name of the HBase table that the row was fetched from"),
+@WritesAttribute(attribute = "hbase.resultset", description = "A 
JSON document/s representing the row/s. This property is only written when a 
Destination of flowfile-attributes is selected."),
+@WritesAttribute(attribute = "mime.type", description = "Set to 
application/json when using a Destination of flowfile-content, not set or 
modified otherwise"),
+@WritesAttribute(attribute = "hbase.rows.count", description = 
"Number of rows in the content of given flow file"),
+@WritesAttribute(attribute = "scanhbase.results.found", 
description = "Indicates whether at least one row has been found in given hbase 
table with provided conditions. Could be null (not present) if transfered 
to FAILURE")
+})
+public class ScanHBase extends 

[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348551#comment-16348551
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165304629
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,564 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.hbase.io.JsonFullRowSerializer;
+import org.apache.nifi.hbase.io.JsonQualifierAndValueRowSerializer;
+import org.apache.nifi.hbase.io.RowSerializer;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.Tuple;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hbase", "scan", "fetch", "get"})
+@CapabilityDescription("Scans and fetches rows from an HBase table. This 
processor may be used to fetch rows from hbase table by specifying a range of 
rowkey values (start and/or end ),"
++ "by time range, by filter expression, or any combination of 
them. \n"
++ "Order of records can be controlled by a property 
Reversed"
++ "Number of rows retrieved by the processor can be limited.")
+@WritesAttributes({
+@WritesAttribute(attribute = "hbase.table", description = "The 
name of the HBase table that the row was fetched from"),
+@WritesAttribute(attribute = "hbase.resultset", description = "A 
JSON document/s representing the row/s. This property is only written when a 
Destination of flowfile-attributes is selected."),
+@WritesAttribute(attribute = "mime.type", description = "Set to 
application/json when using a Destination of flowfile-content, not set or 
modified otherwise"),
+@WritesAttribute(attribute = "hbase.rows.count", description = 
"Number of rows in the content of given flow file"),
+@WritesAttribute(attribute = "scanhbase.results.found", 
description = "Indicates whether at least one row has been found in given hbase 
table with provided conditions. Could be null (not present) if transfered 
to FAILURE")
+})
+public class ScanHBase extends 

[GitHub] nifi pull request #2446: NIFI-4833 Add ScanHBase processor

2018-02-01 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165304841
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,564 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
[GitHub] nifi pull request #2446: NIFI-4833 Add ScanHBase processor

2018-02-01 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165304629
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,564 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.hbase.io.JsonFullRowSerializer;
+import org.apache.nifi.hbase.io.JsonQualifierAndValueRowSerializer;
+import org.apache.nifi.hbase.io.RowSerializer;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.Tuple;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hbase", "scan", "fetch", "get"})
+@CapabilityDescription("Scans and fetches rows from an HBase table. This processor may be used to fetch rows from hbase table by specifying a range of rowkey values (start and/or end ),"
+        + "by time range, by filter expression, or any combination of them. \n"
+        + "Order of records can be controlled by a property Reversed"
+        + "Number of rows retrieved by the processor can be limited.")
+@WritesAttributes({
+        @WritesAttribute(attribute = "hbase.table", description = "The name of the HBase table that the row was fetched from"),
+        @WritesAttribute(attribute = "hbase.resultset", description = "A JSON document/s representing the row/s. This property is only written when a Destination of flowfile-attributes is selected."),
+        @WritesAttribute(attribute = "mime.type", description = "Set to application/json when using a Destination of flowfile-content, not set or modified otherwise"),
+        @WritesAttribute(attribute = "hbase.rows.count", description = "Number of rows in the content of given flow file"),
+        @WritesAttribute(attribute = "scanhbase.results.found", description = "Indicates whether at least one row has been found in given hbase table with provided conditions. Could be null (not present) if transfered to FAILURE")
+})
+public class ScanHBase extends AbstractProcessor {
+    // enhanced regex for columns to allow "-" in column qualifier names
+    static final Pattern COLUMNS_PATTERN = Pattern.compile("\\w+(:(\\w|-)+)?(?:,\\w+(:(\\w|-)+)?)*");
+    static final byte[] nl = 

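The COLUMNS_PATTERN regex in the quoted diff accepts a comma-separated list of column specs of the form `family` or `family:qualifier`, with `-` permitted inside qualifiers. A minimal, self-contained sketch of its behavior, using only `java.util.regex` and independent of the NiFi codebase (class and string values here are illustrative only):

```java
import java.util.regex.Pattern;

public class ColumnsPatternDemo {
    // Same expression as the quoted ScanHBase snippet: one or more
    // comma-separated "family" or "family:qualifier" entries, where
    // qualifiers may contain word characters or '-'.
    static final Pattern COLUMNS_PATTERN =
            Pattern.compile("\\w+(:(\\w|-)+)?(?:,\\w+(:(\\w|-)+)?)*");

    public static void main(String[] args) {
        // Full-string matches; prints true for each of these.
        System.out.println(COLUMNS_PATTERN.matcher("cf1").matches());
        System.out.println(COLUMNS_PATTERN.matcher("cf1:my-qual,cf2").matches());
        // A trailing colon or leading comma is rejected; prints false for each.
        System.out.println(COLUMNS_PATTERN.matcher("cf1:").matches());
        System.out.println(COLUMNS_PATTERN.matcher(",cf1").matches());
    }
}
```

Note the regex anchors nothing itself; the processor presumably relies on `Matcher.matches()` (full-string) semantics, as the demo does.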
[GitHub] nifi pull request #2446: NIFI-4833 Add ScanHBase processor

2018-02-01 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165338502
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---

[GitHub] nifi pull request #2446: NIFI-4833 Add ScanHBase processor

2018-02-01 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2446#discussion_r165304550
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---

[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348308#comment-16348308
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2446
  
I'll take a stab at reviewing, but you should git cherry pick this onto a 
git branch and not do it off your master branch.


> NIFI-4833 Add ScanHBase processor
> -
>
> Key: NIFI-4833
> URL: https://issues.apache.org/jira/browse/NIFI-4833
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
>
> Add ScanHBase (new) processor to retrieve records from HBase tables.
> Today there are GetHBase and FetchHBaseRow. GetHBase can pull an entire table or
> only new rows after the processor started; it also must be scheduled and doesn't
> support incoming flowfiles. FetchHBaseRow can pull rows with known rowkeys only.
> This processor could provide functionality similar to what can be reached by
> using the hbase shell, by defining the following properties:
> -scan based on a range of row key IDs
> -scan based on a range of time stamps
> -limit the number of records pulled
> -use filters
> -reverse rows
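The capabilities listed in the issue description roughly mirror what a single HBase shell `scan` can already express. As an illustrative sketch only (the table name, row keys, timestamps, and filter below are hypothetical, not taken from the PR):

```
hbase> scan 'my_table', {STARTROW => 'row-0001', STOPROW => 'row-0999',
    TIMERANGE => [1517443200000, 1517529600000],
    FILTER => "PrefixFilter('row-')",
    LIMIT => 25, REVERSED => true}
```

A ScanHBase processor would presumably expose each of these scan options (row key range, time range, filter expression, limit, reversal) as an individual processor property.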



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2446: NIFI-4833 Add ScanHBase processor

2018-02-01 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2446
  
I'll take a stab at reviewing, but you should git cherry pick this onto a 
git branch and not do it off your master branch.


---