[jira] [Commented] (NIFI-1192) Allow Get/PutKafka to honor dynamic properties

2015-11-19 Thread Tony Kurc (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015343#comment-15015343
 ] 

Tony Kurc commented on NIFI-1192:
-

[~joewitt] - if you recall me wailing in pain over the kerberos hadoop options, 
and having to copy/paste all over the place... are properties the best way to 
configure this, or would a configuration controller service be better? (Also, I 
recognize this should not hold up this ticket, but as you commented about a 
broker misbehaving, I recalled my hadoop pain and thought "if I had a bunch of 
kafka processors that all needed a property change to talk to that 
broker...")

> Allow Get/PutKafka to honor dynamic properties
> --
>
> Key: NIFI-1192
> URL: https://issues.apache.org/jira/browse/NIFI-1192
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Oleg Zhurakousky
>Assignee: Oleg Zhurakousky
>Priority: Critical
> Fix For: 0.4.0
>
>
> Currently the Kafka processors do not honor dynamic properties, which means 
> that aside from the 8 exposed properties, no others can be set



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1192) Allow Get/PutKafka to honor dynamic properties

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015328#comment-15015328
 ] 

ASF GitHub Bot commented on NIFI-1192:
--

Github user trkurc commented on the pull request:

https://github.com/apache/nifi/pull/129#issuecomment-158299735
  
I believe this description: "These properties will be set on the Kafka 
configuration after loading any provided configuration properties"

is a bit ... left open for interpretation (and maybe not grave enough?). Should 
it state something to the effect of "these properties will *override* any 
previously set Kafka configuration properties"? I couldn't come up with a 
wording I liked a lot because you could form a description using the word 
'properties' to mean several different things.
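
For concreteness, the override semantics being discussed amount to applying the 
user-added dynamic properties onto the Kafka configuration after the values 
derived from the built-in descriptors, so a colliding key such as 
zk.connectiontimeout.ms ends up with the dynamic value. A minimal sketch of that 
ordering, with hypothetical names rather than the actual Get/PutKafka code:

{code}
import java.util.Map;
import java.util.Properties;

class KafkaConfigOverrideSketch {
    // Hypothetical illustration of "dynamic properties override previously set
    // Kafka configuration properties": values from the built-in descriptors are
    // applied first, dynamic (user-added) properties are applied last and so win
    // on any key collision.
    static Properties buildConfig(Map<String, String> builtIn, Map<String, String> dynamic) {
        final Properties props = new Properties();
        props.putAll(builtIn);
        props.putAll(dynamic);
        return props;
    }
}
{code}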


> Allow Get/PutKafka to honor dynamic properties
> --
>
> Key: NIFI-1192
> URL: https://issues.apache.org/jira/browse/NIFI-1192
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Oleg Zhurakousky
>Assignee: Oleg Zhurakousky
>Priority: Critical
> Fix For: 0.4.0
>
>
> Currently the Kafka processors do not honor dynamic properties, which means 
> that aside from the 8 exposed properties, no others can be set



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1192) Allow Get/PutKafka to honor dynamic properties

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015318#comment-15015318
 ] 

ASF GitHub Bot commented on NIFI-1192:
--

Github user trkurc commented on the pull request:

https://github.com/apache/nifi/pull/129#issuecomment-158296710
  
Should the descriptions of the fields like "ZooKeeper Communications 
Timeout" include the property name (zk.connectiontimeout.ms) to make it obvious 
that they'll be clobbered if set as a dynamic property?


> Allow Get/PutKafka to honor dynamic properties
> --
>
> Key: NIFI-1192
> URL: https://issues.apache.org/jira/browse/NIFI-1192
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Oleg Zhurakousky
>Assignee: Oleg Zhurakousky
>Priority: Critical
> Fix For: 0.4.0
>
>
> Currently the Kafka processors do not honor dynamic properties, which means 
> that aside from the 8 exposed properties, no others can be set



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (NIFI-1094) Searching Provenance Repo by component name results in js error

2015-11-19 Thread Randy Gelhausen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Gelhausen resolved NIFI-1094.
---
Resolution: Cannot Reproduce

> Searching Provenance Repo by component name results in js error
> ---
>
> Key: NIFI-1094
> URL: https://issues.apache.org/jira/browse/NIFI-1094
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Randy Gelhausen
>Assignee: Matt Gilman
>
> Reproduction Steps:
> 1. Open the Provenance Repository UI
> 2. Type some component name
> 3. Open the JS debug console (Command+Shift+J on Macs)
> 4. Hit enter
> See the following error in the console:
> "Uncaught TypeError: Cannot read property 'search' of undefined"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1094) Searching Provenance Repo by component name results in js error

2015-11-19 Thread Randy Gelhausen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015123#comment-15015123
 ] 

Randy Gelhausen commented on NIFI-1094:
---

A build of 0.4 from last night no longer gives me this issue. Closing.

> Searching Provenance Repo by component name results in js error
> ---
>
> Key: NIFI-1094
> URL: https://issues.apache.org/jira/browse/NIFI-1094
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Randy Gelhausen
>Assignee: Matt Gilman
>
> Reproduction Steps:
> 1. Open the Provenance Repository UI
> 2. Type some component name
> 3. Open the JS debug console (Command+Shift+J on Macs)
> 4. Hit enter
> See the following error in the console:
> "Uncaught TypeError: Cannot read property 'search' of undefined"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (NIFI-1205) Allow GetFile's "File Filter" property to support expression language

2015-11-19 Thread Randy Gelhausen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Gelhausen resolved NIFI-1205.
---
Resolution: Duplicate

> Allow GetFile's "File Filter" property to support expression language
> -
>
> Key: NIFI-1205
> URL: https://issues.apache.org/jira/browse/NIFI-1205
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Randy Gelhausen
>Priority: Minor
>
> GetFile is useful as a scheduled processor, but currently assumes 
> foreknowledge of which files the user desires to match.
> There may be external signals, e.g. a process crash, that users may want to 
> use to trigger GetFile to pick up and do work on artifacts specific to the trigger.
> Adding expression language support for the File Filter property would allow 
> incoming flowfiles to point the processor at different files of interest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1205) Allow GetFile's "File Filter" property to support expression language

2015-11-19 Thread Randy Gelhausen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015094#comment-15015094
 ] 

Randy Gelhausen commented on NIFI-1205:
---

Agreed. Closing.

> Allow GetFile's "File Filter" property to support expression language
> -
>
> Key: NIFI-1205
> URL: https://issues.apache.org/jira/browse/NIFI-1205
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Randy Gelhausen
>Priority: Minor
>
> GetFile is useful as a scheduled processor, but currently assumes 
> foreknowledge of which files the user desires to match.
> There may be external signals, e.g. a process crash, that users may want to 
> use to trigger GetFile to pick up and do work on artifacts specific to the trigger.
> Adding expression language support for the File Filter property would allow 
> incoming flowfiles to point the processor at different files of interest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1205) Allow GetFile's "File Filter" property to support expression language

2015-11-19 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015078#comment-15015078
 ] 

Joseph Witt commented on NIFI-1205:
---

Randy, I think the pattern you seek will be better handled when ListFile and 
FetchFile are ready to roll.  Work for them is underway here: 
https://issues.apache.org/jira/browse/NIFI-631

Take a look and, if you agree, we can close this ticket out.

Thanks
Joe

> Allow GetFile's "File Filter" property to support expression language
> -
>
> Key: NIFI-1205
> URL: https://issues.apache.org/jira/browse/NIFI-1205
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Randy Gelhausen
>Priority: Minor
>
> GetFile is useful as a scheduled processor, but currently assumes 
> foreknowledge of which files the user desires to match.
> There may be external signals, e.g. a process crash, that users may want to 
> use to trigger GetFile to pick up and do work on artifacts specific to the trigger.
> Adding expression language support for the File Filter property would allow 
> incoming flowfiles to point the processor at different files of interest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (NIFI-3) Must handle exception better in WebClusterManager (esp. merging responses)

2015-11-19 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt resolved NIFI-3.

Resolution: Cannot Reproduce

> Must handle exception better in WebClusterManager (esp. merging responses)
> --
>
> Key: NIFI-3
> URL: https://issues.apache.org/jira/browse/NIFI-3
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Joseph Witt
>Priority: Minor
>
> in this case we received a socket timeout attempting to merge responses and 
> it wasn't being handled.
> Component: Core Framework



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3) Must handle exception better in WebClusterManager (esp. merging responses)

2015-11-19 Thread Matt Gilman (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015051#comment-15015051
 ] 

Matt Gilman commented on NIFI-3:


My guess is that this was coming from a case when a single node was taking a 
long time to respond. Since we were merging responses in this case, we would 
need to wait for all nodes to respond. When this happened, it sounds like we 
were returning a poor error message to the user. I don't recall ever addressing 
anything like this, but maybe someone did while addressing another ticket.

We could close as not an issue until we encounter it again (especially since 
the details here are short).

> Must handle exception better in WebClusterManager (esp. merging responses)
> --
>
> Key: NIFI-3
> URL: https://issues.apache.org/jira/browse/NIFI-3
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Joseph Witt
>Priority: Minor
>
> in this case we received a socket timeout attempting to merge responses and 
> it wasn't being handled.
> Component: Core Framework



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3) Must handle exception better in WebClusterManager (esp. merging responses)

2015-11-19 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015034#comment-15015034
 ] 

Joseph Witt commented on NIFI-3:


[~mcgilman]

Not sure actually.  The first 100 or so were from pre-Apache days.

> Must handle exception better in WebClusterManager (esp. merging responses)
> --
>
> Key: NIFI-3
> URL: https://issues.apache.org/jira/browse/NIFI-3
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Joseph Witt
>Priority: Minor
>
> in this case we received a socket timeout attempting to merge responses and 
> it wasn't being handled.
> Component: Core Framework



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3) Must handle exception better in WebClusterManager (esp. merging responses)

2015-11-19 Thread Tony Kurc (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014788#comment-15014788
 ] 

Tony Kurc commented on NIFI-3:
--

[~joewitt] is this still valid?

> Must handle exception better in WebClusterManager (esp. merging responses)
> --
>
> Key: NIFI-3
> URL: https://issues.apache.org/jira/browse/NIFI-3
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Joseph Witt
>Priority: Minor
>
> in this case we received a socket timeout attempting to merge responses and 
> it wasn't being handled.
> Component: Core Framework



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1205) Allow GetFile's "File Filter" property to support expression language

2015-11-19 Thread Randy Gelhausen (JIRA)
Randy Gelhausen created NIFI-1205:
-

 Summary: Allow GetFile's "File Filter" property to support 
expression language
 Key: NIFI-1205
 URL: https://issues.apache.org/jira/browse/NIFI-1205
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Randy Gelhausen
Priority: Minor


GetFile is useful as a scheduled processor, but currently assumes foreknowledge 
of which files the user desires to match.

There may be external signals, e.g. a process crash, that users may want to use 
to trigger GetFile to pick up and do work on artifacts specific to the trigger.

Adding expression language support for the File Filter property would allow 
incoming flowfiles to point the processor at different files of interest.
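
For concreteness, here is a rough sketch of what expression-language support on 
the File Filter property could look like in a NiFi processor of this era. The 
descriptor settings and the evaluation call below are assumed API usage inside a 
processor class, for illustration only, not GetFile's actual source:

{code}
// Assumed API usage, for illustration only -- not GetFile's actual code.
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

// inside the processor class:
public static final PropertyDescriptor FILE_FILTER = new PropertyDescriptor.Builder()
        .name("File Filter")
        .description("Regular expression for files to pick up; may reference FlowFile attributes")
        .required(true)
        .addValidator(StandardValidators.REGULAR_EXPRESSION_VALIDATOR)
        .expressionLanguageSupported(true)   // the change this ticket asks for
        .build();

// later, in onTrigger(...), the filter would be evaluated against the incoming FlowFile:
// final String filterRegex = context.getProperty(FILE_FILTER)
//         .evaluateAttributeExpressions(flowFile).getValue();
{code}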



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1196) FETCH Events are not properly handled in the framework and UI

2015-11-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014633#comment-15014633
 ] 

ASF subversion and git services commented on NIFI-1196:
---

Commit 08d59e437462dfcec3e1f7347bcd6941ee47818a in nifi's branch 
refs/heads/master from [~aldrin]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=08d59e4 ]

NIFI-1196 Providing handling of FETCH provenance events for their "unique" 
property, transit URI, within the framework and UI.

Reviewed by Tony Kurc (tk...@apache.org)


> FETCH Events are not properly handled in the framework and UI
> -
>
> Key: NIFI-1196
> URL: https://issues.apache.org/jira/browse/NIFI-1196
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Affects Versions: 0.3.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
> Fix For: 0.4.0
>
> Attachments: 
> 0001-NIFI-1196-Providing-handling-of-FETCH-provenance-eve.patch, 
> NIFI-1196.001.patch
>
>
> When the FETCH ProvenanceEventType was incorporated some of its handling was 
> not propagated throughout the framework.  This is chiefly inclusive of:
> * Presenting transit uri in the Provenance event info within the UI
> * Writing/Reading provenance events into the StandardRecordReader/Writer
> * Handling in the StandardProcessSession for counting flowfiles in/out and 
> their bytes
> The net result is that unique properties, in this case believed to be only the 
> transit URI, were never persisted and/or received.
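
In other words, once handled, a FETCH is expected to behave like a RECEIVE 
wherever content enters the flow from an external system: the transit URI must 
be persisted with the event and the session counters must include it. A 
condensed sketch of that counting rule, using a stand-in Counters holder rather 
than the actual StandardProcessSession fields:

{code}
import org.apache.nifi.provenance.ProvenanceEventRecord;

// Sketch only: Counters is a stand-in holder, not a NiFi class. The point is that
// a FETCH, like a RECEIVE, counts as data received by the component.
class Counters {
    long flowFilesSent, bytesSent, flowFilesReceived, bytesReceived;

    void update(final ProvenanceEventRecord event) {
        switch (event.getEventType()) {
            case SEND:
                flowFilesSent++;
                bytesSent += event.getFileSize();
                break;
            case RECEIVE:
            case FETCH:   // the case this ticket adds alongside RECEIVE
                flowFilesReceived++;
                bytesReceived += event.getFileSize();
                break;
            default:
                break;
        }
    }
}
{code}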



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


nifi git commit: NIFI-1196 Providing handling of FETCH provenance events for their "unique" property, transit URI, within the framework and UI.

2015-11-19 Thread aldrin
Repository: nifi
Updated Branches:
  refs/heads/master 40dd8a0a8 -> 08d59e437


NIFI-1196 Providing handling of FETCH provenance events for their "unique" 
property, transit URI, within the framework and UI.

Reviewed by Tony Kurc (tk...@apache.org)


Project: http://git-wip-us.apache.org/repos/asf/nifi/repo
Commit: http://git-wip-us.apache.org/repos/asf/nifi/commit/08d59e43
Tree: http://git-wip-us.apache.org/repos/asf/nifi/tree/08d59e43
Diff: http://git-wip-us.apache.org/repos/asf/nifi/diff/08d59e43

Branch: refs/heads/master
Commit: 08d59e437462dfcec3e1f7347bcd6941ee47818a
Parents: 40dd8a0
Author: Aldrin Piri 
Authored: Thu Nov 19 02:52:01 2015 -0500
Committer: Aldrin Piri 
Committed: Thu Nov 19 17:42:15 2015 -0500

--
 .../java/org/apache/nifi/provenance/StandardLineageResult.java  | 1 +
 .../apache/nifi/provenance/StandardProvenanceEventRecord.java   | 1 +
 .../nifi/controller/repository/StandardProcessSession.java  | 4 +++-
 .../src/main/webapp/js/nf/provenance/nf-provenance-table.js | 5 +
 .../java/org/apache/nifi/provenance/StandardRecordReader.java   | 2 ++
 .../java/org/apache/nifi/provenance/StandardRecordWriter.java   | 2 ++
 6 files changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/nifi/blob/08d59e43/nifi-commons/nifi-data-provenance-utils/src/main/java/org/apache/nifi/provenance/StandardLineageResult.java
--
diff --git 
a/nifi-commons/nifi-data-provenance-utils/src/main/java/org/apache/nifi/provenance/StandardLineageResult.java
 
b/nifi-commons/nifi-data-provenance-utils/src/main/java/org/apache/nifi/provenance/StandardLineageResult.java
index 63c53d0..cf16fc0 100644
--- 
a/nifi-commons/nifi-data-provenance-utils/src/main/java/org/apache/nifi/provenance/StandardLineageResult.java
+++ 
b/nifi-commons/nifi-data-provenance-utils/src/main/java/org/apache/nifi/provenance/StandardLineageResult.java
@@ -288,6 +288,7 @@ public class StandardLineageResult implements 
ComputeLineageResult {
 }
 break;
 case RECEIVE:
+case FETCH:
 case CREATE: {
 // for a receive event, we want to create a FlowFile Node 
that represents the FlowFile received
 // and create an edge from the Receive Event to the 
FlowFile Node

http://git-wip-us.apache.org/repos/asf/nifi/blob/08d59e43/nifi-commons/nifi-data-provenance-utils/src/main/java/org/apache/nifi/provenance/StandardProvenanceEventRecord.java
--
diff --git 
a/nifi-commons/nifi-data-provenance-utils/src/main/java/org/apache/nifi/provenance/StandardProvenanceEventRecord.java
 
b/nifi-commons/nifi-data-provenance-utils/src/main/java/org/apache/nifi/provenance/StandardProvenanceEventRecord.java
index 4eb7001..b504b04 100644
--- 
a/nifi-commons/nifi-data-provenance-utils/src/main/java/org/apache/nifi/provenance/StandardProvenanceEventRecord.java
+++ 
b/nifi-commons/nifi-data-provenance-utils/src/main/java/org/apache/nifi/provenance/StandardProvenanceEventRecord.java
@@ -728,6 +728,7 @@ public final class StandardProvenanceEventRecord implements 
ProvenanceEventRecor
 }
 break;
 case RECEIVE:
+case FETCH:
 case SEND:
 assertSet(transitUri, "Transit URI");
 break;

http://git-wip-us.apache.org/repos/asf/nifi/blob/08d59e43/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java
--
diff --git 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java
 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java
index 2ab90cd..d447ddd 100644
--- 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java
+++ 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java
@@ -457,6 +457,7 @@ public final class StandardProcessSession implements 
ProcessSession, ProvenanceE
 bytesSent += event.getFileSize();
 break;
 case RECEIVE:
+case FETCH:
 flowFilesReceived++;
 bytesReceived += event.getFileSize();
 break;
@@ -616,7 +617,8 @@ public final class Stand

[jira] [Created] (NIFI-1204) Improve cluster logging message when mismatch prevents Node from joining.

2015-11-19 Thread Matthew Clarke (JIRA)
Matthew Clarke created NIFI-1204:


 Summary: Improve cluster logging message when mismatch prevents 
Node from joining.
 Key: NIFI-1204
 URL: https://issues.apache.org/jira/browse/NIFI-1204
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 0.3.0
Reporter: Matthew Clarke
Priority: Minor


Nodes cannot join a cluster if either their flow.xml or their templates do not 
match what is on the NCM.

The current ERROR message is the same for either case:

ERROR [NiFi logging handler] org.apache.nifi.StdErr Failed to start web server: 
Unable to load flow due to: java.io.IOException: 
org.apache.nifi.cluster.ConnectionException: Failed to connect node to cluster 
because local flow is different than cluster flow. 

This leads to confusion when the templates are the real mismatch.  These should 
be validated separately and an appropriate error message logged indicating 
which does not match.
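
A hedged sketch of the separate checks being proposed; every name below is a 
stand-in for illustration, not the actual cluster-connection code:

{code}
import java.util.Arrays;

class JoinValidationSketch {
    static class MismatchException extends RuntimeException {
        MismatchException(final String msg) { super(msg); }
    }

    // Checking the flow and the templates separately lets the error say which one
    // differs, instead of the single "local flow is different than cluster flow" message.
    void verifyLocalMatchesCluster(final byte[] localFlow, final byte[] clusterFlow,
                                   final byte[] localTemplates, final byte[] clusterTemplates) {
        if (!Arrays.equals(localFlow, clusterFlow)) {
            throw new MismatchException("local flow is different than cluster flow");
        }
        if (!Arrays.equals(localTemplates, clusterTemplates)) {
            throw new MismatchException("local templates are different than cluster templates");
        }
    }
}
{code}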



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1196) FETCH Events are not properly handled in the framework and UI

2015-11-19 Thread Tony Kurc (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014512#comment-15014512
 ] 

Tony Kurc commented on NIFI-1196:
-

so, I think that means "+1 if this is a copy/paste error and it is fixed. Please 
explain why you have identical conditions in an if otherwise"

> FETCH Events are not properly handled in the framework and UI
> -
>
> Key: NIFI-1196
> URL: https://issues.apache.org/jira/browse/NIFI-1196
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Affects Versions: 0.3.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
> Fix For: 0.4.0
>
> Attachments: 
> 0001-NIFI-1196-Providing-handling-of-FETCH-provenance-eve.patch, 
> NIFI-1196.001.patch
>
>
> When the FETCH ProvenanceEventType was incorporated some of its handling was 
> not propagated throughout the framework.  This is chiefly inclusive of:
> * Presenting transit uri in the Provenance event info within the UI
> * Writing/Reading provenance events into the StandardRecordReader/Writer
> * Handling in the StandardProcessSession for counting flowfiles in/out and 
> their bytes
> The net result is that unique properties, in this case believed to be only the 
> transit URI, were never persisted and/or received.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1196) FETCH Events are not properly handled in the framework and UI

2015-11-19 Thread Aldrin Piri (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014498#comment-15014498
 ] 

Aldrin Piri commented on NIFI-1196:
---

Awesome, thanks.  Will do.

> FETCH Events are not properly handled in the framework and UI
> -
>
> Key: NIFI-1196
> URL: https://issues.apache.org/jira/browse/NIFI-1196
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Affects Versions: 0.3.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
> Fix For: 0.4.0
>
> Attachments: 
> 0001-NIFI-1196-Providing-handling-of-FETCH-provenance-eve.patch, 
> NIFI-1196.001.patch
>
>
> When the FETCH ProvenanceEventType was incorporated some of its handling was 
> not propagated throughout the framework.  This is chiefly inclusive of:
> * Presenting transit uri in the Provenance event info within the UI
> * Writing/Reading provenance events into the StandardRecordReader/Writer
> * Handling in the StandardProcessSession for counting flowfiles in/out and 
> their bytes
> The net result is that unique properties, in this case believed to be only the 
> transit URI, were never persisted and/or received.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1196) FETCH Events are not properly handled in the framework and UI

2015-11-19 Thread Aldrin Piri (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014495#comment-15014495
 ] 

Aldrin Piri commented on NIFI-1196:
---

You are correct with that assumption.  I am going to blame it on autocompleting 
too heavily in the wee hours of the morning.  Will adjust and then commit as it 
seems like we are good otherwise.

> FETCH Events are not properly handled in the framework and UI
> -
>
> Key: NIFI-1196
> URL: https://issues.apache.org/jira/browse/NIFI-1196
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Affects Versions: 0.3.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
> Fix For: 0.4.0
>
> Attachments: 
> 0001-NIFI-1196-Providing-handling-of-FETCH-provenance-eve.patch, 
> NIFI-1196.001.patch
>
>
> When the FETCH ProvenanceEventType was incorporated some of its handling was 
> not propagated throughout the framework.  This is chiefly inclusive of:
> * Presenting transit uri in the Provenance event info within the UI
> * Writing/Reading provenance events into the StandardRecordReader/Writer
> * Handling in the StandardProcessSession for counting flowfiles in/out and 
> their bytes
> The net result is that unique properties, in this case believed to be only the 
> transit URI, were never persisted and/or received.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1196) FETCH Events are not properly handled in the framework and UI

2015-11-19 Thread Tony Kurc (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014493#comment-15014493
 ] 

Tony Kurc commented on NIFI-1196:
-

Attached a patch which fixes the typo if I'm right. If you apply, should be 
'fixup' squashed on yours.

> FETCH Events are not properly handled in the framework and UI
> -
>
> Key: NIFI-1196
> URL: https://issues.apache.org/jira/browse/NIFI-1196
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Affects Versions: 0.3.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
> Fix For: 0.4.0
>
> Attachments: 
> 0001-NIFI-1196-Providing-handling-of-FETCH-provenance-eve.patch, 
> NIFI-1196.001.patch
>
>
> When the FETCH ProvenanceEventType was incorporated some of its handling was 
> not propagated throughout the framework.  This is chiefly inclusive of:
> * Presenting transit uri in the Provenance event info within the UI
> * Writing/Reading provenance events into the StandardRecordReader/Writer
> * Handling in the StandardProcessSession for counting flowfiles in/out and 
> their bytes
> The net result is that unique properties, in this case believed to be only the 
> transit URI, were never persisted and/or received.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1196) FETCH Events are not properly handled in the framework and UI

2015-11-19 Thread Tony Kurc (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tony Kurc updated NIFI-1196:

Attachment: NIFI-1196.001.patch

> FETCH Events are not properly handled in the framework and UI
> -
>
> Key: NIFI-1196
> URL: https://issues.apache.org/jira/browse/NIFI-1196
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Affects Versions: 0.3.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
> Fix For: 0.4.0
>
> Attachments: 
> 0001-NIFI-1196-Providing-handling-of-FETCH-provenance-eve.patch, 
> NIFI-1196.001.patch
>
>
> When the FETCH ProvenanceEventType was incorporated some of its handling was 
> not propagated throughout the framework.  This is chiefly inclusive of:
> * Presenting transit uri in the Provenance event info within the UI
> * Writing/Reading provenance events into the StandardRecordReader/Writer
> * Handling in the StandardProcessSession for counting flowfiles in/out and 
> their bytes
> The net result is that unique properties, in this case believed to be only the 
> transit URI, were never persisted and/or received.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1196) FETCH Events are not properly handled in the framework and UI

2015-11-19 Thread Tony Kurc (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014464#comment-15014464
 ] 

Tony Kurc commented on NIFI-1196:
-

I don't think I can argue with the logic, presuming the answer to the above 
question is "oops, thats a typo". Patch applies clean, no rat or checkstyle 
problems, no test failures.

> FETCH Events are not properly handled in the framework and UI
> -
>
> Key: NIFI-1196
> URL: https://issues.apache.org/jira/browse/NIFI-1196
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Affects Versions: 0.3.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
> Fix For: 0.4.0
>
> Attachments: 
> 0001-NIFI-1196-Providing-handling-of-FETCH-provenance-eve.patch
>
>
> When the FETCH ProvenanceEventType was incorporated some of its handling was 
> not propagated throughout the framework.  This is chiefly inclusive of:
> * Presenting transit uri in the Provenance event info within the UI
> * Writing/Reading provenance events into the StandardRecordReader/Writer
> * Handling in the StandardProcessSession for counting flowfiles in/out and 
> their bytes
> The net result is that unique properties, in this case believed to be only the 
> transit URI, were never persisted and/or received.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1203) Processors that require input still show as valid with only self-looping connections

2015-11-19 Thread Mark Payne (JIRA)
Mark Payne created NIFI-1203:


 Summary: Processors that require input still show as valid with 
only self-looping connections
 Key: NIFI-1203
 URL: https://issues.apache.org/jira/browse/NIFI-1203
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.4.0
Reporter: Mark Payne
 Fix For: 0.4.0


To demonstrate this, create a Processor such as EncryptContent. Configure a 
password and connect its 'success' Relationship to another Processor. Connect 
the 'failure' relationship back to itself. The processor now indicates that it 
is valid as it has an incoming connection. However, since the 'failure' 
connection is a self-loop, it should not count.
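
A hedged sketch of the validity rule being requested, with stand-in types rather 
than the framework's actual connection model: a processor that requires input 
should only be satisfied by an incoming connection whose source is some other 
component.

{code}
import java.util.List;

class SelfLoopValiditySketch {
    interface Connection {
        String getSourceId();
    }

    // Returns true only if at least one incoming connection originates from a
    // different component; self-loops alone do not satisfy "requires input".
    static boolean hasNonLoopIncomingConnection(final String processorId,
                                                final List<Connection> incoming) {
        for (final Connection connection : incoming) {
            if (!processorId.equals(connection.getSourceId())) {
                return true;
            }
        }
        return false;
    }
}
{code}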



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1202) Allow user to configure Batch Size for site-to-site

2015-11-19 Thread Mark Payne (JIRA)
Mark Payne created NIFI-1202:


 Summary: Allow user to configure Batch Size for site-to-site
 Key: NIFI-1202
 URL: https://issues.apache.org/jira/browse/NIFI-1202
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework, Core UI, Documentation & Website
Reporter: Mark Payne
 Fix For: 0.5.0


Currently, there is no way for a user to specify the batch size that 
Site-to-Site will use. The framework decides this for you. However, if we want 
to use the List/Fetch Pattern, it will be helpful to specify a small batch size 
so that a small number of things that are listed are still well distributed 
across the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1201) Allow ExecuteSQL to run queries with that use a variable timestamp or sequence id

2015-11-19 Thread Randy Gelhausen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Gelhausen updated NIFI-1201:
--
Priority: Minor  (was: Major)

> Allow ExecuteSQL to run queries with that use a variable timestamp or 
> sequence id
> -
>
> Key: NIFI-1201
> URL: https://issues.apache.org/jira/browse/NIFI-1201
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Randy Gelhausen
>Priority: Minor
>
> Users are employing ExecuteSQL as a means to schedule periodic queries 
> against remote databases. Other tools that do this type of task include the 
> ability to maintain and automatically increment a sequence or timestamp used 
> in query predicates.
> For example:
> select * from src_table where created_at > "2015-11-19 12:00:00"
> Then a minute later:
> select * from src_table where created_at > "2015-11-19 12:01:00"
> Or:
> insert into my_table values (${prev_id}+1, ${now()})
> Today users can implement the same logic with a series of processors, but 
> much work could be saved by allowing ExecuteSQL to maintain these bits of 
> state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1201) Allow ExecuteSQL to run queries with that use a variable timestamp or sequence id

2015-11-19 Thread Randy Gelhausen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Gelhausen updated NIFI-1201:
--
Issue Type: Improvement  (was: Bug)

> Allow ExecuteSQL to run queries with that use a variable timestamp or 
> sequence id
> -
>
> Key: NIFI-1201
> URL: https://issues.apache.org/jira/browse/NIFI-1201
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Randy Gelhausen
>
> Users are employing ExecuteSQL as a means to schedule periodic queries 
> against remote databases. Other tools that do this type of task include the 
> ability to maintain and automatically increment a sequence or timestamp used 
> in query predicates.
> For example:
> select * from src_table where created_at > "2015-11-19 12:00:00"
> Then a minute later:
> select * from src_table where created_at > "2015-11-19 12:01:00"
> Or:
> insert into my_table values (${prev_id}+1, ${now()})
> Today users can implement the same logic with a series of processors, but 
> much work could be saved by allowing ExecuteSQL to maintain these bits of 
> state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1201) Allow ExecuteSQL to run queries with that use a variable timestamp or sequence id

2015-11-19 Thread Randy Gelhausen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Gelhausen updated NIFI-1201:
--
Summary: Allow ExecuteSQL to run queries with that use a variable timestamp 
or sequence id  (was: Allow ExecuteSQL to run queries with that use a variable 
timestamp or sequenced)

> Allow ExecuteSQL to run queries with that use a variable timestamp or 
> sequence id
> -
>
> Key: NIFI-1201
> URL: https://issues.apache.org/jira/browse/NIFI-1201
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Randy Gelhausen
>
> Users are employing ExecuteSQL as a means to schedule periodic queries 
> against remote databases. Other tools that do this type of task include the 
> ability to maintain and automatically increment a sequence or timestamp used 
> in query predicates.
> For example:
> select * from src_table where created_at > "2015-11-19 12:00:00"
> Then a minute later:
> select * from src_table where created_at > "2015-11-19 12:01:00"
> Or:
> insert into my_table values (${prev_id}+1, ${now()})
> Today users can implement the same logic with a series of processors, but 
> much work could be saved by allowing ExecuteSQL to maintain these bits of 
> state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1201) Allow ExecuteSQL to run queries with that use a variable timestamp or sequenced

2015-11-19 Thread Randy Gelhausen (JIRA)
Randy Gelhausen created NIFI-1201:
-

 Summary: Allow ExecuteSQL to run queries with that use a variable 
timestamp or sequenced
 Key: NIFI-1201
 URL: https://issues.apache.org/jira/browse/NIFI-1201
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Randy Gelhausen


Users are employing ExecuteSQL as a means to schedule periodic queries against 
remote databases. Other tools that do this type of task include the ability to 
maintain and automatically increment a sequence or timestamp used in query 
predicates.

For example:
select * from src_table where created_at > "2015-11-19 12:00:00"

Then a minute later:
select * from src_table where created_at > "2015-11-19 12:01:00"

Or:
insert into my_table values (${prev_id}+1, ${now()})

Today users can implement the same logic with a series of processors, but much 
work could be saved by allowing ExecuteSQL to maintain these bits of state.
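
To illustrate what "maintain these bits of state" could mean in practice, here 
is a hedged sketch (not ExecuteSQL code): remember the high-water mark from the 
previous run and substitute it into the next query's predicate.

{code}
import java.sql.Timestamp;

// Illustrative only; a real processor would also need to persist this value so it
// survives restarts, and would bind it as a query parameter rather than concatenating.
class IncrementalQuerySketch {
    private Timestamp lastSeen = Timestamp.valueOf("1970-01-01 00:00:00");

    String nextQuery() {
        return "select * from src_table where created_at > '" + lastSeen + "'";
    }

    // called after each successful run with the newest created_at value observed
    void advanceTo(final Timestamp newestObserved) {
        if (newestObserved.after(lastSeen)) {
            lastSeen = newestObserved;
        }
    }
}
{code}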



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1196) FETCH Events are not properly handled in the framework and UI

2015-11-19 Thread Tony Kurc (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014420#comment-15014420
 ] 

Tony Kurc commented on NIFI-1196:
-

[~aldrin] This guy: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java

Is this supposed to be FETCH?
{code}
+|| registeredTypes.contains(ProvenanceEventType.FORK)) {
{code}

> FETCH Events are not properly handled in the framework and UI
> -
>
> Key: NIFI-1196
> URL: https://issues.apache.org/jira/browse/NIFI-1196
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Affects Versions: 0.3.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
> Fix For: 0.4.0
>
> Attachments: 
> 0001-NIFI-1196-Providing-handling-of-FETCH-provenance-eve.patch
>
>
> When the FETCH ProvenanceEventType was incorporated some of its handling was 
> not propagated throughout the framework.  This is chiefly inclusive of:
> * Presenting transit uri in the Provenance event info within the UI
> * Writing/Reading provenance events into the StandardRecordReader/Writer
> * Handling in the StandardProcessSession for counting flowfiles in/out and 
> their bytes
> The net result is that unique properties, in this case believed to be only the 
> transit URI, were never persisted and/or received.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1196) FETCH Events are not properly handled in the framework and UI

2015-11-19 Thread Tony Kurc (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014356#comment-15014356
 ] 

Tony Kurc commented on NIFI-1196:
-

reviewing now

> FETCH Events are not properly handled in the framework and UI
> -
>
> Key: NIFI-1196
> URL: https://issues.apache.org/jira/browse/NIFI-1196
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Affects Versions: 0.3.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
> Fix For: 0.4.0
>
> Attachments: 
> 0001-NIFI-1196-Providing-handling-of-FETCH-provenance-eve.patch
>
>
> When the FETCH ProvenanceEventType was incorporated some of its handling was 
> not propagated throughout the framework.  This is chiefly inclusive of:
> * Presenting transit uri in the Provenance event info within the UI
> * Writing/Reading provenance events into the StandardRecordReader/Writer
> * Handling in the StandardProcessSession for counting flowfiles in/out and 
> their bytes
> The net result is that unique properties, in this case believed to be only the 
> transit URI, were never persisted and/or received.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1200) FileSystemRepository saturates CPU when archive directories are empty

2015-11-19 Thread Oleg Zhurakousky (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014336#comment-15014336
 ] 

Oleg Zhurakousky commented on NIFI-1200:


Thanks Adam!

> FileSystemRepository saturates CPU when archive directories are empty
> -
>
> Key: NIFI-1200
> URL: https://issues.apache.org/jira/browse/NIFI-1200
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Oleg Zhurakousky
>Priority: Minor
> Fix For: 0.5.0
>
>
> Was reported in the dev thread by adamond...@gmail.com
> The piece of code responsible is:
> {code}
> for (int i = 0; i < SECTIONS_PER_CONTAINER; i++) {
>     . . .
>     if (!Files.exists(archive)) {
>         continue;
>     }
>     . . .
> }
> {code}
> ... where the continue happens without any delay.
> It was also confirmed by Adam that a small Thread.sleep(..) takes care of the 
> problem. What puzzles me is that the loop itself has a finite end, so I need to 
> look at how the parent operation is invoked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1200) FileSystemRepository saturates CPU when archive directories are empty

2015-11-19 Thread Adam Lamar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014320#comment-15014320
 ] 

Adam Lamar commented on NIFI-1200:
--

[~ozhurakousky] Thanks for filing this issue! Please note that this issue was 
much worse when running on OpenJDK. CPU usage held around 20-30% without the 
sleep, but the Oracle JDK wasn't nearly as busy.

> FileSystemRepository saturates CPU when archive directories are empty
> -
>
> Key: NIFI-1200
> URL: https://issues.apache.org/jira/browse/NIFI-1200
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Oleg Zhurakousky
>Priority: Minor
> Fix For: 0.5.0
>
>
> Was reported in the dev thread by adamond...@gmail.com
> The piece of code responsible is:
> {code}
> for (int i = 0; i < SECTIONS_PER_CONTAINER; i++) {
>     . . .
>     if (!Files.exists(archive)) {
>         continue;
>     }
>     . . .
> }
> {code}
> ... where the continue happens without any delay.
> It was also confirmed by Adam that a small Thread.sleep(..) takes care of the 
> problem. What puzzles me is that the loop itself has a finite end, so I need to 
> look at how the parent operation is invoked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1193) Add Hive support to Kite storage processor

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014235#comment-15014235
 ] 

ASF GitHub Bot commented on NIFI-1193:
--

Github user joey commented on the pull request:

https://github.com/apache/nifi/pull/128#issuecomment-158172948
  
Here's what we've been using to avoid `hive-exec`:

```
<dependency>
    <groupId>org.apache.hive.hcatalog</groupId>
    <artifactId>hive-hcatalog-core</artifactId>
    <exclusions>
        <exclusion>
            <groupId>com.google.code.findbugs</groupId>
            <artifactId>jsr305</artifactId>
        </exclusion>
        <exclusion>
            <artifactId>jersey-servlet</artifactId>
            <groupId>com.sun.jersey</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jersey-core</artifactId>
            <groupId>com.sun.jersey</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jersey-server</artifactId>
            <groupId>com.sun.jersey</groupId>
        </exclusion>
        <exclusion>
            <artifactId>servlet-api</artifactId>
            <groupId>javax.servlet</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jetty-all</artifactId>
            <groupId>org.eclipse.jetty.aggregate</groupId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
        </exclusion>
        <exclusion>
            <artifactId>parquet-hadoop-bundle</artifactId>
            <groupId>com.twitter</groupId>
        </exclusion>
    </exclusions>
</dependency>
```


> Add Hive support to Kite storage processor
> --
>
> Key: NIFI-1193
> URL: https://issues.apache.org/jira/browse/NIFI-1193
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Fix For: 0.5.0
>
>
> When the Kite processors were initially added in NIFI-238, we removed support 
> for sending data directly to Hive tables because the dependencies were too 
> large. Contacting the Hive MetaStore pulled in all of hive-exec and 
> hive-metastore. I've created an alternative that increases the size by only 
> 6.7MB (about 10% of what it was before).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1188) Update developer guide to capture the added functionality of having nonloop connections

2015-11-19 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated NIFI-1188:
--
Fix Version/s: (was: 0.4.0)

> Update developer guide to capture the added functionality of having nonloop 
> connections
> ---
>
> Key: NIFI-1188
> URL: https://issues.apache.org/jira/browse/NIFI-1188
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Documentation & Website
>Affects Versions: 0.3.0
>Reporter: Aldrin Piri
>
> Reflect the work of NIFI-1168 in the Developer Guide in terms of when 
> processors have work and introduce the concept of hasNonLoopConnection as 
> exhibited in ExecuteSQL



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1200) FileSystemRepository saturates CPU when archive directories are empty

2015-11-19 Thread Oleg Zhurakousky (JIRA)
Oleg Zhurakousky created NIFI-1200:
--

 Summary: FileSystemRepository saturates CPU when archive 
directories are empty
 Key: NIFI-1200
 URL: https://issues.apache.org/jira/browse/NIFI-1200
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 0.3.0
Reporter: Oleg Zhurakousky
Priority: Minor
 Fix For: 0.5.0


Was reported in the dev thread by adamond...@gmail.com
The piece of code responsible is:
{code}
for (int i = 0; i < SECTIONS_PER_CONTAINER; i++) {
    . . .
    if (!Files.exists(archive)) {
        continue;
    }
    . . .
}
{code}
... where the continue happens without any delay.
It was also confirmed by Adam that a small Thread.sleep(..) takes care of the 
problem. What puzzles me is that the loop itself has a finite end, so I need to 
look at how the parent operation is invoked.
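
For clarity, the mitigation Adam tested amounts to pausing briefly whenever a 
section's archive directory is missing, so an all-empty container does not spin 
the loop at full speed. A hedged sketch of that idea, with stand-in names rather 
than the committed FileSystemRepository change:

{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

class ArchiveScanSketch {
    static final int SECTIONS_PER_CONTAINER = 1024;  // stand-in constant

    void scanArchives() throws InterruptedException {
        for (int i = 0; i < SECTIONS_PER_CONTAINER; i++) {
            final Path archive = getArchiveDirectory(i);
            if (!Files.exists(archive)) {
                // A brief pause keeps an all-empty container from saturating the CPU,
                // mirroring the small Thread.sleep(..) confirmed to help.
                Thread.sleep(10L);
                continue;
            }
            processSection(archive);  // age off / destroy expired archive files here
        }
    }

    Path getArchiveDirectory(final int section) {
        return Paths.get("archive", String.valueOf(section));  // stand-in helper
    }

    void processSection(final Path archive) {
        // ...
    }
}
{code}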



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1199) Display processor name/type in the lineage diagram tooltips

2015-11-19 Thread Andrew Grande (JIRA)
Andrew Grande created NIFI-1199:
---

 Summary: Display processor name/type in the lineage diagram 
tooltips
 Key: NIFI-1199
 URL: https://issues.apache.org/jira/browse/NIFI-1199
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core UI
Affects Versions: 0.4.0
Reporter: Andrew Grande
Priority: Trivial


When a lineage graph is charted there's no way to quickly see which component 
took action at each step. One has to right-click -> View details.

*Proposal*: add a hover tooltip for each step which would display "processor 
name/processor type". Both of these bits are already available in the View 
Details screen.

I understand the performance optimization concerns; the call can be performed 
on demand, only when the user physically moves the cursor over a node in the 
graph.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (NIFI-1174) Create a Put HBase processor that can put multiple cells

2015-11-19 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFI-1174.
---
Resolution: Fixed

> Create a Put HBase processor that can put multiple cells
> 
>
> Key: NIFI-1174
> URL: https://issues.apache.org/jira/browse/NIFI-1174
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 0.4.0
>
> Attachments: NIFI-1174-Complex-Field-Improvements.patch, 
> NIFI-1174.patch
>
>
> We recently added a PutHBaseCell processor which works great for writing one 
> individual cell at a time, but it can require a significant amount of work in 
> a flow to create a row with multiple cells. 
> We should support a variation of this processor that can accept a flow file 
> with key/value pairs in the content of the flow file (possibly JSON). The 
> key/value pairs would then be turned into the cells for the given row and 
> added in one put operation.
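
As a rough sketch of that mapping -- each key/value pair from the parsed content 
becomes one cell, and all cells for the row are grouped into a single put -- 
using the PutColumn and PutFlowFile constructors visible in this commit's tests; 
the row key, column family, and helper names are assumptions:

{code}
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Map;

import org.apache.nifi.hbase.put.PutColumn;
import org.apache.nifi.hbase.put.PutFlowFile;

class JsonRowSketch {
    static PutFlowFile toSingleRowPut(final String tableName, final String rowKey,
                                      final String columnFamily, final Map<String, String> fields) {
        final Collection<PutColumn> columns = new ArrayList<>();
        for (final Map.Entry<String, String> field : fields.entrySet()) {
            // each key becomes a column qualifier, each value becomes the cell content
            columns.add(new PutColumn(columnFamily, field.getKey(),
                    field.getValue().getBytes(StandardCharsets.UTF_8)));
        }
        // all cells for the row travel together so they can be added in one put operation
        return new PutFlowFile(tableName, rowKey, columns, null);
    }
}
{code}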



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1174) Create a Put HBase processor that can put multiple cells

2015-11-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014156#comment-15014156
 ] 

ASF subversion and git services commented on NIFI-1174:
---

Commit 40dd8a0a845ef5f4d4fde451f02376ab2fab9758 in nifi's branch 
refs/heads/master from [~bbende]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=40dd8a0 ]

NIFI-1174 Refactoring the HBase client API and adding a PutHBaseJSON which can 
write a whole row from a single json document - Adding Complex Field Strategy 
to PutHBaseJSON to allow more control of complex fields - Improving error 
messages to indicate what the problem was with an invalid row

Signed-off-by: Bryan Bende 


> Create a Put HBase processor that can put multiple cells
> 
>
> Key: NIFI-1174
> URL: https://issues.apache.org/jira/browse/NIFI-1174
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 0.4.0
>
> Attachments: NIFI-1174-Complex-Field-Improvements.patch, 
> NIFI-1174.patch
>
>
> We recently added a PutHBaseCell processor which works great for writing one 
> individual cell at a time, but it can require a significant amount of work in 
> a flow to create a row with multiple cells. 
> We should support a variation of this processor that can accept a flow file 
> with key/value pairs in the content of the flow file (possibly JSON). The 
> key/value pairs would then be turned into the cells for the given row and 
> added in one put operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] nifi git commit: NIFI-1174 Refactoring the HBase client API and adding a PutHBaseJSON which can write a whole row from a single json document - Adding Complex Field Strategy to PutHBaseJSON to a

2015-11-19 Thread bbende
Repository: nifi
Updated Branches:
  refs/heads/master 8c2323dc8 -> 40dd8a0a8


http://git-wip-us.apache.org/repos/asf/nifi/blob/40dd8a0a/nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/test/java/org/apache/nifi/hbase/TestHBase_1_1_2_ClientService.java
--
diff --git 
a/nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/test/java/org/apache/nifi/hbase/TestHBase_1_1_2_ClientService.java
 
b/nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/test/java/org/apache/nifi/hbase/TestHBase_1_1_2_ClientService.java
index 1575f3c..513ea9c 100644
--- 
a/nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/test/java/org/apache/nifi/hbase/TestHBase_1_1_2_ClientService.java
+++ 
b/nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/test/java/org/apache/nifi/hbase/TestHBase_1_1_2_ClientService.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.filter.Filter;
 import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.hbase.put.PutColumn;
 import org.apache.nifi.hbase.put.PutFlowFile;
 import org.apache.nifi.hbase.scan.Column;
 import org.apache.nifi.hbase.scan.ResultCell;
@@ -41,6 +42,7 @@ import java.nio.charset.StandardCharsets;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.LinkedHashMap;
 import java.util.List;
@@ -130,8 +132,9 @@ public class TestHBase_1_1_2_ClientService {
 final String columnQualifier = "qualifier1";
 final String content = "content1";
 
-final PutFlowFile putFlowFile = new PutFlowFile(tableName, row, 
columnFamily, columnQualifier,
-content.getBytes(StandardCharsets.UTF_8), null);
+final Collection columns = Collections.singletonList(new 
PutColumn(columnFamily, columnQualifier,
+content.getBytes(StandardCharsets.UTF_8)));
+final PutFlowFile putFlowFile = new PutFlowFile(tableName, row, 
columns, null);
 
 final TestRunner runner = 
TestRunners.newTestRunner(TestProcessor.class);
 
@@ -168,11 +171,13 @@ public class TestHBase_1_1_2_ClientService {
 final String content1 = "content1";
 final String content2 = "content2";
 
-final PutFlowFile putFlowFile1 = new PutFlowFile(tableName, row, 
columnFamily, columnQualifier,
-content1.getBytes(StandardCharsets.UTF_8), null);
+final Collection columns1 = Collections.singletonList(new 
PutColumn(columnFamily, columnQualifier,
+content1.getBytes(StandardCharsets.UTF_8)));
+final PutFlowFile putFlowFile1 = new PutFlowFile(tableName, row, 
columns1, null);
 
-final PutFlowFile putFlowFile2 = new PutFlowFile(tableName, row, 
columnFamily, columnQualifier,
-content2.getBytes(StandardCharsets.UTF_8), null);
+final Collection columns2 = Collections.singletonList(new 
PutColumn(columnFamily, columnQualifier,
+content2.getBytes(StandardCharsets.UTF_8)));
+final PutFlowFile putFlowFile2 = new PutFlowFile(tableName, row, 
columns2, null);
 
 final TestRunner runner = 
TestRunners.newTestRunner(TestProcessor.class);
 
@@ -214,11 +219,13 @@ public class TestHBase_1_1_2_ClientService {
 final String content1 = "content1";
 final String content2 = "content2";
 
-final PutFlowFile putFlowFile1 = new PutFlowFile(tableName, row1, 
columnFamily, columnQualifier,
-content1.getBytes(StandardCharsets.UTF_8), null);
+final Collection columns1 = Collections.singletonList(new 
PutColumn(columnFamily, columnQualifier,
+content1.getBytes(StandardCharsets.UTF_8)));
+final PutFlowFile putFlowFile1 = new PutFlowFile(tableName, row1, 
columns1, null);
 
-final PutFlowFile putFlowFile2 = new PutFlowFile(tableName, row2, 
columnFamily, columnQualifier,
-content2.getBytes(StandardCharsets.UTF_8), null);
+final Collection columns2 = Collections.singletonList(new 
PutColumn(columnFamily, columnQualifier,
+content2.getBytes(StandardCharsets.UTF_8)));
+final PutFlowFile putFlowFile2 = new PutFlowFile(tableName, row2, 
columns2, null);
 
 final TestRunner runner = 
TestRunners.newTestRunner(TestProcessor.class);
 



[2/2] nifi git commit: NIFI-1174 Refactoring the HBase client API and adding a PutHBaseJSON which can write a whole row from a single json document - Adding Complex Field Strategy to PutHBaseJSON to a

2015-11-19 Thread bbende
NIFI-1174 Refactoring the HBase client API and adding a PutHBaseJSON which can 
write a whole row from a single json document - Adding Complex Field Strategy 
to PutHBaseJSON to allow more control of complex fields - Improving error 
messages to indicate what the problem was with an invalid row

Signed-off-by: Bryan Bende 


Project: http://git-wip-us.apache.org/repos/asf/nifi/repo
Commit: http://git-wip-us.apache.org/repos/asf/nifi/commit/40dd8a0a
Tree: http://git-wip-us.apache.org/repos/asf/nifi/tree/40dd8a0a
Diff: http://git-wip-us.apache.org/repos/asf/nifi/diff/40dd8a0a

Branch: refs/heads/master
Commit: 40dd8a0a845ef5f4d4fde451f02376ab2fab9758
Parents: 8c2323d
Author: Bryan Bende 
Authored: Wed Nov 18 17:24:49 2015 -0500
Committer: Bryan Bende 
Committed: Thu Nov 19 13:49:02 2015 -0500

--
 .../nifi-hbase-processors/pom.xml   |   4 +
 .../nifi/hbase/AbstractHBaseProcessor.java  |  23 -
 .../org/apache/nifi/hbase/AbstractPutHBase.java | 183 
 .../java/org/apache/nifi/hbase/GetHBase.java|   3 +-
 .../org/apache/nifi/hbase/PutHBaseCell.java | 153 +--
 .../org/apache/nifi/hbase/PutHBaseJSON.java | 230 ++
 .../org.apache.nifi.processor.Processor |   3 +-
 .../org/apache/nifi/hbase/HBaseTestUtil.java|  87 
 .../nifi/hbase/MockHBaseClientService.java  |  14 +-
 .../org/apache/nifi/hbase/TestPutHBaseCell.java |  60 ++-
 .../org/apache/nifi/hbase/TestPutHBaseJSON.java | 423 +++
 .../apache/nifi/hbase/HBaseClientService.java   |  11 +
 .../org/apache/nifi/hbase/put/PutColumn.java|  47 +++
 .../org/apache/nifi/hbase/put/PutFlowFile.java  |  38 +-
 .../nifi/hbase/HBase_1_1_2_ClientService.java   |  25 +-
 .../hbase/TestHBase_1_1_2_ClientService.java|  27 +-
 16 files changed, 1119 insertions(+), 212 deletions(-)
--
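As a rough illustration of what the new PutHBaseJSON processor does conceptually (not the processor's actual code), the sketch below flattens a JSON document into column-qualifier/value pairs, uses one field as the row id, and skips null or complex fields. It assumes the jackson-mapper-asl dependency added in the pom diff below is on the classpath; all names here are illustrative only.

{code}
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

import org.codehaus.jackson.map.ObjectMapper;

public class JsonToCellsSketch {
    public static void main(String[] args) throws Exception {
        final String json = "{\"id\": \"row-1\", \"name\": \"nifi\", \"count\": 3, \"nested\": {\"a\": 1}}";

        // Parse the document into a generic map (jackson-mapper-asl 1.x API).
        final ObjectMapper mapper = new ObjectMapper();
        @SuppressWarnings("unchecked")
        final Map<String, Object> fields = mapper.readValue(json, Map.class);

        // One field supplies the row id; the rest become cells, skipping nulls
        // and complex (non-scalar) values, similar in spirit to the behavior
        // described in this thread.
        final String rowIdField = "id";
        final String rowId = String.valueOf(fields.get(rowIdField));

        final Map<String, byte[]> cells = new LinkedHashMap<>();
        for (Map.Entry<String, Object> entry : fields.entrySet()) {
            final Object value = entry.getValue();
            if (entry.getKey().equals(rowIdField) || value == null
                    || value instanceof Map || value instanceof Iterable) {
                continue; // null or complex field: skipped in this sketch
            }
            cells.put(entry.getKey(), String.valueOf(value).getBytes(StandardCharsets.UTF_8));
        }

        System.out.println("row " + rowId + " -> " + cells.keySet());
    }
}
{code}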


http://git-wip-us.apache.org/repos/asf/nifi/blob/40dd8a0a/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/pom.xml
--
diff --git a/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/pom.xml 
b/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/pom.xml
index b474c6a..abbe4c9 100644
--- a/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/pom.xml
+++ b/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/pom.xml
@@ -50,6 +50,10 @@
 commons-lang3
 3.4
 
+
+org.codehaus.jackson
+jackson-mapper-asl
+
 

 org.apache.nifi

http://git-wip-us.apache.org/repos/asf/nifi/blob/40dd8a0a/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractHBaseProcessor.java
--
diff --git 
a/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractHBaseProcessor.java
 
b/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractHBaseProcessor.java
deleted file mode 100644
index 9cce35e..000
--- 
a/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractHBaseProcessor.java
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.nifi.hbase;
-
-import org.apache.nifi.processor.AbstractProcessor;
-
-public abstract class AbstractHBaseProcessor extends AbstractProcessor {
-
-}

http://git-wip-us.apache.org/repos/asf/nifi/blob/40dd8a0a/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
--
diff --git 
a/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
 
b/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
new file mode 100644
index 000..87424f9
--- /dev/null
+++ 
b/nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-proc

[jira] [Updated] (NIFI-1174) Create a Put HBase processor that can put multiple cells

2015-11-19 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-1174:
--
Fix Version/s: 0.4.0

> Create a Put HBase processor that can put multiple cells
> 
>
> Key: NIFI-1174
> URL: https://issues.apache.org/jira/browse/NIFI-1174
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 0.4.0
>
> Attachments: NIFI-1174-Complex-Field-Improvements.patch, 
> NIFI-1174.patch
>
>
> We recently added a PutHBaseCell processor which works great for writing one 
> individual cell at a time, but it can require a significant amount of work in 
> a flow to create a row with multiple cells. 
> We should support a variation of this processor that can accept a flow file 
> with key/value pairs in the content of the flow file (possibly json). The 
> key/value pairs then turned into the cells for the given row and get added in 
> one put operation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1174) Create a Put HBase processor that can put multiple cells

2015-11-19 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014131#comment-15014131
 ] 

Bryan Bende commented on NIFI-1174:
---

Thanks [~markap14], I updated the capability description to include the UTF-8 
requirement and the proper description of complex field handling. Going to 
assign this to 0.4.0 and push to master.

> Create a Put HBase processor that can put multiple cells
> 
>
> Key: NIFI-1174
> URL: https://issues.apache.org/jira/browse/NIFI-1174
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 0.4.0
>
> Attachments: NIFI-1174-Complex-Field-Improvements.patch, 
> NIFI-1174.patch
>
>
> We recently added a PutHBaseCell processor which works great for writing one 
> individual cell at a time, but it can require a significant amount of work in 
> a flow to create a row with multiple cells. 
> We should support a variation of this processor that can accept a flow file 
> with key/value pairs in the content of the flow file (possibly json). The 
> key/value pairs then turned into the cells for the given row and get added in 
> one put operation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1193) Add Hive support to Kite storage processor

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014034#comment-15014034
 ] 

ASF GitHub Bot commented on NIFI-1193:
--

Github user joewitt commented on the pull request:

https://github.com/apache/nifi/pull/128#issuecomment-158137742
  
@rdblue @joey @busbey maybe we just don't add the nar to the assembly 
itself but put this in the source tree.  Once we get a template/extension 
registry built then this is ok anyway.  What do you think?


> Add Hive support to Kite storage processor
> --
>
> Key: NIFI-1193
> URL: https://issues.apache.org/jira/browse/NIFI-1193
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Fix For: 0.5.0
>
>
> When the Kite processors were initially added in NIFI-238, we removed support 
> for sending data directly to Hive tables because the dependencies were too 
> large. Contacting the Hive MetaStore pulled in all of hive-exec and 
> hive-metastore. I've created an alternative that increases the size by only 
> 6.7MB (about 10% of what it was before).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1193) Add Hive support to Kite storage processor

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014031#comment-15014031
 ] 

ASF GitHub Bot commented on NIFI-1193:
--

Github user rdblue commented on the pull request:

https://github.com/apache/nifi/pull/128#issuecomment-158137120
  
@joey, that's exactly the problem. If you have a solution that avoids 
hive-exec, that would be great!


> Add Hive support to Kite storage processor
> --
>
> Key: NIFI-1193
> URL: https://issues.apache.org/jira/browse/NIFI-1193
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Fix For: 0.5.0
>
>
> When the Kite processors were initially added in NIFI-238, we removed support 
> for sending data directly to Hive tables because the dependencies were too 
> large. Contacting the Hive MetaStore pulled in all of hive-exec and 
> hive-metastore. I've created an alternative that increases the size by only 
> 6.7MB (about 10% of what it was before).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1193) Add Hive support to Kite storage processor

2015-11-19 Thread Ryan Blue (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014027#comment-15014027
 ] 

Ryan Blue commented on NIFI-1193:
-

Yeah, that's completely understandable.

> Add Hive support to Kite storage processor
> --
>
> Key: NIFI-1193
> URL: https://issues.apache.org/jira/browse/NIFI-1193
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Fix For: 0.5.0
>
>
> When the Kite processors were initially added in NIFI-238, we removed support 
> for sending data directly to Hive tables because the dependencies were too 
> large. Contacting the Hive MetaStore pulled in all of hive-exec and 
> hive-metastore. I've created an alternative that increases the size by only 
> 6.7MB (about 10% of what it was before).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1174) Create a Put HBase processor that can put multiple cells

2015-11-19 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014021#comment-15014021
 ] 

Mark Payne commented on NIFI-1174:
--

[~bbende] - After reviewing code, the only comment that I have is that the 
Processor is interpreting the incoming JSON as UTF-8 encoded JSON. I think this 
is probably okay, as there are a lot of processors that do this and we have a 
ConvertCharacterSet processor. However, I would mention this in the Capability 
Description.

So if you update the capability description to indicate that the JSON should be 
in UTF-8 format and update the wording about the fields with arrays being 
skipped, then I'm a +1. Great work!

Thanks
-Mark

> Create a Put HBase processor that can put multiple cells
> 
>
> Key: NIFI-1174
> URL: https://issues.apache.org/jira/browse/NIFI-1174
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Attachments: NIFI-1174-Complex-Field-Improvements.patch, 
> NIFI-1174.patch
>
>
> We recently added a PutHBaseCell processor which works great for writing one 
> individual cell at a time, but it can require a significant amount of work in 
> a flow to create a row with multiple cells. 
> We should support a variation of this processor that can accept a flow file 
> with key/value pairs in the content of the flow file (possibly json). The 
> key/value pairs then turned into the cells for the given row and get added in 
> one put operation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1174) Create a Put HBase processor that can put multiple cells

2015-11-19 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014008#comment-15014008
 ] 

Mark Payne commented on NIFI-1174:
--

I've not had a chance to review the code yet. Will soon. Just wanted to share 
feedback from testing. Everything works just as expected, and I think this is a 
great add to the NiFi repertoire. Only thing I noticed is that the Capability 
Description indicates: 

{quote}
Any fields where the value is null or an array will be skipped.
{quote}

However, if I have the Complex Field Strategy property set to "Text", the array 
does in fact make its way in as text. So we may just need to revisit the 
capability description now that the additional Complex Field Strategy property 
was added. Otherwise, great job! Will review code and provide any more feedback 
that I have.

Thanks!


> Create a Put HBase processor that can put multiple cells
> 
>
> Key: NIFI-1174
> URL: https://issues.apache.org/jira/browse/NIFI-1174
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Attachments: NIFI-1174-Complex-Field-Improvements.patch, 
> NIFI-1174.patch
>
>
> We recently added a PutHBaseCell processor which works great for writing one 
> individual cell at a time, but it can require a significant amount of work in 
> a flow to create a row with multiple cells. 
> We should support a variation of this processor that can accept a flow file 
> with key/value pairs in the content of the flow file (possibly json). The 
> key/value pairs then turned into the cells for the given row and get added in 
> one put operation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1198) Processor InputRequirement indicating invalid when it should be valid

2015-11-19 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013925#comment-15013925
 ] 

Bryan Bende commented on NIFI-1198:
---

A browser refresh appears to clear the validation error, refresh stats does not.

> Processor InputRequirement indicating invalid when it should be valid
> -
>
> Key: NIFI-1198
> URL: https://issues.apache.org/jira/browse/NIFI-1198
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Bryan Bende
>Priority: Minor
> Fix For: 0.4.0
>
>
> I had connected several processors in a line and had them all valid and ready 
> to be started. I then decide to remove one processor from the path, so lets 
> say I had Proc1 connected to Proc2 connected to Proc3. I deleted the 
> connection between Proc2 and Proc3, then dragged the existing connection 
> between Proc1 and Proc2 to make it go from Proc1 to Proc3. At this point 
> everything should have been valid, but Proc3 said it was invalid because it 
> required input. If I deleted this connection and created a new one between 
> Proc1 and Proc3 then all was good. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1174) Create a Put HBase processor that can put multiple cells

2015-11-19 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013908#comment-15013908
 ] 

Mark Payne commented on NIFI-1174:
--

[~bbende] - sorry, it was my mistake on the first comment, where it didn't have 
the row id. Not all of the JSON that I was sending in had the proper JSON 
element. I tested again with only JSON that has the proper fields, and it did 
function properly. However, it is certainly good to clarify the error message, 
as the one that comes back is fairly vague.

I'll check out the new patch. Thanks!

> Create a Put HBase processor that can put multiple cells
> 
>
> Key: NIFI-1174
> URL: https://issues.apache.org/jira/browse/NIFI-1174
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Attachments: NIFI-1174-Complex-Field-Improvements.patch, 
> NIFI-1174.patch
>
>
> We recently added a PutHBaseCell processor which works great for writing one 
> individual cell at a time, but it can require a significant amount of work in 
> a flow to create a row with multiple cells. 
> We should support a variation of this processor that can accept a flow file 
> with key/value pairs in the content of the flow file (possibly json). The 
> key/value pairs then turned into the cells for the given row and get added in 
> one put operation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1198) Processor InputRequirement indicating invalid when it should be valid

2015-11-19 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-1198:
--
Description: I had connected several processors in a line and had them all 
valid and ready to be started. I then decide to remove one processor from the 
path, so lets say I had Proc1 connected to Proc2 connected to Proc3. I deleted 
the connection between Proc2 and Proc3, then dragged the existing connection 
between Proc1 and Proc2 to make it go from Proc1 to Proc3. At this point 
everything should have been valid, but Proc3 said it was invalid because it 
required input. If I deleted this connection and created a new one between 
Proc1 and Proc3 then all was good.   (was: I had connected several processors 
in a line and had them all valid and ready to be started. I then decide to 
remove one processor from the path, so lets say I had Proc1 connected to Proc2 
connected to Proc3. I deleted the connection between Proc2 and Proc3, then 
dragged the existing between Proc1 and Proc2 to make it go from Proc1 to Proc3. 
At this point everything should have been valid, but Proc3 said it was invalid 
because it required input. If I deleted this connection and created a new one 
between Proc1 and Proc3 then all was good. )

> Processor InputRequirement indicating invalid when it should be valid
> -
>
> Key: NIFI-1198
> URL: https://issues.apache.org/jira/browse/NIFI-1198
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Bryan Bende
>Priority: Minor
> Fix For: 0.4.0
>
>
> I had connected several processors in a line and had them all valid and ready 
> to be started. I then decide to remove one processor from the path, so lets 
> say I had Proc1 connected to Proc2 connected to Proc3. I deleted the 
> connection between Proc2 and Proc3, then dragged the existing connection 
> between Proc1 and Proc2 to make it go from Proc1 to Proc3. At this point 
> everything should have been valid, but Proc3 said it was invalid because it 
> required input. If I deleted this connection and created a new one between 
> Proc1 and Proc3 then all was good. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1198) Processor InputRequirement indicating invalid when it should be valid

2015-11-19 Thread Bryan Bende (JIRA)
Bryan Bende created NIFI-1198:
-

 Summary: Processor InputRequirement indicating invalid when it 
should be valid
 Key: NIFI-1198
 URL: https://issues.apache.org/jira/browse/NIFI-1198
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 0.4.0
Reporter: Bryan Bende
Priority: Minor
 Fix For: 0.4.0


I had connected several processors in a line and had them all valid and ready 
to be started. I then decide to remove one processor from the path, so lets say 
I had Proc1 connected to Proc2 connected to Proc3. I deleted the connection 
between Proc2 and Proc3, then dragged the existing between Proc1 and Proc2 to 
make it go from Proc1 to Proc3. At this point everything should have been 
valid, but Proc3 said it was invalid because it required input. If I deleted 
this connection and created a new one between Proc1 and Proc3 then all was 
good. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1174) Create a Put HBase processor that can put multiple cells

2015-11-19 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-1174:
--
Attachment: NIFI-1174-Complex-Field-Improvements.patch

[~markap14] thanks for taking a look at this. 

Based on your comments I improved the error handling so that the message that 
gets logged will indicate why the flow file was "invalid", and in the case of a 
missing id field it will log the field name it was looking for and the field 
names it processed, so it will be clear why it didn't find it.

I also added a new property called Complex Field Strategy which can be set to 
Fail, Warn, Ignore, or Text. It works as you described, and I added the Text 
option in case someone wants to store the string value of the complex element. You 
can see how it works in the unit tests.

The new patch is meant to be applied on top of the other one.
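
For anyone skimming the thread, a minimal sketch of how a per-field strategy like this could behave. The property name and the four behaviors are paraphrased from this comment; the code below is illustrative and is not taken from the patch, which wires this up through NiFi PropertyDescriptors and session routing.

{code}
// Illustrative only: shows the four behaviors described above for a complex field.
enum ComplexFieldStrategy { FAIL, WARN, IGNORE, TEXT }

public class ComplexFieldSketch {

    /** Returns the bytes to store for a complex field, or null to skip it. */
    static byte[] handleComplexField(String fieldName, Object complexValue, ComplexFieldStrategy strategy) {
        switch (strategy) {
            case FAIL:
                throw new IllegalArgumentException("Complex field not allowed: " + fieldName);
            case WARN:
                System.err.println("WARN: skipping complex field " + fieldName);
                return null;
            case IGNORE:
                return null; // silently skipped (the real processor would log at debug level)
            case TEXT:
                return String.valueOf(complexValue).getBytes(java.nio.charset.StandardCharsets.UTF_8);
            default:
                return null;
        }
    }

    public static void main(String[] args) {
        byte[] asText = handleComplexField("nested", java.util.Arrays.asList(1, 2, 3), ComplexFieldStrategy.TEXT);
        System.out.println(new String(asText, java.nio.charset.StandardCharsets.UTF_8)); // prints [1, 2, 3]
    }
}
{code}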

> Create a Put HBase processor that can put multiple cells
> 
>
> Key: NIFI-1174
> URL: https://issues.apache.org/jira/browse/NIFI-1174
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Attachments: NIFI-1174-Complex-Field-Improvements.patch, 
> NIFI-1174.patch
>
>
> We recently added a PutHBaseCell processor which works great for writing one 
> individual cell at a time, but it can require a significant amount of work in 
> a flow to create a row with multiple cells. 
> We should support a variation of this processor that can accept a flow file 
> with key/value pairs in the content of the flow file (possibly json). The 
> key/value pairs then turned into the cells for the given row and get added in 
> one put operation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1193) Add Hive support to Kite storage processor

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013791#comment-15013791
 ] 

ASF GitHub Bot commented on NIFI-1193:
--

Github user busbey commented on the pull request:

https://github.com/apache/nifi/pull/128#issuecomment-158108067
  
yes please, @joey 


> Add Hive support to Kite storage processor
> --
>
> Key: NIFI-1193
> URL: https://issues.apache.org/jira/browse/NIFI-1193
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Fix For: 0.5.0
>
>
> When the Kite processors were initially added in NIFI-238, we removed support 
> for sending data directly to Hive tables because the dependencies were too 
> large. Contacting the Hive MetaStore pulled in all of hive-exec and 
> hive-metastore. I've created an alternative that increases the size by only 
> 6.7MB (about 10% of what it was before).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (NIFI-812) InvokeHTTP should optionally store response body as a flowfile attribute

2015-11-19 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved NIFI-812.
--
Resolution: Fixed

> InvokeHTTP should optionally store response body as a flowfile attribute
> 
>
> Key: NIFI-812
> URL: https://issues.apache.org/jira/browse/NIFI-812
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Joseph Percivall
>Priority: Minor
> Fix For: 0.4.0
>
>
> On a thread from Aug 3 on dev@nifi titled 'Route Original Flow File Base on 
> InvokeHTTP Response' it was suggested that it could be useful to capture the 
> response body as a flow file attribute.  This would allow for 
> RouteOnAttribute, for example, to be used to route the flowfile based on 
> content within that response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-812) InvokeHTTP should optionally store response body as a flowfile attribute

2015-11-19 Thread Aldrin Piri (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013711#comment-15013711
 ] 

Aldrin Piri commented on NIFI-812:
--

This was merged with NIFI-1086, commit hash 
8c2323dc8d0e107f1a99898370c7515fa9603122 but missed including a reference to 
this issue specifically.

> InvokeHTTP should optionally store response body as a flowfile attribute
> 
>
> Key: NIFI-812
> URL: https://issues.apache.org/jira/browse/NIFI-812
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Joseph Percivall
>Priority: Minor
> Fix For: 0.4.0
>
>
> On a thread from Aug 3 on dev@nifi titled 'Route Original Flow File Base on 
> InvokeHTTP Response' it was suggested that it could be useful to capture the 
> response body as a flow file attribute.  This would allow for 
> RouteOnAttribute, for example, to be used to route the flowfile based on 
> content within that response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1193) Add Hive support to Kite storage processor

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013670#comment-15013670
 ] 

ASF GitHub Bot commented on NIFI-1193:
--

Github user joey commented on the pull request:

https://github.com/apache/nifi/pull/128#issuecomment-158084660
  
@rdblue I'm guessing that you used the shade plugin due to the hive-exec 
jar embedding so many libraries in unshaded package names?

If so, I've got a pom that pulls in direct dependencies that can talk to 
the Hive metastore without the hive-exec jar. Let me know if that would help 
here.


> Add Hive support to Kite storage processor
> --
>
> Key: NIFI-1193
> URL: https://issues.apache.org/jira/browse/NIFI-1193
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Fix For: 0.5.0
>
>
> When the Kite processors were initially added in NIFI-238, we removed support 
> for sending data directly to Hive tables because the dependencies were too 
> large. Contacting the Hive MetaStore pulled in all of hive-exec and 
> hive-metastore. I've created an alternative that increases the size by only 
> 6.7MB (about 10% of what it was before).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1123) Extend the "Delete Attributes Expression" to support Expression Language

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013662#comment-15013662
 ] 

ASF GitHub Bot commented on NIFI-1123:
--

Github user jskora closed the pull request at:

https://github.com/apache/nifi/pull/116


> Extend the "Delete Attributes Expression" to support Expression Language
> 
>
> Key: NIFI-1123
> URL: https://issues.apache.org/jira/browse/NIFI-1123
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Joe Skora
>Assignee: Joe Skora
>Priority: Minor
>  Labels: easyfix, features, patch
>
> Allow the "Delete Attributes Expression" to accept Expression Language to 
> dynamically produce the regular expression to identify attributes to be 
> deleted per discussion on 
> [NIFI-641|https://issues.apache.org/jira/browse/NIFI-641].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1123) Extend the "Delete Attributes Expression" to support Expression Language

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013661#comment-15013661
 ] 

ASF GitHub Bot commented on NIFI-1123:
--

Github user jskora commented on the pull request:

https://github.com/apache/nifi/pull/116#issuecomment-158083303
  
Closed by apache/nifi commits 52b24b93d9f7763744c792c0cfed8974f8e6cb83 and 
9e2f6df20511b814b761726e40f2d3b1f498cc9f.


> Extend the "Delete Attributes Expression" to support Expression Language
> 
>
> Key: NIFI-1123
> URL: https://issues.apache.org/jira/browse/NIFI-1123
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Joe Skora
>Assignee: Joe Skora
>Priority: Minor
>  Labels: easyfix, features, patch
>
> Allow the "Delete Attributes Expression" to accept Expression Language to 
> dynamically produce the regular expression to identify attributes to be 
> deleted per discussion on 
> [NIFI-641|https://issues.apache.org/jira/browse/NIFI-641].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1193) Add Hive support to Kite storage processor

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013614#comment-15013614
 ] 

ASF GitHub Bot commented on NIFI-1193:
--

Github user joewitt commented on the pull request:

https://github.com/apache/nifi/pull/128#issuecomment-158074311
  
Ryan,

Definitely appreciate you trying to make it less size prohibitive.  I think 
we'll want to avoid having shaded jars and such being utilized for this.  It 
complicates the licensing and related concerns and we have done an extremely 
good job getting those clean even to the point of every binary artifact we 
produce (nars) having embedded license/notice data correct to all 
sub-dependencies.

This is an area which really highlights our need to tackle the extension 
registry.  We need to, on the other side of this release, figure out how we as 
a community can get more agility for releasing extensions like this versus the 
core framework.

OlegZ: We do need to tackle any copyright assertions on contributed source 
and ensure all proper license and notice adherence occurs.  I'm overly 
generalizing here but there are like maybe five people on earth (I'm looking at 
you Sean Busbey) that care about following the strict guidance of licensing and 
notices at the level we do.  In basically every contrib that brings in 
dependencies we'll have to help others most likely.  Feels like a fine trade in 
exchange for contributions of helpful things the community will benefit from.

We also need to ensure that there is appropriate testing.  However, above 
all else we need to keep in mind this community is powered by contributions.  
So in every exchange let's make sure our discussions stay focused on helping 
folks bring contribs along.  As specific example consider the lack of unit 
tests.  We could as part of the review build them.  Or as part of the feedback 
ask if there are ideas on how to include some.  Some extensions and 
contributions are inherently really hard to unit test.  I don't know if this 
one is or isn't.  Adding unit tests or asking if unit tests can be included is 
more powerful than saying we can't accept the contrib without them.  The 
difference can at times be subtle but the effect on the community and tenor of 
discussion can be dramatic.  

So far everyone in the community has done an awesome job of helping each 
other find the middle ground on contributions so that we can be inclusive and 
encouraging while increasing quality as well.

Thanks
Joe


> Add Hive support to Kite storage processor
> --
>
> Key: NIFI-1193
> URL: https://issues.apache.org/jira/browse/NIFI-1193
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Fix For: 0.5.0
>
>
> When the Kite processors were initially added in NIFI-238, we removed support 
> for sending data directly to Hive tables because the dependencies were too 
> large. Contacting the Hive MetaStore pulled in all of hive-exec and 
> hive-metastore. I've created an alternative that increases the size by only 
> 6.7MB (about 10% of what it was before).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1193) Add Hive support to Kite storage processor

2015-11-19 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013601#comment-15013601
 ] 

Joseph Witt commented on NIFI-1193:
---

[~rdblue] Hey cool that you found a potential new way to tackle this stuff.  
This is a big deal item in terms of review/implication and given our current 
efforts to buckle down on 0.4.0 I've pushed this to 0.5.0.

Will add some comments to the PR too.

Thanks
Joe

> Add Hive support to Kite storage processor
> --
>
> Key: NIFI-1193
> URL: https://issues.apache.org/jira/browse/NIFI-1193
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Fix For: 0.5.0
>
>
> When the Kite processors were initially added in NIFI-238, we removed support 
> for sending data directly to Hive tables because the dependencies were too 
> large. Contacting the Hive MetaStore pulled in all of hive-exec and 
> hive-metastore. I've created an alternative that increases the size by only 
> 6.7MB (about 10% of what it was before).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1193) Add Hive support to Kite storage processor

2015-11-19 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-1193:
--
Fix Version/s: (was: 0.4.0)
   0.5.0

> Add Hive support to Kite storage processor
> --
>
> Key: NIFI-1193
> URL: https://issues.apache.org/jira/browse/NIFI-1193
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Fix For: 0.5.0
>
>
> When the Kite processors were initially added in NIFI-238, we removed support 
> for sending data directly to Hive tables because the dependencies were too 
> large. Contacting the Hive MetaStore pulled in all of hive-exec and 
> hive-metastore. I've created an alternative that increases the size by only 
> 6.7MB (about 10% of what it was before).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1174) Create a Put HBase processor that can put multiple cells

2015-11-19 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013584#comment-15013584
 ] 

Mark Payne commented on NIFI-1174:
--

[~bbende] - I tried connecting GetTwitter to PutHBaseJSON. When I set the Row 
Identifier to ${uuid}, all worked perfectly. When I instead set the Row 
Identifier Field Name to "id", I got the error message:

{code}
015-11-19 13:07:29,055 ERROR [Timer-Driven Process Thread-8] 
org.apache.nifi.hbase.PutHBaseJSON 
PutHBaseJSON[id=1662f7a6-f7b0-4157-a67a-80beca08c8b3] Invalid FlowFile 
StandardFlowFileRecord[uuid=f55c5704-f039-4e46-80d1-f189ad5c160c,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1447938319221-136, container=default, 
section=136], offset=121236, 
length=156],offset=0,name=1171666305633.json,size=156] missing table, row, 
column familiy, or column qualifier; routing to failure
{code}

I also tried setting the field name to "id_str" since the JSON has two fields, 
one that is numeric and one that is a string version. Got the same result 
either way.

I also am concerned about the number of WARN log messages that are produced. 
Since there are 4 or 5 different "complex" fields in the JSON, I see a lot of 
warning messages indicating that those fields are not being transferred. I 
would recommend that rather than warning for each of those, we build up a 
single message indicating the fields that are not being sent and then 
generating only a single message. Even then, though, we end up warning on each 
message. How do you feel about having a property that allows the user to specify 
how to handle objects that have "complex" fields (non-flat JSON)? Provide maybe 
3 options: Fail (route flowfile to failure), Warn (log), Ignore (just log at a 
debug level)?

Otherwise, it works very well! Since I already had an HBase Client Service 
created to test the PutHBaseCell, this was super simple to setup. Very nicely 
done overall!
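
A small sketch of the single-warning idea suggested above: collect the skipped field names while walking the JSON and log one message per flow file instead of one per field. The names below are illustrative and not from the patch.

{code}
import java.util.ArrayList;
import java.util.List;

public class AggregatedWarningSketch {
    public static void main(String[] args) {
        final List<String> skippedFields = new ArrayList<>();

        // While walking the JSON fields, remember the complex ones instead of
        // emitting a warning for each of them.
        for (String field : new String[]{"user", "entities", "place"}) {
            skippedFields.add(field);
        }

        // One message per flow file rather than one per field.
        if (!skippedFields.isEmpty()) {
            System.err.println("WARN: skipping complex fields " + skippedFields + " for this flow file");
        }
    }
}
{code}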


> Create a Put HBase processor that can put multiple cells
> 
>
> Key: NIFI-1174
> URL: https://issues.apache.org/jira/browse/NIFI-1174
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Attachments: NIFI-1174.patch
>
>
> We recently added a PutHBaseCell processor which works great for writing one 
> individual cell at a time, but it can require a significant amount of work in 
> a flow to create a row with multiple cells. 
> We should support a variation of this processor that can accept a flow file 
> with key/value pairs in the content of the flow file (possibly json). The 
> key/value pairs then turned into the cells for the given row and get added in 
> one put operation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-655) Provide support for multiple authentication mechanisms

2015-11-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013535#comment-15013535
 ] 

ASF subversion and git services commented on NIFI-655:
--

Commit 2a0439ca06b81b27cbbed2058307af778169d9e6 in nifi's branch 
refs/heads/NIFI-655 from [~mcgilman]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=2a0439c ]

NIFI-655:
- Fixing checkstyle issues.
- Showing the progress spinner while submitting account justification.

> Provide support for multiple authentication mechanisms
> --
>
> Key: NIFI-655
> URL: https://issues.apache.org/jira/browse/NIFI-655
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Configuration, Core Framework, Core UI, Documentation & 
> Website
>Reporter: Mark Payne
>Assignee: Matt Gilman
> Fix For: 0.4.0
>
>
> NiFi provides a pluggable authorization mechanism but authentication is done 
> only via browser certificates. We should offer support for multiple 
> authentication mechanisms. A feature proposal has been created [1].
> Important implementations to support include Active Directory, LDAP, and 
> Kerberos.
> [1] https://cwiki.apache.org/confluence/display/NIFI/Pluggable+Authentication



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


nifi git commit: NIFI-655: - Fixing checkstyle issues. - Showing the progress spinner while submitting account justification.

2015-11-19 Thread mcgilman
Repository: nifi
Updated Branches:
  refs/heads/NIFI-655 9f60411b1 -> 2a0439ca0


NIFI-655:
- Fixing checkstyle issues.
- Showing the progress spinner while submitting account justification.

Project: http://git-wip-us.apache.org/repos/asf/nifi/repo
Commit: http://git-wip-us.apache.org/repos/asf/nifi/commit/2a0439ca
Tree: http://git-wip-us.apache.org/repos/asf/nifi/tree/2a0439ca
Diff: http://git-wip-us.apache.org/repos/asf/nifi/diff/2a0439ca

Branch: refs/heads/NIFI-655
Commit: 2a0439ca06b81b27cbbed2058307af778169d9e6
Parents: 9f60411
Author: Matt Gilman 
Authored: Thu Nov 19 08:29:39 2015 -0500
Committer: Matt Gilman 
Committed: Thu Nov 19 08:29:39 2015 -0500

--
 .../nifi/security/util/CertificateUtils.java|  22 +---
 .../src/main/java/org/apache/nifi/key/Key.java  |   6 +-
 .../web/security/x509/X509IdentityProvider.java |   4 +-
 .../nifi/web/security/jwt/JwtServiceTest.java   | 128 ---
 .../WEB-INF/partials/login/login-progress.jsp   |   2 +-
 .../src/main/webapp/js/nf/login/nf-login.js |   7 +
 .../java/org/apache/nifi/ldap/LdapProvider.java |   4 +-
 7 files changed, 77 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/nifi/blob/2a0439ca/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/CertificateUtils.java
--
diff --git 
a/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/CertificateUtils.java
 
b/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/CertificateUtils.java
index ea3a6c6..6236d8e 100644
--- 
a/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/CertificateUtils.java
+++ 
b/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/CertificateUtils.java
@@ -34,8 +34,7 @@ public final class CertificateUtils {
 private static final Logger logger = 
LoggerFactory.getLogger(CertificateUtils.class);
 
 /**
- * Returns true if the given keystore can be loaded using the given 
keystore
- * type and password. Returns false otherwise.
+ * Returns true if the given keystore can be loaded using the given 
keystore type and password. Returns false otherwise.
  *
  * @param keystore the keystore to validate
  * @param keystoreType the type of the keystore
@@ -77,10 +76,8 @@ public final class CertificateUtils {
 }
 
 /**
- * Extracts the username from the specified DN. If the username cannot be
- * extracted because the CN is in an unrecognized format, the entire CN is
- * returned. If the CN cannot be extracted because the DN is in an
- * unrecognized format, the entire DN is returned.
+ * Extracts the username from the specified DN. If the username cannot be 
extracted because the CN is in an unrecognized format, the entire CN is 
returned. If the CN cannot be extracted because
+ * the DN is in an unrecognized format, the entire DN is returned.
  *
  * @param dn the dn to extract the username from
  * @return the exatracted username
@@ -92,7 +89,7 @@ public final class CertificateUtils {
 if (StringUtils.isNotBlank(dn)) {
 // determine the separate
 final String separator = StringUtils.indexOfIgnoreCase(dn, "/cn=") 
> 0 ? "/" : ",";
-
+
 // attempt to locate the cd
 final String cnPattern = "cn=";
 final int cnIndex = StringUtils.indexOfIgnoreCase(dn, cnPattern);
@@ -110,9 +107,7 @@ public final class CertificateUtils {
 }
 
 /**
- * Returns a list of subject alternative names. Any name that is 
represented
- * as a String by X509Certificate.getSubjectAlternativeNames() is converted
- * to lowercase and returned.
+ * Returns a list of subject alternative names. Any name that is 
represented as a String by X509Certificate.getSubjectAlternativeNames() is 
converted to lowercase and returned.
  *
  * @param certificate a certificate
  * @return a list of subject alternative names; list is never null
@@ -128,12 +123,9 @@ public final class CertificateUtils {
 final List result = new ArrayList<>();
 for (final List generalName : altNames) {
 /**
- * generalName has the name type as the first element a String or
- * byte array for the second element.  We return any general names
- * that are String types.
+ * generalName has the name type as the first element a String or 
byte array for the second element. We return any general names that are String 
types.
  *
- * We don't inspect the numeric name type because some certificates
- * incorrectly put IPs and DNS names under the wrong name types.
+ * We don't inspect the numeri

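The javadoc above describes the CN-extraction behavior of CertificateUtils. Below is a simplified, self-contained sketch of that logic using plain String methods instead of commons-lang StringUtils, so the details differ from the actual CertificateUtils implementation.

{code}
public class ExtractUsernameSketch {

    // Simplified version of the behavior described in the javadoc above:
    // pull the CN out of a DN, falling back to the whole DN if no CN is found.
    static String extractUsername(String dn) {
        if (dn == null || dn.trim().isEmpty()) {
            return dn;
        }
        final String lower = dn.toLowerCase();
        // DNs may use either "/" or "," as the separator between RDNs.
        final String separator = lower.indexOf("/cn=") > 0 ? "/" : ",";
        final int cnIndex = lower.indexOf("cn=");
        if (cnIndex < 0) {
            return dn; // unrecognized format: return the entire DN
        }
        int end = dn.indexOf(separator, cnIndex);
        if (end < 0) {
            end = dn.length();
        }
        return dn.substring(cnIndex + "cn=".length(), end);
    }

    public static void main(String[] args) {
        System.out.println(extractUsername("CN=alice, OU=NiFi, O=Apache")); // alice
        System.out.println(extractUsername("/C=US/O=Apache/CN=bob"));       // bob
    }
}
{code}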
[jira] [Comment Edited] (NIFI-1145) TestStandardFlowFileQueue#testDropSwappedFlowFiles fails in certain environments

2015-11-19 Thread George Seremetidis (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013438#comment-15013438
 ] 

George Seremetidis edited comment on NIFI-1145 at 11/19/15 12:26 PM:
-

Not sure if my email got through on the dev mailing list...

I had the same problem with Centos 7 and Oracle JDK 8. Those exact unit tests 
would fail. Follow the instructions on https://nifi.apache.org/quickstart.html 
to update various system parameters. I think you'll find that Centos sets max 
file handles to 1024 for a user.

I also had to increase net.core.rmem_max to 1,200,000 to pass one of the Syslog 
tests.

George


was (Author: gseremetidis):
Not sure if my email got through on the dev mailing list...

I had the same problem with Centos 7 and Oracle JDK 8. Those exact unit tests 
would fail. Follow the instructions on https://nifi.apache.org/quickstart.html. 
I think you'll find that Centos sets max file handles to 1024 for a user.

I also had to increase net.core.rmem_max to 1,200,000 to pass one of the Syslog 
tests.

George

> TestStandardFlowFileQueue#testDropSwappedFlowFiles fails in certain 
> environments
> 
>
> Key: NIFI-1145
> URL: https://issues.apache.org/jira/browse/NIFI-1145
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Tools and Build
>Affects Versions: 0.3.0
>Reporter: Aldrin Piri
>
> This was first reported by Jeff on the dev mailing list using CentOS 7 and 
> JDK 8 (both OpenJDK and Oracle, at times) and has since also been reproduced 
> in the Travis CI environment (Ubuntu LTS 12.04 with Oracle JDK 8, JDK 7 is 
> fine).
> The associated thread on the mailing list is available at 
> https://mail-archives.apache.org/mod_mbox/nifi-dev/201511.mbox/%3c66915aaf-b7ee-4792-9ca0-1db058376...@gmail.com%3E
> Limited sample size suggests it is a JDK 8 issue on Linux. 
> Failure is:
> {code}
> Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 29.094 sec 
> <<< FAILURE! - in org.apache.nifi.controller.TestStandardFlowFileQueue
> testDropSwappedFlowFiles(org.apache.nifi.controller.TestStandardFlowFileQueue)
>   Time elapsed: 21.461 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 2 
> milliseconds
>   at java.lang.Throwable.fillInStackTrace(Native Method)
>   at java.lang.Throwable.fillInStackTrace(Throwable.java:783)
>   at java.lang.Throwable.(Throwable.java:250)
>   at 
> org.mockito.internal.debugging.LocationImpl.(LocationImpl.java:24)
>   at 
> org.mockito.internal.debugging.LocationImpl.(LocationImpl.java:19)
>   at 
> org.mockito.internal.invocation.InvocationImpl.(InvocationImpl.java:50)
>   at 
> org.mockito.internal.creation.cglib.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:58)
>   at 
> org.apache.nifi.connectable.Connection$$EnhancerByMockitoWithCGLIB$$f8cf731f.getDestination()
>   at 
> org.apache.nifi.controller.StandardFlowFileQueue.put(StandardFlowFileQueue.java:316)
>   at 
> org.apache.nifi.controller.TestStandardFlowFileQueue.testDropSwappedFlowFiles(TestStandardFlowFileQueue.java:198)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1145) TestStandardFlowFileQueue#testDropSwappedFlowFiles fails in certain environments

2015-11-19 Thread George Seremetidis (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013438#comment-15013438
 ] 

George Seremetidis commented on NIFI-1145:
--

Not sure if my email got through on the dev mailing list...

I had the same problem with Centos 7 and Oracle JDK 8. Those exact unit tests 
would fail. Follow the instructions on https://nifi.apache.org/quickstart.html. 
I think you'll find that Centos sets max file handles to 1024 for a user.

I also had to increase net.core.rmem_max to 1,200,000 to pass one of the Syslog 
tests.

George

> TestStandardFlowFileQueue#testDropSwappedFlowFiles fails in certain 
> environments
> 
>
> Key: NIFI-1145
> URL: https://issues.apache.org/jira/browse/NIFI-1145
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Tools and Build
>Affects Versions: 0.3.0
>Reporter: Aldrin Piri
>
> This was first reported by Jeff on the dev mailing list using CentOS 7 and 
> JDK 8 (both OpenJDK and Oracle, at times) and has since also been reproduced 
> in the Travis CI environment (Ubuntu LTS 12.04 with Oracle JDK 8, JDK 7 is 
> fine).
> The associated thread on the mailing list is available at 
> https://mail-archives.apache.org/mod_mbox/nifi-dev/201511.mbox/%3c66915aaf-b7ee-4792-9ca0-1db058376...@gmail.com%3E
> Limited sample size suggests it is a JDK 8 issue on Linux. 
> Failure is:
> {code}
> Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 29.094 sec 
> <<< FAILURE! - in org.apache.nifi.controller.TestStandardFlowFileQueue
> testDropSwappedFlowFiles(org.apache.nifi.controller.TestStandardFlowFileQueue)
>   Time elapsed: 21.461 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 2 
> milliseconds
>   at java.lang.Throwable.fillInStackTrace(Native Method)
>   at java.lang.Throwable.fillInStackTrace(Throwable.java:783)
>   at java.lang.Throwable.(Throwable.java:250)
>   at 
> org.mockito.internal.debugging.LocationImpl.(LocationImpl.java:24)
>   at 
> org.mockito.internal.debugging.LocationImpl.(LocationImpl.java:19)
>   at 
> org.mockito.internal.invocation.InvocationImpl.(InvocationImpl.java:50)
>   at 
> org.mockito.internal.creation.cglib.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:58)
>   at 
> org.apache.nifi.connectable.Connection$$EnhancerByMockitoWithCGLIB$$f8cf731f.getDestination()
>   at 
> org.apache.nifi.controller.StandardFlowFileQueue.put(StandardFlowFileQueue.java:316)
>   at 
> org.apache.nifi.controller.TestStandardFlowFileQueue.testDropSwappedFlowFiles(TestStandardFlowFileQueue.java:198)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1123) Extend the "Delete Attributes Expression" to support Expression Language

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013433#comment-15013433
 ] 

ASF GitHub Bot commented on NIFI-1123:
--

Github user trkurc commented on the pull request:

https://github.com/apache/nifi/pull/116#issuecomment-158042703
  
@jskora - yes, please!


> Extend the "Delete Attributes Expression" to support Expression Language
> 
>
> Key: NIFI-1123
> URL: https://issues.apache.org/jira/browse/NIFI-1123
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Joe Skora
>Assignee: Joe Skora
>Priority: Minor
>  Labels: easyfix, features, patch
>
> Allow the "Delete Attributes Expression" to accept Expression Language to 
> dynamically produce the regular expression to identify attributes to be 
> deleted per discussion on 
> [NIFI-641|https://issues.apache.org/jira/browse/NIFI-641].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1192) Allow Get/PutKafka to honor dynamic properties

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013383#comment-15013383
 ] 

ASF GitHub Bot commented on NIFI-1192:
--

GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/129

NIFI-1192 added support for Dynamic Properties



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1192

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/129.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #129


commit a5fecf632239a30ec822c5bebd2eeca7f549ac9e
Author: Oleg Zhurakousky 
Date:   2015-11-18T22:06:11Z

NIFI-1192 added support for Dynamic Properties




> Allow Get/PutKafka to honor dynamic properties
> --
>
> Key: NIFI-1192
> URL: https://issues.apache.org/jira/browse/NIFI-1192
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Oleg Zhurakousky
>Assignee: Oleg Zhurakousky
>Priority: Critical
> Fix For: 0.4.0
>
>
> Currently Kafka does not honor dynamic properties which means aside from 8 
> properties exposed none others could be set



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1197) improve SSL options for getmongo and putmongo processor configuration properties

2015-11-19 Thread subhash parise (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash parise updated NIFI-1197:
-
Summary: improve SSL options for getmongo and putmongo processor 
configuration properties  (was: improve SSL options for getmong and putmongo 
configuration properties)

> improve SSL options for getmongo and putmongo processor configuration 
> properties
> 
>
> Key: NIFI-1197
> URL: https://issues.apache.org/jira/browse/NIFI-1197
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 0.3.0
>Reporter: subhash parise
>
> Hi Team, 
> Currently the getmongo and putmongo configuration properties are the MongoDB URI, database 
> name, collection name, etc., but if the MongoDB server is configured with SSL, 
> the processors won't accept SSL options in the MongoDB URI.
> Could anyone please improve the SSL options in the Mongo processors?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1197) improve SSL options for getmong and putmongo configuration properties

2015-11-19 Thread subhash parise (JIRA)
subhash parise created NIFI-1197:


 Summary: improve SSL options for getmong and putmongo 
configuration properties
 Key: NIFI-1197
 URL: https://issues.apache.org/jira/browse/NIFI-1197
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 0.3.0
Reporter: subhash parise


Hi Team, 

Currently the getmongo and putmongo configuration properties are the MongoDB URI, database 
name, collection name, etc., but if the MongoDB server is configured with SSL, 
the processors won't accept SSL options in the MongoDB URI.
Could anyone please improve the SSL options in the Mongo processors?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)