[jira] [Commented] (NIFI-1937) GetHTTP should support configurable cookie policy

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325670#comment-15325670
 ] 

ASF GitHub Bot commented on NIFI-1937:
--

Github user trkurc commented on a diff in the pull request:

https://github.com/apache/nifi/pull/479#discussion_r66699726
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetHTTP.java
 ---
@@ -197,6 +198,30 @@
 .addValidator(StandardValidators.PORT_VALIDATOR)
 .build();
 
+public static final String DEFAULT_COOKIE_POLICY_STR = "default";
+public static final String STANDARD_COOKIE_POLICY_STR = "standard";
+public static final String STRICT_COOKIE_POLICY_STR = "strict";
+public static final String NETSCAPE_COOKIE_POLICY_STR = "netscape";
+public static final String IGNORE_COOKIE_POLICY_STR = "ignore";
+public static final AllowableValue DEFAULT_COOKIE_POLICY = new AllowableValue(DEFAULT_COOKIE_POLICY_STR, DEFAULT_COOKIE_POLICY_STR,
+        "Default cookie policy that provides a higher degree of compatibility with common cookie management of popular HTTP agents for non-standard (Netscape style) cookies.");
+public static final AllowableValue STANDARD_COOKIE_POLICY = new AllowableValue(STANDARD_COOKIE_POLICY_STR, STANDARD_COOKIE_POLICY_STR,
+        "RFC 6265 compliant cookie policy (interoperability profile).");
+public static final AllowableValue STRICT_COOKIE_POLICY = new AllowableValue(STRICT_COOKIE_POLICY_STR, STRICT_COOKIE_POLICY_STR,
+        "RFC 6265 compliant cookie policy (strict profile).");
+public static final AllowableValue NETSCAPE_COOKIE_POLICY = new AllowableValue(NETSCAPE_COOKIE_POLICY_STR, NETSCAPE_COOKIE_POLICY_STR,
+        "Netscape draft compliant cookie policy.");
+public static final AllowableValue IGNORE_COOKIE_POLICY = new AllowableValue(IGNORE_COOKIE_POLICY_STR, IGNORE_COOKIE_POLICY_STR,
+        "A cookie policy that ignores cookies.");
+
+public static final PropertyDescriptor REDIRECT_COOKIE_POLICY = new PropertyDescriptor.Builder()
+        .name("redirect-cookie-policy")
+        .displayName("Redirect Cookie Policy")
+        .description("When a HTTP server responds to a request with a redirect, this is the cookie policy used to copy cookies to the following request.")
+        .allowableValues(DEFAULT_COOKIE_POLICY, STANDARD_COOKIE_POLICY, STRICT_COOKIE_POLICY, NETSCAPE_COOKIE_POLICY, IGNORE_COOKIE_POLICY)
+        .defaultValue(DEFAULT_COOKIE_POLICY_STR)
--- End diff --

I was just about to merge this in, and realized that we might want to have 
a different default for 0.x and 1.x. Prior to this change, 0.x had 
CookieSpecs.STANDARD set, so this would potentially change behavior without 
reconfiguring a flow back to standard; admittedly, though, I had no success 
making a test that worked with CookieSpecs.STANDARD and broke with 
CookieSpecs.DEFAULT. @mosermw - how do you feel about 
.defaultValue(STANDARD_COOKIE_POLICY_STR) in 0.x for flow compatibility versus 
writing a migration note, and .defaultValue(DEFAULT_COOKIE_POLICY_STR) in 1.x, 
which I think is a reasonable default since flow compatibility is less of an 
issue there?
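For context, a minimal sketch of how a processor might translate the selected policy value into the cookie-spec string that Apache HttpClient 4.x's RequestConfig.custom().setCookieSpec(...) expects. The class and method here are illustrative, not the actual GetHTTP code, and the spec strings mirror the org.apache.http.client.config.CookieSpecs constants as I understand them, so verify against the driver version in use:

```java
import java.util.Map;

// Hypothetical helper: maps the processor's allowable policy values to the
// cookie-spec strings HttpClient 4.5.x defines in CookieSpecs (assumed values).
class CookiePolicyMapper {
    private static final Map<String, String> POLICIES = Map.of(
            "default", "default",          // CookieSpecs.DEFAULT
            "standard", "standard",        // CookieSpecs.STANDARD
            "strict", "standard-strict",   // CookieSpecs.STANDARD_STRICT
            "netscape", "netscape",        // CookieSpecs.NETSCAPE
            "ignore", "ignoreCookies");    // CookieSpecs.IGNORE_COOKIES

    // Resolve the configured property value, rejecting unknown policies
    // the same way an allowable-values check would.
    static String resolve(String propertyValue) {
        String spec = POLICIES.get(propertyValue);
        if (spec == null) {
            throw new IllegalArgumentException("Unknown cookie policy: " + propertyValue);
        }
        return spec;
    }
}
```

The resolved string would then be handed to RequestConfig.custom().setCookieSpec(...) when building the client request.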


> GetHTTP should support configurable cookie policy
> -
>
> Key: NIFI-1937
> URL: https://issues.apache.org/jira/browse/NIFI-1937
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 0.6.1
>Reporter: Michael Moser
>Assignee: Michael Moser
>Priority: Minor
> Fix For: 1.0.0, 0.7.0
>
>
> After changes to GetHTTP in NIFI-1714, I found a corporate web site where 
> GetHTTP fails to download content.  GetHTTP could successfully download 
> content from this site before NIFI-1714 was implemented.  So that change 
> effectively broke access to this site.
> I propose we add a new property to GetHTTP that allows the NiFi user to 
> choose the HTTPClient (Apache HTTPComponents) cookie policy.  The property 
> would be called Redirect Cookie Policy, described as "When an HTTP server 
> responds to a request with a redirect, this is the cookie specification used 
> to copy cookies to the following request"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1879) UI - Refresh Dialogs

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325471#comment-15325471
 ] 

ASF GitHub Bot commented on NIFI-1879:
--

GitHub user scottyaslan opened a pull request:

https://github.com/apache/nifi/pull/523

[NIFI-1879] Responsive dialogs and dialog UX refresh



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/scottyaslan/nifi responsiveDevBranch

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/523.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #523


commit 25a040a6403a981c5f28a08604c6cfa13b42b7ca
Author: Scott Aslan 
Date:   2016-06-10T22:42:38Z

[NIFI-1879] Responsive dialogs and dialog UX refresh




> UI - Refresh Dialogs
> 
>
> Key: NIFI-1879
> URL: https://issues.apache.org/jira/browse/NIFI-1879
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Scott Aslan
>Assignee: Scott Aslan
> Fix For: 1.0.0
>
> Attachments: nifi-about-dia...@2x.png, 
> nifi-add-template-select-m...@2x.png, nifi-add-templ...@2x.png, 
> nifi-create-connection-deta...@2x.png, 
> nifi-create-connection-setti...@2x.png, nifi-logo-about.svg, 
> nifi-system-diagnostics-j...@2x.png, nifi-system-diagnostics-j...@2x.png, 
> nifi-system-diagnostics-sys...@2x.png, nifi-ui-sh...@2x.png, 
> property-value-edit...@2x.png
>
>
> AC:
> -New Component dialogs
> --processor
> --input port
> --output port
> --group
> --remote group
> --template
> -Global menu dialogs
> --about





[jira] [Created] (NIFI-2000) When I click Empty queue in a cluster, only the FlowFiles on the node I'm connected to are emptied

2016-06-10 Thread Mark Payne (JIRA)
Mark Payne created NIFI-2000:


 Summary: When I click Empty queue in a cluster, only the FlowFiles 
on the node I'm connected to are emptied
 Key: NIFI-2000
 URL: https://issues.apache.org/jira/browse/NIFI-2000
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.0.0
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 1.0.0


When running NiFi in a cluster, I can queue up a bunch of data and then 
right-click on the connection and click Empty Queue. The node I click Empty 
Queue on empties its queue, but the other nodes in the cluster do not.





[jira] [Commented] (NIFI-2000) When I click Empty queue in a cluster, only the FlowFiles on the node I'm connected to are emptied

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325285#comment-15325285
 ] 

ASF GitHub Bot commented on NIFI-2000:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/522

NIFI-2000: Ensure that if we override setters in ApplicationResource …

…that we call the super class's setter as well

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-2000

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/522.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #522


commit 54b965dd90d90efd9eae826e73b5c60b761bd2a8
Author: Mark Payne 
Date:   2016-06-10T21:08:30Z

NIFI-2000: Ensure that if we override setters in ApplicationResource that 
we call the super class's setter as well
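The pattern the commit describes, an overridden setter that must delegate to the superclass so the parent's state is still populated, can be sketched as follows. Class and field names are illustrative, not the actual ApplicationResource code:

```java
// Parent resource whose field the framework relies on being set.
class ApplicationResourceBase {
    private String clusterContext;

    public void setClusterContext(String ctx) {
        this.clusterContext = ctx;
    }

    public String getClusterContext() {
        return clusterContext;
    }
}

// Subclass that overrides the setter to do extra bookkeeping; forgetting the
// super call would leave the parent's clusterContext null, which is the class
// of bug the fix guards against.
class NodeResource extends ApplicationResourceBase {
    private boolean replicated;

    @Override
    public void setClusterContext(String ctx) {
        super.setClusterContext(ctx);  // without this, parent state stays unset
        this.replicated = true;        // subclass-specific bookkeeping
    }

    public boolean isReplicated() {
        return replicated;
    }
}
```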




> When I click Empty queue in a cluster, only the FlowFiles on the node I'm 
> connected to are emptied
> --
>
> Key: NIFI-2000
> URL: https://issues.apache.org/jira/browse/NIFI-2000
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.0.0
>
>
> When running NiFi in a cluster, I can queue up a bunch of data and then 
> right-click on the connection and click Empty Queue. The node I click Empty 
> Queue on empties its queue, but the other nodes in the cluster do not.





[jira] [Comment Edited] (NIFI-1935) Added ConvertDynamicJsonToAvro processor

2016-06-10 Thread Daniel Cave (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325212#comment-15325212
 ] 

Daniel Cave edited comment on NIFI-1935 at 6/10/16 8:32 PM:


I'm going to do more testing around the existing processors, then I will reply 
over the weekend or on Monday.  There are some issues between versions that I 
am working to track down and my original response may not be valid across the 
master/0.x split.  I will respond Monday with a better answer.


was (Author: daniel cave):
I'm going to do more testing around the existing processors, then I will reply 
over the weekend or on Monday.  There are some issues between versions that I 
am working to track down and my original response may not be valid across the 
master/0.x split (I was on a master version).  I will respond Monday with a 
better answer.

> Added ConvertDynamicJsonToAvro processor
> 
>
> Key: NIFI-1935
> URL: https://issues.apache.org/jira/browse/NIFI-1935
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Daniel Cave
>Assignee: Alex Halldin
>Priority: Minor
> Fix For: 1.0.0, 0.7.0
>
> Attachments: 
> 0001-NIFI-1935-Added-ConvertDynamicJSONToAvro.java.-Added.patch
>
>
> ConvertJsonToAvro required a predefined Avro schema to convert JSON and 
> required the presence of all fields on the incoming JSON.  
> ConvertDynamicJsonToAvro functions similarly, however it now accepts the JSON 
> and schema as incoming flowfiles and creates the Avro dynamically.
> This processor requires the InferAvroSchema processor in its upstream flow so 
> that it can use the original and schema flowfiles as input.  These two 
> flowfiles will have the unique attribute inferredAvroId set on them by 
> InferAvroSchema so that they can be properly matched in 
> ConvertDynamicJsonToAvro.





[jira] [Comment Edited] (NIFI-1935) Added ConvertDynamicJsonToAvro processor

2016-06-10 Thread Daniel Cave (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325212#comment-15325212
 ] 

Daniel Cave edited comment on NIFI-1935 at 6/10/16 8:30 PM:


I'm going to do more testing around the existing processors, then I will reply 
over the weekend or on Monday.  There are some issues between versions that I 
am working to track down and my original response may not be valid across the 
master/0.x split (I was on a master version).  I will respond Monday with a 
better answer.


was (Author: daniel cave):
I'm going to do more testing around the existing processors, then I will reply 
over the weekend or on Monday.  There are some issues that I am working to 
track down.

> Added ConvertDynamicJsonToAvro processor
> 
>
> Key: NIFI-1935
> URL: https://issues.apache.org/jira/browse/NIFI-1935
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Daniel Cave
>Assignee: Alex Halldin
>Priority: Minor
> Fix For: 1.0.0, 0.7.0
>
> Attachments: 
> 0001-NIFI-1935-Added-ConvertDynamicJSONToAvro.java.-Added.patch
>
>
> ConvertJsonToAvro required a predefined Avro schema to convert JSON and 
> required the presence of all fields on the incoming JSON.  
> ConvertDynamicJsonToAvro functions similarly, however it now accepts the JSON 
> and schema as incoming flowfiles and creates the Avro dynamically.
> This processor requires the InferAvroSchema processor in its upstream flow so 
> that it can use the original and schema flowfiles as input.  These two 
> flowfiles will have the unique attribute inferredAvroId set on them by 
> InferAvroSchema so that they can be properly matched in 
> ConvertDynamicJsonToAvro.





[jira] [Commented] (NIFI-1935) Added ConvertDynamicJsonToAvro processor

2016-06-10 Thread Daniel Cave (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325212#comment-15325212
 ] 

Daniel Cave commented on NIFI-1935:
---

I'm going to do more testing around the existing processors, then I will reply 
over the weekend or on Monday.  There are some issues that I am working to 
track down.

> Added ConvertDynamicJsonToAvro processor
> 
>
> Key: NIFI-1935
> URL: https://issues.apache.org/jira/browse/NIFI-1935
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Daniel Cave
>Assignee: Alex Halldin
>Priority: Minor
> Fix For: 1.0.0, 0.7.0
>
> Attachments: 
> 0001-NIFI-1935-Added-ConvertDynamicJSONToAvro.java.-Added.patch
>
>
> ConvertJsonToAvro required a predefined Avro schema to convert JSON and 
> required the presence of all fields on the incoming JSON.  
> ConvertDynamicJsonToAvro functions similarly, however it now accepts the JSON 
> and schema as incoming flowfiles and creates the Avro dynamically.
> This processor requires the InferAvroSchema processor in its upstream flow so 
> that it can use the original and schema flowfiles as input.  These two 
> flowfiles will have the unique attribute inferredAvroId set on them by 
> InferAvroSchema so that they can be properly matched in 
> ConvertDynamicJsonToAvro.





[jira] [Issue Comment Deleted] (NIFI-1935) Added ConvertDynamicJsonToAvro processor

2016-06-10 Thread Daniel Cave (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Cave updated NIFI-1935:
--
Comment: was deleted

(was: I had to revisit the issues to remind myself of the cases/issues 
involved.  Previously I had issues with it not properly reading from the 
attribute due to the way that attributes are interpreted when they are JSON 
(i.e. as a JSON string value with the element being the attribute name), 
however this may have been fixed.  I retested it as of today based on 0.x, and 
ConvertJSONToAvro says it cannot find schema if you try to read it from an 
attribute (likely due to the same reason AttributesToJson fails to properly 
convert it, since it can't read avro.binary and requires it to be a true 
string).  Also, ConvertJSONToAvro doesn't register the data provenance, which 
is a bug that needs a ticket and a fix.

However, the case for this processor is interrelated with use cases for 
InferAvroSchema (which also required the changes to SplitJSON).  If I write the 
schema out as an attribute and the only use of that schema is to convert the 
json to avro, then you are correct that the existing processor is sufficient.  
However, writing the schema to an attribute presents other issues and limits 
its usefulness.  My use cases for the schema also involve using the same schema 
to generate SQL/CQL/Hive/etc statements for ingestion as well as sending the 
avro to a schema registry and programmatically creating RDDs.  To do all this I 
need the schema as content in its proper JSON form.  In theory, 
AttributesToJSON would do this, however it isn't language aware and will create 
JSON with { <attributeName> : <attributeValue> }, where the attributeName is 
"schema" and the attributeValue would be the avro schema as a JSON string (there 
is a similar issue in using InvokeHTTP with the response as an attribute); 
however, due to the way the schema exists in the attribute, it actually returns 
an empty string (since the attribute is actually in avro.binary form).  The 
processor seems to have been meant for simple attributes and not complex ones, 
as putting an avro schema in one creates a case where the attribute content 
itself is avro or JSON.  EvaluateJsonPath is also an option, however again once 
you extract the avro schema from the attribute you'll find that it's not in a 
proper format and isn't valid JSON or a valid schema anymore.

Basically, there are four options to fix the issue:  Do Infer twice (once to 
content and once to attribute, not desirable due to overhead ramifications on 
small devices with sub-second throughput and involves fixing the schema 
attribute issue), upgrade ConvertJSONToAvro to handle schemas from either 
content or attribute, make major changes to AttributeToJson or create a new 
complex version of it, or to split Convert into Convert and ConvertDynamic 
where one can accept the schema from a flowfile content (which has other use 
cases as well).  I chose the latter as it created the least amount of backwards 
compatibility issues.  That is not to say it is necessarily the best choice of 
the four in all cases, it was the lesser of evils for the community in my view 
for now.  If you guys disagree or I've missed a better way to extract the 
schema then I'm certainly open to discussion and revisiting my design for 
dynamically handling everything from source to sink (any sink).  Keep in mind, 
some of the sink processors require everything in flowfile content 
(PutCassandra), some hybrid (ExecuteSQL), and some all in attributes 
(PutHiveQL).  So since those design-inconsistent processors are already in 
public use, I have to be able to interpret the avro schema and create 
statements for any of the three in order to handle the JSON and sink or 
transfer it into any source, which means I need to be able to do varying kinds 
of parsing depending on the sink.

Let me know what you think.  Also, keep in mind in building this response to 
you I found three new bugs in at least three processors that need new tickets:  
evaluateAttributeExpressions() doesn't seem to be able to handle avro.binary 
(affects anything evaluating an attribute I assume, and may apply to normal 
blobs too), ConvertJsonToAvro isn't writing any data provenance on failure, 
InferAvroSchema writing JSON to attribute doesn't write in the right form.)

> Added ConvertDynamicJsonToAvro processor
> 
>
> Key: NIFI-1935
> URL: https://issues.apache.org/jira/browse/NIFI-1935
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Daniel Cave
>Assignee: Alex Halldin
>Priority: Minor
> Fix For: 1.0.0, 0.7.0
>
> Attachments: 
> 0001-NIFI-1935-Added-ConvertDynamicJSONToAvro.java.-Added.patch
>
>
> ConvertJsonToAvro 

[jira] [Created] (NIFI-1999) Remove unnecessary data from template export

2016-06-10 Thread Oleg Zhurakousky (JIRA)
Oleg Zhurakousky created NIFI-1999:
--

 Summary: Remove unnecessary data from template export
 Key: NIFI-1999
 URL: https://issues.apache.org/jira/browse/NIFI-1999
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Oleg Zhurakousky
Assignee: Oleg Zhurakousky
 Fix For: 1.0.0


Certain elements of the flow do not need to be exported into a template. This 
came up as part of the discussion on NIFI-826. Given the complexity of 
NIFI-826 and the unknown complexity of this effort, I would prefer not to 
address both in a single JIRA.
I still need to discuss with [~mcgilman] which elements do not need to be 
exported.





[jira] [Commented] (NIFI-1663) Add support for ORC format

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325138#comment-15325138
 ] 

ASF GitHub Bot commented on NIFI-1663:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/477
  
@omalley Thanks! I updated the version in the POM to 1.1.0 and force-pushed 
the branch. Hopefully Travis will find the release JARs and complete 
successfully :)


> Add support for ORC format
> --
>
> Key: NIFI-1663
> URL: https://issues.apache.org/jira/browse/NIFI-1663
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Matt Burgess
>Assignee: Matt Burgess
> Fix For: 1.0.0
>
>
> From the Hive/ORC wiki 
> (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC): 
> The Optimized Row Columnar (ORC) file format provides a highly efficient way 
> to store Hive data ... Using ORC files improves performance when Hive is 
> reading, writing, and processing data.
> As users are interested in NiFi integrations with Hive (NIFI-981, NIFI-1193, 
> etc.), NiFi should be able to support ORC file format to enable users to 
> efficiently store flow files for use by Hive.





[jira] [Commented] (NIFI-1998) Upgrade Cassandra driver to 3.x

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325133#comment-15325133
 ] 

ASF GitHub Bot commented on NIFI-1998:
--

GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/521

NIFI-1998: Upgraded Cassandra driver to 3.0.2

This should apply cleanly to both master and 0.x branches

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi cassandra3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/521.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #521


commit c360c657cea74e8ae79874bedc0b82fcb9b7fc62
Author: Matt Burgess 
Date:   2016-06-10T19:25:09Z

NIFI-1998: Upgraded Cassandra driver to 3.0.2




> Upgrade Cassandra driver to 3.x
> ---
>
> Key: NIFI-1998
> URL: https://issues.apache.org/jira/browse/NIFI-1998
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
> Fix For: 1.0.0, 0.7.0
>
>
> The Cassandra processors use the 2.1.9 driver, which is backwards-compatible 
> to 1.x, but not forward-compatible to 3.x. The latest driver at the time of 
> this writing is 3.0.2, which is fully backwards-compatible.
> Upgrading the driver, although it will involve API changes, will enable NiFi 
> users to interact with any Cassandra cluster of version 1.x, 2.x, or 3.x.





[jira] [Commented] (NIFI-1997) On restart, a node that joins cluster does not update processors' run state to match the cluster

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325116#comment-15325116
 ] 

ASF GitHub Bot commented on NIFI-1997:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/520

NIFI-1997: Use the 'autoResumeState' property defined in nifi.properties on 
each node instead of inheriting the property from the Cluster Coordinator

Use the 'autoResumeState' property defined in nifi.properties on each node 
instead of inheriting the property from the Cluster Coordinator

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1997

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/520.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #520


commit 8cebe78d8e93694bce847c3e3c38802aa4f701eb
Author: Mark Payne 
Date:   2016-06-10T18:35:47Z

NIFI-1996: Fixed bug in the generation of UUID's for components when 
dealing with Snippets

commit 4f997585c3ff4c3997f76f0183ee234032d23251
Author: Mark Payne 
Date:   2016-06-10T19:16:18Z

NIFI-1997: Use the 'autoResumeState' property defined in nifi.properties on 
each node instead of inheriting the property from the Cluster Coordinator
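The property in question lives in each node's nifi.properties. As a sketch, the key name below is taken from the NiFi Admin Guide as I recall it, so treat it as an assumption to verify:

```properties
# Controls whether components resume their last known run state on startup;
# with this fix it is read per node rather than inherited from the
# Cluster Coordinator.
nifi.flowcontroller.autoResumeState=true
```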




> On restart, a node that joins cluster does not update processors' run state 
> to match the cluster
> 
>
> Key: NIFI-1997
> URL: https://issues.apache.org/jira/browse/NIFI-1997
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Blocker
> Fix For: 1.0.0
>
>
> If I have a cluster, I can disconnect a node from the cluster and stop a 
> Processor. If I then re-join the node to the cluster, it will start running 
> the processor again, as it should.
> However, if I disconnect a node from the cluster and stop a Processor, and 
> then restart that node instead of joining it back into the cluster, the 
> problem arises. The node restarts and joins the cluster successfully but does 
> not start the Processor that is currently stopped.





[jira] [Commented] (NIFI-1663) Add support for ORC format

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325109#comment-15325109
 ] 

ASF GitHub Bot commented on NIFI-1663:
--

Github user omalley commented on the issue:

https://github.com/apache/nifi/pull/477
  
The ORC 1.1.0 jars are on Maven Central. :)



> Add support for ORC format
> --
>
> Key: NIFI-1663
> URL: https://issues.apache.org/jira/browse/NIFI-1663
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Matt Burgess
>Assignee: Matt Burgess
> Fix For: 1.0.0
>
>
> From the Hive/ORC wiki 
> (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC): 
> The Optimized Row Columnar (ORC) file format provides a highly efficient way 
> to store Hive data ... Using ORC files improves performance when Hive is 
> reading, writing, and processing data.
> As users are interested in NiFi integrations with Hive (NIFI-981, NIFI-1193, 
> etc.), NiFi should be able to support ORC file format to enable users to 
> efficiently store flow files for use by Hive.





[jira] [Comment Edited] (NIFI-1935) Added ConvertDynamicJsonToAvro processor

2016-06-10 Thread Daniel Cave (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325078#comment-15325078
 ] 

Daniel Cave edited comment on NIFI-1935 at 6/10/16 7:00 PM:


I had to revisit the issues to remind myself of the cases/issues involved.  
Previously I had issues with it not properly reading from the attribute due to 
the way that attributes are interpreted when they are JSON (i.e. as a JSON 
string value with the element being the attribute name), however this may have 
been fixed.  I retested it as of today based on 0.x, and ConvertJSONToAvro says 
it cannot find schema if you try to read it from an attribute (likely due to 
the same reason AttributesToJson fails to properly convert it, since it can't 
read avro.binary and requires it to be a true string).  Also, 
ConvertJSONToAvro doesn't register the data provenance, which is a bug that 
needs a ticket and a fix.

However, the case for this processor is interrelated with use cases for 
InferAvroSchema (which also required the changes to SplitJSON).  If I write the 
schema out as an attribute and the only use of that schema is to convert the 
json to avro, then you are correct that the existing processor is sufficient.  
However, writing the schema to an attribute presents other issues and limits 
its usefulness.  My use cases for the schema also involve using the same schema 
to generate SQL/CQL/Hive/etc statements for ingestion as well as sending the 
avro to a schema registry and programmatically creating RDDs.  To do all this I 
need the schema as content in its proper JSON form.  In theory, 
AttributesToJSON would do this, however it isn't language aware and will create 
JSON with { <attributeName> : <attributeValue> }, where the attributeName is 
"schema" and the attributeValue would be the avro schema as a JSON string (there 
is a similar issue in using InvokeHTTP with the response as an attribute); 
however, due to the way the schema exists in the attribute, it actually returns 
an empty string (since the attribute is actually in avro.binary form).  The 
processor seems to have been meant for simple attributes and not complex ones, 
as putting an avro schema in one creates a case where the attribute content 
itself is avro or JSON.  EvaluateJsonPath is also an option, however again once 
you extract the avro schema from the attribute you'll find that it's not in a 
proper format and isn't valid JSON or a valid schema anymore.
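The escaping problem just described, a schema stored in an attribute surviving only as an escaped JSON string value rather than as real JSON content, can be illustrated with plain strings. This is a sketch; only the attribute name "schema" comes from the comment above, everything else is illustrative:

```java
class SchemaAttributeDemo {
    static final String AVRO_SCHEMA =
            "{\"type\":\"record\",\"name\":\"example\",\"fields\":[]}";

    // Naive attribute-to-JSON conversion: the schema becomes a quoted,
    // escaped string value, not a nested JSON object, so downstream
    // JSON parsing of the schema itself fails.
    static String escapedAttributeForm() {
        return "{\"schema\":\"" + AVRO_SCHEMA.replace("\"", "\\\"") + "\"}";
    }

    // What downstream consumers (schema registry, SQL/CQL generators)
    // need: the schema embedded as real JSON content.
    static String contentForm() {
        return "{\"schema\":" + AVRO_SCHEMA + "}";
    }
}
```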

Basically, there are four options to fix the issue:  Do Infer twice (once to 
content and once to attribute, not desirable due to overhead ramifications on 
small devices with sub-second throughput and involves fixing the schema 
attribute issue), upgrade ConvertJSONToAvro to handle schemas from either 
content or attribute, make major changes to AttributeToJson or create a new 
complex version of it, or to split Convert into Convert and ConvertDynamic 
where one can accept the schema from a flowfile content (which has other use 
cases as well).  I chose the latter as it created the least amount of backwards 
compatibility issues.  That is not to say it is necessarily the best choice of 
the four in all cases, it was the lesser of evils for the community in my view 
for now.  If you guys disagree or I've missed a better way to extract the 
schema then I'm certainly open to discussion and revisiting my design for 
dynamically handling everything from source to sink (any sink).  Keep in mind, 
some of the sink processors require everything in flowfile content 
(PutCassandra), some hybrid (ExecuteSQL), and some all in attributes 
(PutHiveQL).  So since those design-inconsistent processors are already in 
public use, I have to be able to interpret the avro schema and create 
statements for any of the three in order to handle the JSON and sink or 
transfer it into any source, which means I need to be able to do varying kinds 
of parsing depending on the sink.

Let me know what you think.  Also, keep in mind in building this response to 
you I found three new bugs in at least three processors that need new tickets:  
evaluateAttributeExpressions() doesn't seem to be able to handle avro.binary 
(affects anything evaluating an attribute I assume, and may apply to normal 
blobs too), ConvertJsonToAvro isn't writing any data provenance on failure, 
InferAvroSchema writing JSON to attribute doesn't write in the right form.


was (Author: daniel cave):
I had to revisit the issues to remind myself of the cases/issues involved.  
Previously I had issues with it not properly reading from the attribute due to 
the way that attributes are interpreted when they are JSON (i.e. as a JSON 
string value with the element being the attribute name), however this may have 
been fixed.  I retested it as of today based on 0.x, and ConvertJSONToAvro says 
it cannot find schema if you try to read it from an attribute (likely due to 
the same reason AttributesToJson 

[jira] [Comment Edited] (NIFI-1935) Added ConvertDynamicJsonToAvro processor

2016-06-10 Thread Daniel Cave (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325078#comment-15325078
 ] 

Daniel Cave edited comment on NIFI-1935 at 6/10/16 6:58 PM:


I had to revisit the issues to remind myself of the cases/issues involved.  
Previously I had issues with it not properly reading from the attribute due to 
the way that attributes are interpreted when they are JSON (i.e. as a JSON 
string value with the element being the attribute name), however this may have 
been fixed.  I retested it as of today based on 0.x, and ConvertJSONToAvro says 
it cannot find the schema if you try to read it from an attribute (likely for 
the same reason AttributesToJson fails to properly convert it, since it can't 
read avro.binary and requires it to be a true string).  Also, 
ConvertJSONToAvro doesn't register the data provenance, which is a bug that 
needs a ticket and a fix.

However, the case for this processor is interrelated with use cases for 
InferAvroSchema (which also required the changes to SplitJSON).  If I write the 
schema out as an attribute and the only use of that schema is to convert the 
json to avro, then you are correct that the existing processor is sufficient.  
However, writing the schema to an attribute presents other issues and limits 
its usefulness.  My use cases for the schema also involve using the same schema 
to generate SQL/CQL/Hive/etc statements for ingestion as well as sending the 
avro to a schema registry and programmatically creating RDDs.  To do all this I 
need the schema as content in its proper JSON form.  In theory, 
AttributesToJSON would do this; however, it isn't language aware and will 
create JSON of the form { <attributeName> : <attributeValue> }, where the 
attributeName is schema and the attributeValue would be the Avro schema as a 
JSON string (there is a similar issue in using InvokeHTTP with the response as 
an attribute).  However, due to the way the schema exists in the attribute, it 
actually returns an empty string (since the attribute is actually in 
avro.binary form).  The processor seems to have been meant for simple 
attributes and not complex ones, as putting an Avro schema in one creates a 
case where the attribute content itself is Avro or JSON.  EvaluateJsonPath is 
also an option; however, again, once you extract the Avro schema from the 
attribute you'll find that it's not in a proper format and isn't valid JSON or 
a valid schema anymore.
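The escaping problem described above can be illustrated outside NiFi. Below is a minimal, hypothetical sketch (not the AttributesToJSON implementation; the class and method names are made up): a generic attribute-to-JSON conversion must escape the quotes inside a string-valued attribute, so an embedded schema comes out as a JSON string, not a nested JSON object.

```java
public class SchemaAttributeDemo {
    // Naive attribute-to-JSON conversion: the value is always emitted as an
    // escaped JSON string, never as nested JSON.
    static String attributesToJson(String name, String value) {
        return "{ \"" + name + "\" : \"" + value.replace("\"", "\\\"") + "\" }";
    }

    public static void main(String[] args) {
        String avroSchema = "{\"type\":\"record\",\"name\":\"example\",\"fields\":[]}";
        // The schema arrives wrapped as a string value, with its quotes escaped:
        System.out.println(attributesToJson("schema", avroSchema));
        // A consumer expecting the schema as a JSON *object* cannot parse the
        // escaped string without an extra unwrap step.
    }
}
```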

Basically, there are four options to fix the issue: do Infer twice (once to 
content and once to attribute; not desirable due to the overhead ramifications 
on small devices with sub-second throughput, and it involves fixing the schema 
attribute issue), upgrade ConvertJSONToAvro to handle schemas from either 
content or attribute, make major changes to AttributesToJson or create a new 
complex version of it, or split Convert into Convert and ConvertDynamic, where 
the latter can accept the schema from flowfile content (which has other use 
cases as well).  I chose the last option as it created the fewest backwards 
compatibility issues.  That is not to say it is necessarily the best choice of 
the four in all cases; it was the lesser of evils for the community in my view 
for now.  If you guys disagree or I've missed a better way to extract the 
schema then I'm certainly open to discussion and revisiting my design for 
dynamically handling everything from source to sink (any sink).  Keep in mind, 
some of the sink processors require everything in flowfile content 
(PutCassandra), some hybrid (ExecuteSQL), and some all in attributes 
(PutHiveQL).  Since those design-inconsistent processors are already in 
public use, I have to be able to interpret the Avro schema and create 
statements for any of the three in order to handle the JSON and sink it, or 
transfer it into any source, which means I need to do varying kinds of 
parsing depending on the sink.
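To make concrete what "create statements for any of the three" entails, here is a minimal, hypothetical sketch; the table name and field list are assumptions, and extracting field names from an actual Avro schema is omitted (a real implementation would parse the schema with org.apache.avro.Schema).

```java
import java.util.Collections;
import java.util.List;

public class StatementFromSchema {
    // Build a parameterized INSERT from the field names of a schema.
    // Schema parsing itself is omitted; only the statement generation
    // step discussed above is shown.
    static String insertFor(String table, List<String> fields) {
        String cols = String.join(", ", fields);
        String params = String.join(", ", Collections.nCopies(fields.size(), "?"));
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + params + ")";
    }

    public static void main(String[] args) {
        System.out.println(insertFor("users", List.of("id", "name", "email")));
        // INSERT INTO users (id, name, email) VALUES (?, ?, ?)
    }
}
```

The same field list could feed analogous generators for CQL or HiveQL, which is why having the schema as proper JSON content (rather than an escaped attribute string) matters.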

Let me know what you think.  Also, keep in mind that in building this response 
to you I found three new bugs in at least three processors that need new 
tickets: evaluateAttributeExpressions() doesn't seem to be able to handle 
avro.binary (this affects anything evaluating such an attribute, I assume, and 
may apply to normal blobs too), ConvertJsonToAvro isn't writing any data 
provenance on failure, and InferAvroSchema, when writing JSON to an attribute, 
doesn't write it in the correct form.


was (Author: daniel cave):
I had to revisit the issues to remind myself of the cases/issues involved.  
Previously I had issues with it not properly reading from the attribute due to 
the way that attributes are interpreted when they are JSON (i.e. as a JSON 
string value with the element being the attribute name), however this may have 
been fixed.  I retested it as of today based on 0.x, and ConvertJSONToAvro says 
it cannot find schema if you try to read it from an attribute (likely due to 
the same reason AttributesToJson fails to 

[jira] [Commented] (NIFI-1935) Added ConvertDynamicJsonToAvro processor

2016-06-10 Thread Daniel Cave (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325078#comment-15325078
 ] 

Daniel Cave commented on NIFI-1935:
---

I had to revisit the issues to remind myself of the cases/issues involved.  
Previously I had issues with it not properly reading from the attribute due to 
the way that attributes are interpreted when they are JSON (i.e. as a JSON 
string value with the element being the attribute name), however this may have 
been fixed.  I retested it as of today based on 0.x, and ConvertJSONToAvro says 
it cannot find the schema if you try to read it from an attribute (likely for 
the same reason AttributesToJson fails to properly convert it, since it can't 
read avro.binary and requires it to be a true string).  Also, 
ConvertJSONToAvro doesn't register the data provenance, which is a bug that 
needs a ticket and a fix.

However, the case for this processor is interrelated with use cases for 
InferAvroSchema (which also required the changes to SplitJSON).  If I write the 
schema out as an attribute and the only use of that schema is to convert the 
json to avro, then you are correct that the existing processor is sufficient.  
However, writing the schema to an attribute presents other issues and limits 
its usefulness.  My use cases for the schema also involve using the same schema 
to generate SQL/CQL/Hive/etc statements for ingestion as well as sending the 
avro to a schema registry and programmatically creating RDDs.  To do all this I 
need the schema as content in its proper JSON form.  In theory, 
AttributesToJSON would do this; however, it isn't language aware and will 
create JSON of the form { <attributeName> : <attributeValue> }, where the 
attributeName is schema and the attributeValue would be the Avro schema as a 
JSON string (there is a similar issue in using InvokeHTTP with the response as 
an attribute).  However, due to the way the schema exists in the attribute, it 
actually returns an empty string (since the attribute is actually in 
avro.binary form).  The processor seems to have been meant for simple 
attributes and not complex ones, as putting an Avro schema in one creates a 
case where the attribute content itself is Avro or JSON.  EvaluateJsonPath is 
also an option; however, again, once you extract the Avro schema from the 
attribute you'll find that it's not in a proper format and isn't valid JSON or 
a valid schema anymore.

Basically, there are four options to fix the issue: do Infer twice (once to 
content and once to attribute; not desirable due to the overhead ramifications 
on small devices with sub-second throughput), upgrade ConvertJSONToAvro to 
handle schemas from either content or attribute, make major changes to 
AttributesToJson or create a new complex version of it, or split Convert into 
Convert and ConvertDynamic, where the latter can accept the schema from 
flowfile content (which has other use cases as well).  I chose the last option 
as it created the fewest backwards compatibility issues.  That is not to say it 
is necessarily the best choice of the four in all cases; it was the lesser of 
evils for the community in my view for now.  If you guys disagree or I've 
missed a better way 
to extract the schema then I'm certainly open to discussion and revisiting my 
design for dynamically handling everything from source to sink (any sink).  
Keep in mind, some of the sink processors require everything in flowfile 
content (PutCassandra), some hybrid (ExecuteSQL), and some all in attributes 
(PutHiveQL).  Since those design-inconsistent processors are already in 
public use, I have to be able to interpret the Avro schema and create 
statements for any of the three in order to handle the JSON and sink it, or 
transfer it into any source, which means I need to do varying kinds of 
parsing depending on the sink.

Let me know what you think.  Also, keep in mind that in building this response 
to you I found three new bugs in at least three processors that need new 
tickets: evaluateAttributeExpressions() doesn't seem to be able to handle 
avro.binary (this affects anything evaluating such an attribute, I assume, and 
may apply to normal blobs too), ConvertJsonToAvro isn't writing any data 
provenance on failure, and InferAvroSchema, when writing JSON to an attribute, 
doesn't write it in the correct form.

> Added ConvertDynamicJsonToAvro processor
> 
>
> Key: NIFI-1935
> URL: https://issues.apache.org/jira/browse/NIFI-1935
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Daniel Cave
>Assignee: Alex Halldin
>Priority: Minor
> Fix For: 1.0.0, 0.7.0
>
> Attachments: 
> 0001-NIFI-1935-Added-ConvertDynamicJSONToAvro.java.-Added.patch
>
>
> ConvertJsonToAvro required a predefined Avro schema to convert JSON and 

[jira] [Created] (NIFI-1997) On restart, a node that joins cluster does not update processors' run state to match the cluster

2016-06-10 Thread Mark Payne (JIRA)
Mark Payne created NIFI-1997:


 Summary: On restart, a node that joins cluster does not update 
processors' run state to match the cluster
 Key: NIFI-1997
 URL: https://issues.apache.org/jira/browse/NIFI-1997
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Mark Payne
Assignee: Mark Payne
Priority: Blocker
 Fix For: 1.0.0


If I have a cluster, I can disconnect a node from the cluster and stop a 
Processor. If I then re-join the node to the cluster, it will start running the 
processor again, as it should.

However, if I disconnect a node from the cluster and stop a Processor, and then 
restart that node instead of joining it back into the cluster, the problem 
arises. The node restarts and joins the cluster successfully but does not start 
the Processor that is currently stopped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1996) Cannot enter Process Group after Copy & Paste

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325046#comment-15325046
 ] 

ASF GitHub Bot commented on NIFI-1996:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/519

NIFI-1996: Fixed bug in the generation of UUID's for components when 
dealing with Snippets

Fixed bug in the generation of UUID's for components when dealing with 
Snippets

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1996

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/519.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #519


commit 8cebe78d8e93694bce847c3e3c38802aa4f701eb
Author: Mark Payne 
Date:   2016-06-10T18:35:47Z

NIFI-1996: Fixed bug in the generation of UUID's for components when 
dealing with Snippets




> Cannot enter Process Group after Copy & Paste
> -
>
> Key: NIFI-1996
> URL: https://issues.apache.org/jira/browse/NIFI-1996
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Blocker
> Fix For: 1.0.0
>
>
> I copied & pasted a Process Group on the canvas. When I then try to step into 
> the Process Group, the UI hangs for a bit and then gives me an error to check 
> logs. Logs show that the request timed out when being replicated, and I can 
> no longer open the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1996) Cannot enter Process Group after Copy & Paste

2016-06-10 Thread Mark Payne (JIRA)
Mark Payne created NIFI-1996:


 Summary: Cannot enter Process Group after Copy & Paste
 Key: NIFI-1996
 URL: https://issues.apache.org/jira/browse/NIFI-1996
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.0.0
Reporter: Mark Payne
Assignee: Mark Payne
Priority: Blocker
 Fix For: 1.0.0


I copied & pasted a Process Group on the canvas. When I then try to step into 
the Process Group, the UI hangs for a bit and then gives me an error to check 
logs. Logs show that the request timed out when being replicated, and I can no 
longer open the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1280) Create FilterCSVColumns Processor

2016-06-10 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325029#comment-15325029
 ] 

Mark Payne commented on NIFI-1280:
--

Also, of note, while we would like to avoid reading the dataset twice if we 
can, what I have found is that in cases where the data is not enormous and the 
node has a reasonable amount of RAM, it will end up all residing in the 
Operating System's disk cache most of the time, so the second pass will 
actually be reading from RAM and the performance hit is not nearly as large as 
one would think.

> Create FilterCSVColumns Processor
> -
>
> Key: NIFI-1280
> URL: https://issues.apache.org/jira/browse/NIFI-1280
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Toivo Adams
>
> We should have a Processor that allows users to easily filter out specific 
> columns from CSV data. For instance, a user would configure two different 
> properties: "Columns of Interest" (a comma-separated list of column indexes) 
> and "Filtering Strategy" (Keep Only These Columns, Remove Only These Columns).
> We can do this today with ReplaceText, but it is far more difficult than it 
> would be with this Processor, as the user has to use Regular Expressions, 
> etc. with ReplaceText.
> Eventually a Custom UI could even be built that allows a user to upload a 
> Sample CSV and choose which columns from there, similar to the way that Excel 
> works when importing CSV by dragging and selecting the desired columns? That 
> would certainly be a larger undertaking and would not need to be done for an 
> initial implementation.
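The proposed processor's core logic could be sketched as follows. This is a hypothetical illustration, not the actual implementation; the property names ("Columns of Interest", "Filtering Strategy") follow the description above, and a real processor would use a proper CSV parser that handles quoting and embedded commas.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FilterCsvColumns {
    enum Strategy { KEEP, REMOVE }  // "Filtering Strategy"

    // Filter one CSV line by zero-based "Columns of Interest" indexes.
    static String filterLine(String line, List<Integer> columns, Strategy strategy) {
        String[] cells = line.split(",", -1);
        List<String> out = new ArrayList<>();
        for (int i = 0; i < cells.length; i++) {
            boolean listed = columns.contains(i);
            // KEEP retains listed columns; REMOVE retains the others.
            if (strategy == Strategy.KEEP ? listed : !listed) {
                out.add(cells[i]);
            }
        }
        return String.join(",", out);
    }

    public static void main(String[] args) {
        String line = "id,name,email,age";
        System.out.println(filterLine(line, Arrays.asList(0, 2), Strategy.KEEP));   // id,email
        System.out.println(filterLine(line, Arrays.asList(0, 2), Strategy.REMOVE)); // name,age
    }
}
```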



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1280) Create FilterCSVColumns Processor

2016-06-10 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325013#comment-15325013
 ] 

Mark Payne commented on NIFI-1280:
--

[~Toivo Adams] correct - the data would only be read multiple times if 
necessary but this won't normally happen. I spent some time looking at this a 
few days ago, actually, looking for a way to refactor it so that we can easily 
enable multi-pass reading. Unfortunately, though, the only solutions that I 
came up with are either very hack-y or would require some changes to the NiFi 
API in order to allow us to obtain an InputStream and return it outside of a 
ProcessSession callback, which I'm not wild about. I plan to revisit this again 
next week; still trying to figure out a good way to make it feasible.

> Create FilterCSVColumns Processor
> -
>
> Key: NIFI-1280
> URL: https://issues.apache.org/jira/browse/NIFI-1280
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Toivo Adams
>
> We should have a Processor that allows users to easily filter out specific 
> columns from CSV data. For instance, a user would configure two different 
> properties: "Columns of Interest" (a comma-separated list of column indexes) 
> and "Filtering Strategy" (Keep Only These Columns, Remove Only These Columns).
> We can do this today with ReplaceText, but it is far more difficult than it 
> would be with this Processor, as the user has to use Regular Expressions, 
> etc. with ReplaceText.
> Eventually a Custom UI could even be built that allows a user to upload a 
> Sample CSV and choose which columns from there, similar to the way that Excel 
> works when importing CSV by dragging and selecting the desired columns? That 
> would certainly be a larger undertaking and would not need to be done for an 
> initial implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1280) Create FilterCSVColumns Processor

2016-06-10 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324784#comment-15324784
 ] 

Josh Elser commented on NIFI-1280:
--

bq. Hopefully all are just too busy and this is not a blocker.

Certainly just busy on my part, but I haven't forgotten about this in general 
(had talked with [~bende] about this one offline earlier this week, actually).

Will try to take a look this weekend again with fresh-eyes.

> Create FilterCSVColumns Processor
> -
>
> Key: NIFI-1280
> URL: https://issues.apache.org/jira/browse/NIFI-1280
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Toivo Adams
>
> We should have a Processor that allows users to easily filter out specific 
> columns from CSV data. For instance, a user would configure two different 
> properties: "Columns of Interest" (a comma-separated list of column indexes) 
> and "Filtering Strategy" (Keep Only These Columns, Remove Only These Columns).
> We can do this today with ReplaceText, but it is far more difficult than it 
> would be with this Processor, as the user has to use Regular Expressions, 
> etc. with ReplaceText.
> Eventually a Custom UI could even be built that allows a user to upload a 
> Sample CSV and choose which columns from there, similar to the way that Excel 
> works when importing CSV by dragging and selecting the desired columns? That 
> would certainly be a larger undertaking and would not need to be done for an 
> initial implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1993) Upgrade CGLIB to the latest 3.2

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324729#comment-15324729
 ] 

ASF GitHub Bot commented on NIFI-1993:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/516
  
+1 LGTM, tested #515 with and without this commit, verified the test fails 
without this commit. Merged to master


> Upgrade CGLIB to the latest 3.2
> ---
>
> Key: NIFI-1993
> URL: https://issues.apache.org/jira/browse/NIFI-1993
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Oleg Zhurakousky
>Assignee: Oleg Zhurakousky
>Priority: Minor
> Fix For: 1.0.0
>
>
> While working on NIFI-826, I've encountered a problem related to Groovy tests 
> (Spock) and Java 1.8, which is essentially described here: 
> https://groups.google.com/forum/#!topic/spockframework/59WIHGgcSNE
> The stack trace from the failing Spock test:
> {code}
> test InstantiateTemplate moves and scales 
> templates[0](org.apache.nifi.web.dao.impl.StandardTemplateDAOSpec)  Time 
> elapsed: 0.46 sec  <<< ERROR!
> java.lang.IllegalArgumentException: null
>   at 
> net.sf.cglib.proxy.BridgeMethodResolver.resolveAll(BridgeMethodResolver.java:61)
>   at net.sf.cglib.proxy.Enhancer.emitMethods(Enhancer.java:911)
>   at net.sf.cglib.proxy.Enhancer.generateClass(Enhancer.java:498)
>   at 
> net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
>   at 
> net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
>   at net.sf.cglib.proxy.Enhancer.createHelper(Enhancer.java:377)
>   at net.sf.cglib.proxy.Enhancer.createClass(Enhancer.java:317)
>   at 
> org.spockframework.mock.runtime.ProxyBasedMockFactory$CglibMockFactory.createMock(ProxyBasedMockFactory.java:91)
>   at 
> org.spockframework.mock.runtime.ProxyBasedMockFactory.create(ProxyBasedMockFactory.java:49)
>   at 
> org.spockframework.mock.runtime.JavaMockFactory.create(JavaMockFactory.java:51)
>   at 
> org.spockframework.mock.runtime.CompositeMockFactory.create(CompositeMockFactory.java:44)
>   at 
> org.spockframework.lang.SpecInternals.createMock(SpecInternals.java:45)
>   at 
> org.spockframework.lang.SpecInternals.createMockImpl(SpecInternals.java:281)
>   at org.spockframework.lang.SpecInternals.MockImpl(SpecInternals.java:99)
>   at 
> groovy.lang.GroovyObjectSupport.invokeMethod(GroovyObjectSupport.java:46)
>   at 
> groovy.lang.GroovyObjectSupport.invokeMethod(GroovyObjectSupport.java:46)
>   at 
> org.apache.nifi.web.dao.impl.StandardTemplateDAOSpec$__spock_feature_0_0_closure2.closure7$_closure8(StandardTemplateDAOSpec.groovy:71)
>   at groovy.lang.Closure.call(Closure.java:426)
>   at 
> org.spockframework.mock.response.CodeResponseGenerator.invokeClosure(CodeResponseGenerator.java:53)
>   at 
> org.spockframework.mock.response.CodeResponseGenerator.doRespond(CodeResponseGenerator.java:36)
>   at 
> org.spockframework.mock.response.SingleResponseGenerator.respond(SingleResponseGenerator.java:31)
>   at 
> org.spockframework.mock.response.ResponseGeneratorChain.respond(ResponseGeneratorChain.java:45)
>   at 
> org.spockframework.mock.runtime.MockInteraction.accept(MockInteraction.java:76)
>   at 
> org.spockframework.mock.runtime.MockInteractionDecorator.accept(MockInteractionDecorator.java:46)
>   at 
> org.spockframework.mock.runtime.InteractionScope$1.accept(InteractionScope.java:41)
>   at 
> org.spockframework.mock.runtime.MockController.handle(MockController.java:39)
>   at 
> org.spockframework.mock.runtime.JavaMockInterceptor.intercept(JavaMockInterceptor.java:72)
>   at 
> org.spockframework.mock.runtime.CglibMockInterceptorAdapter.intercept(CglibMockInterceptorAdapter.java:30)
>   at 
> org.apache.nifi.web.dao.impl.StandardTemplateDAO.instantiateTemplate(StandardTemplateDAO.java:91)
>   at org.apache.nifi.web.dao.impl.StandardTemplateDAOSpec.test 
> InstantiateTemplate moves and scales 
> templates(StandardTemplateDAOSpec.groovy:62)
> {code}
> Upgrading to CGLIB 3.2 resolves the issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1857) Support HTTP(S) as a transport mechanism for Site-to-Site

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324726#comment-15324726
 ] 

ASF GitHub Bot commented on NIFI-1857:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/497


> Support HTTP(S) as a transport mechanism for Site-to-Site
> -
>
> Key: NIFI-1857
> URL: https://issues.apache.org/jira/browse/NIFI-1857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>   Original Estimate: 480h
>  Remaining Estimate: 480h
>
> We should add support for using HTTP(S) for site-to-site to be an alternative 
> to the current socket based approach.
> This would support the same push based or pull based approach site-to-site 
> offers now but it would use HTTP(S) for all interactions to include learning 
> about ports, learning about NCM topology, and actually exchanging data. This 
> mechanism should also support interaction via an HTTP proxy.
> This would also require some UI work to allow the user to specify which 
> protocol for site-to-site to use such as 'raw' vs 'http'. We also need to 
> document any limitations with regard to SSL support for this mode and we'd 
> need to provide 'how-to' when using proxies like http_proxy or something else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1993) Upgrade CGLIB to the latest 3.2

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324727#comment-15324727
 ] 

ASF GitHub Bot commented on NIFI-1993:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/516


> Upgrade CGLIB to the latest 3.2
> ---
>
> Key: NIFI-1993
> URL: https://issues.apache.org/jira/browse/NIFI-1993
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Oleg Zhurakousky
>Assignee: Oleg Zhurakousky
>Priority: Minor
> Fix For: 1.0.0
>
>
> While working on NIFI-826, I've encountered a problem related to Groovy tests 
> (Spock) and Java 1.8, which is essentially described here: 
> https://groups.google.com/forum/#!topic/spockframework/59WIHGgcSNE
> The stack trace from the failing Spock test:
> {code}
> test InstantiateTemplate moves and scales 
> templates[0](org.apache.nifi.web.dao.impl.StandardTemplateDAOSpec)  Time 
> elapsed: 0.46 sec  <<< ERROR!
> java.lang.IllegalArgumentException: null
>   at 
> net.sf.cglib.proxy.BridgeMethodResolver.resolveAll(BridgeMethodResolver.java:61)
>   at net.sf.cglib.proxy.Enhancer.emitMethods(Enhancer.java:911)
>   at net.sf.cglib.proxy.Enhancer.generateClass(Enhancer.java:498)
>   at 
> net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
>   at 
> net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
>   at net.sf.cglib.proxy.Enhancer.createHelper(Enhancer.java:377)
>   at net.sf.cglib.proxy.Enhancer.createClass(Enhancer.java:317)
>   at 
> org.spockframework.mock.runtime.ProxyBasedMockFactory$CglibMockFactory.createMock(ProxyBasedMockFactory.java:91)
>   at 
> org.spockframework.mock.runtime.ProxyBasedMockFactory.create(ProxyBasedMockFactory.java:49)
>   at 
> org.spockframework.mock.runtime.JavaMockFactory.create(JavaMockFactory.java:51)
>   at 
> org.spockframework.mock.runtime.CompositeMockFactory.create(CompositeMockFactory.java:44)
>   at 
> org.spockframework.lang.SpecInternals.createMock(SpecInternals.java:45)
>   at 
> org.spockframework.lang.SpecInternals.createMockImpl(SpecInternals.java:281)
>   at org.spockframework.lang.SpecInternals.MockImpl(SpecInternals.java:99)
>   at 
> groovy.lang.GroovyObjectSupport.invokeMethod(GroovyObjectSupport.java:46)
>   at 
> groovy.lang.GroovyObjectSupport.invokeMethod(GroovyObjectSupport.java:46)
>   at 
> org.apache.nifi.web.dao.impl.StandardTemplateDAOSpec$__spock_feature_0_0_closure2.closure7$_closure8(StandardTemplateDAOSpec.groovy:71)
>   at groovy.lang.Closure.call(Closure.java:426)
>   at 
> org.spockframework.mock.response.CodeResponseGenerator.invokeClosure(CodeResponseGenerator.java:53)
>   at 
> org.spockframework.mock.response.CodeResponseGenerator.doRespond(CodeResponseGenerator.java:36)
>   at 
> org.spockframework.mock.response.SingleResponseGenerator.respond(SingleResponseGenerator.java:31)
>   at 
> org.spockframework.mock.response.ResponseGeneratorChain.respond(ResponseGeneratorChain.java:45)
>   at 
> org.spockframework.mock.runtime.MockInteraction.accept(MockInteraction.java:76)
>   at 
> org.spockframework.mock.runtime.MockInteractionDecorator.accept(MockInteractionDecorator.java:46)
>   at 
> org.spockframework.mock.runtime.InteractionScope$1.accept(InteractionScope.java:41)
>   at 
> org.spockframework.mock.runtime.MockController.handle(MockController.java:39)
>   at 
> org.spockframework.mock.runtime.JavaMockInterceptor.intercept(JavaMockInterceptor.java:72)
>   at 
> org.spockframework.mock.runtime.CglibMockInterceptorAdapter.intercept(CglibMockInterceptorAdapter.java:30)
>   at 
> org.apache.nifi.web.dao.impl.StandardTemplateDAO.instantiateTemplate(StandardTemplateDAO.java:91)
>   at org.apache.nifi.web.dao.impl.StandardTemplateDAOSpec.test 
> InstantiateTemplate moves and scales 
> templates(StandardTemplateDAOSpec.groovy:62)
> {code}
> Upgrading to CGLIB 3.2 resolves the issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1993) Upgrade CGLIB to the latest 3.2

2016-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324724#comment-15324724
 ] 

ASF subversion and git services commented on NIFI-1993:
---

Commit 1b965cb667e6a3c3112e05777cc369357b5d3f71 in nifi's branch 
refs/heads/master from [~ozhurakousky]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=1b965cb ]

NIFI-1993 upgraded CGLIB to 3.2.2

This closes #516


> Upgrade CGLIB to the latest 3.2
> ---
>
> Key: NIFI-1993
> URL: https://issues.apache.org/jira/browse/NIFI-1993
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Oleg Zhurakousky
>Assignee: Oleg Zhurakousky
>Priority: Minor
> Fix For: 1.0.0
>
>
> While working on NIFI-826, I've encountered a problem related to Groovy tests 
> (Spock) and Java 1.8, which is essentially described here: 
> https://groups.google.com/forum/#!topic/spockframework/59WIHGgcSNE
> The stack trace from the failing Spock test:
> {code}
> test InstantiateTemplate moves and scales 
> templates[0](org.apache.nifi.web.dao.impl.StandardTemplateDAOSpec)  Time 
> elapsed: 0.46 sec  <<< ERROR!
> java.lang.IllegalArgumentException: null
>   at 
> net.sf.cglib.proxy.BridgeMethodResolver.resolveAll(BridgeMethodResolver.java:61)
>   at net.sf.cglib.proxy.Enhancer.emitMethods(Enhancer.java:911)
>   at net.sf.cglib.proxy.Enhancer.generateClass(Enhancer.java:498)
>   at 
> net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
>   at 
> net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
>   at net.sf.cglib.proxy.Enhancer.createHelper(Enhancer.java:377)
>   at net.sf.cglib.proxy.Enhancer.createClass(Enhancer.java:317)
>   at 
> org.spockframework.mock.runtime.ProxyBasedMockFactory$CglibMockFactory.createMock(ProxyBasedMockFactory.java:91)
>   at 
> org.spockframework.mock.runtime.ProxyBasedMockFactory.create(ProxyBasedMockFactory.java:49)
>   at 
> org.spockframework.mock.runtime.JavaMockFactory.create(JavaMockFactory.java:51)
>   at 
> org.spockframework.mock.runtime.CompositeMockFactory.create(CompositeMockFactory.java:44)
>   at 
> org.spockframework.lang.SpecInternals.createMock(SpecInternals.java:45)
>   at 
> org.spockframework.lang.SpecInternals.createMockImpl(SpecInternals.java:281)
>   at org.spockframework.lang.SpecInternals.MockImpl(SpecInternals.java:99)
>   at 
> groovy.lang.GroovyObjectSupport.invokeMethod(GroovyObjectSupport.java:46)
>   at 
> groovy.lang.GroovyObjectSupport.invokeMethod(GroovyObjectSupport.java:46)
>   at 
> org.apache.nifi.web.dao.impl.StandardTemplateDAOSpec$__spock_feature_0_0_closure2.closure7$_closure8(StandardTemplateDAOSpec.groovy:71)
>   at groovy.lang.Closure.call(Closure.java:426)
>   at 
> org.spockframework.mock.response.CodeResponseGenerator.invokeClosure(CodeResponseGenerator.java:53)
>   at 
> org.spockframework.mock.response.CodeResponseGenerator.doRespond(CodeResponseGenerator.java:36)
>   at 
> org.spockframework.mock.response.SingleResponseGenerator.respond(SingleResponseGenerator.java:31)
>   at 
> org.spockframework.mock.response.ResponseGeneratorChain.respond(ResponseGeneratorChain.java:45)
>   at 
> org.spockframework.mock.runtime.MockInteraction.accept(MockInteraction.java:76)
>   at 
> org.spockframework.mock.runtime.MockInteractionDecorator.accept(MockInteractionDecorator.java:46)
>   at 
> org.spockframework.mock.runtime.InteractionScope$1.accept(InteractionScope.java:41)
>   at 
> org.spockframework.mock.runtime.MockController.handle(MockController.java:39)
>   at 
> org.spockframework.mock.runtime.JavaMockInterceptor.intercept(JavaMockInterceptor.java:72)
>   at 
> org.spockframework.mock.runtime.CglibMockInterceptorAdapter.intercept(CglibMockInterceptorAdapter.java:30)
>   at 
> org.apache.nifi.web.dao.impl.StandardTemplateDAO.instantiateTemplate(StandardTemplateDAO.java:91)
>   at org.apache.nifi.web.dao.impl.StandardTemplateDAOSpec.test 
> InstantiateTemplate moves and scales 
> templates(StandardTemplateDAOSpec.groovy:62)
> {code}
> Upgrading to CGLIB 3.2 resolves the issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


nifi git commit: NIFI-1993 upgraded CGLIB to 3.2.2

2016-06-10 Thread mattyb149
Repository: nifi
Updated Branches:
  refs/heads/master c120c4982 -> 1b965cb66


NIFI-1993 upgraded CGLIB to 3.2.2

This closes #516


Project: http://git-wip-us.apache.org/repos/asf/nifi/repo
Commit: http://git-wip-us.apache.org/repos/asf/nifi/commit/1b965cb6
Tree: http://git-wip-us.apache.org/repos/asf/nifi/tree/1b965cb6
Diff: http://git-wip-us.apache.org/repos/asf/nifi/diff/1b965cb6

Branch: refs/heads/master
Commit: 1b965cb667e6a3c3112e05777cc369357b5d3f71
Parents: c120c49
Author: Oleg Zhurakousky 
Authored: Thu Jun 9 20:08:00 2016 -0400
Committer: Matt Burgess 
Committed: Fri Jun 10 12:19:50 2016 -0400

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/nifi/blob/1b965cb6/pom.xml
--
diff --git a/pom.xml b/pom.xml
index d06b274..df4e70c 100644
--- a/pom.xml
+++ b/pom.xml
@@ -546,7 +546,7 @@ language governing permissions and limitations under the License. -->
             <dependency>
                 <groupId>cglib</groupId>
                 <artifactId>cglib-nodep</artifactId>
-                <version>3.1</version>
+                <version>3.2.2</version>
             </dependency>
             <dependency>
                 <groupId>org.apache.commons</groupId>



[jira] [Commented] (NIFI-1037) Hdfs Inotify processor

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324687#comment-15324687
 ] 

ASF GitHub Bot commented on NIFI-1037:
--

Github user jjmeyer0 commented on the issue:

https://github.com/apache/nifi/pull/493
  
Sorry for the late response, this week has been a bit crazy. I updated the 
processor to add all the suggestions (except the filtering update you talked 
about). I will look into updating the way the processor does filtering, but I 
have a couple of concerns. For example, the comma-separated list of paths won't 
always work: HDFS directories can technically contain commas. Also, I couldn't 
find an example of how regular expressions work with the expression language, 
and I'm a bit curious how they'll interact with one another. I'll try to find 
some time to play around with it this weekend to see.
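A minimal, hypothetical Java sketch of the delimiter concern raised above (the paths and pattern are illustrative, not from the NiFi codebase): splitting a property value on commas mangles an HDFS path that legally contains a comma, whereas a single regular expression avoids the delimiter ambiguity.

```java
import java.util.Arrays;
import java.util.regex.Pattern;

public class PathFilterDemo {
    public static void main(String[] args) {
        // An HDFS directory name may legally contain a comma.
        String pathWithComma = "/data/reports,2016/june";

        // Comma-separated list property: the single path is wrongly split in two.
        String[] split = "/data/reports,2016/june,/tmp/staging".split(",");
        System.out.println(Arrays.toString(split)); // three fragments, not two paths

        // Regex alternative: one pattern, no delimiter ambiguity.
        Pattern filter = Pattern.compile("^/data/reports,2016/.*|^/tmp/staging$");
        System.out.println(filter.matcher(pathWithComma).matches()); // prints "true"
    }
}
```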


> Hdfs Inotify processor
> --
>
> Key: NIFI-1037
> URL: https://issues.apache.org/jira/browse/NIFI-1037
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: nicolas maillard
>Assignee: Josh Meyer
>Priority: Minor
>
> HDFS has an Inotify interface that enables access to the HDFS edit stream.
> https://issues.apache.org/jira/browse/HDFS-6634
> Creating a processor to listen in and get notifications, either for select 
> directories or select actions, would have many applications:
> - Stream HDFS activity to a search engine
> - Wait for specific actions or files to trigger workflows, like duplication 
> to other clusters
> - Validate ingestion processes
> - and probably more I haven't thought of.
> I have a first working beta version that needs to evolve. It reuses the 
> Hadoop nar bundle and needs an HDFS 2.7 dependency, currently done through 
> editing the Hadoop lib bundle.
> Let me know if this idea makes sense and would be of interest to the 
> community; I would love to contribute it.





[jira] [Commented] (NIFI-1280) Create FilterCSVColumns Processor

2016-06-10 Thread Toivo Adams (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324583#comment-15324583
 ] 

Toivo Adams commented on NIFI-1280:
---

Josh, Mark, 

I was too categorical when I declared
“read the data multiple times in order to perform the JOIN doesn't sound good. “

Hopefully everyone is just too busy and this is not a blocker.

If reading the data multiple times is needed and cannot be avoided, then let's 
implement it.
As I understand it, reading the data several times won't always happen, only 
during joins?
Maybe it's tolerable, at least for some time, until a better solution is 
available.

Thoughts?

May I do something?

Thanks
toivo


> Create FilterCSVColumns Processor
> -
>
> Key: NIFI-1280
> URL: https://issues.apache.org/jira/browse/NIFI-1280
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Toivo Adams
>
> We should have a Processor that allows users to easily filter out specific 
> columns from CSV data. For instance, a user would configure two different 
> properties: "Columns of Interest" (a comma-separated list of column indexes) 
> and "Filtering Strategy" (Keep Only These Columns, Remove Only These Columns).
> We can do this today with ReplaceText, but it is far more difficult than it 
> would be with this Processor, as the user has to use Regular Expressions, 
> etc. with ReplaceText.
> Eventually a Custom UI could even be built that allows a user to upload a 
> sample CSV and choose which columns to keep from there, similar to the way 
> that Excel works when importing CSV by dragging and selecting the desired 
> columns. That would certainly be a larger undertaking and would not need to 
> be done for an initial implementation.
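The "Keep Only These Columns" strategy described above can be sketched as follows. This is an illustrative stand-alone snippet, not the proposed processor; it assumes simple comma-delimited rows without quoted fields, and the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class FilterCsvColumns {
    // Keep only the cells at the given zero-based column indexes.
    static String keepColumns(String row, int[] indexes) {
        String[] cells = row.split(",", -1); // -1 preserves trailing empty cells
        List<String> kept = new ArrayList<>();
        for (int i : indexes) {
            if (i < cells.length) {
                kept.add(cells[i]);
            }
        }
        return String.join(",", kept);
    }

    public static void main(String[] args) {
        System.out.println(keepColumns("id,name,email,age", new int[]{0, 2}));
        // prints "id,email"
    }
}
```

A "Remove Only These Columns" strategy would be the complement: iterate over all cell indexes and skip the configured ones.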





[jira] [Commented] (NIFI-1974) Support Custom Properties in Expression Language

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324549#comment-15324549
 ] 

ASF GitHub Bot commented on NIFI-1974:
--

Github user YolandaMDavis commented on the issue:

https://github.com/apache/nifi/pull/501
  
@markap14 per our offline discussion I made changes that provide the variable 
registry via constructors and eliminate access from the 
ControllerServiceLookup.


> Support Custom Properties in Expression Language
> 
>
> Key: NIFI-1974
> URL: https://issues.apache.org/jira/browse/NIFI-1974
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
> Fix For: 1.0.0
>
>
> Add a property in the "nifi.properties" config file to allow users to specify 
> a list of custom properties files (containing data such as environment-specific 
> values, sensitive values, etc.). The key/value pairs should be loaded upon NiFi 
> startup and made available to processors for use in Expression Language. 
> Optimally this will lay the groundwork for a UI-driven Variable Registry.
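The loading step described above can be sketched in a few lines of plain Java. This is a hedged illustration of the idea only: the class name, the merge policy (later files win), and the returned map shape are assumptions, not NiFi's actual variable registry implementation.

```java
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class SimpleVariableRegistry {
    // Load key/value pairs from one or more properties files into a single map.
    static Map<String, String> load(String... files) throws IOException {
        Map<String, String> registry = new HashMap<>();
        for (String file : files) {
            Properties props = new Properties();
            try (Reader reader = Files.newBufferedReader(Paths.get(file))) {
                props.load(reader);
            }
            // Later files win on duplicate keys (assumed policy).
            props.forEach((k, v) -> registry.put(k.toString(), v.toString()));
        }
        return registry;
    }
}
```

Expression Language evaluation could then fall back to this map when an attribute is not present on the FlowFile.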





[jira] [Commented] (NIFI-1993) Upgrade CGLIB to the latest 3.2

2016-06-10 Thread Oleg Zhurakousky (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324380#comment-15324380
 ] 

Oleg Zhurakousky commented on NIFI-1993:


Just to connect more dots: the symptoms of this issue are very similar to 
NIFI-1595, basically improper handling of _bridge_ and _synthetic_ methods.
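For context, here is a tiny stand-alone example (illustrative only, not from the NiFi codebase) of how a compiler-generated bridge method arises: overriding a generic method with a narrowed return type makes javac emit an extra `Object`-typed bridge, which is exactly the kind of method older CGLIB versions could mishandle when building proxies.

```java
import java.lang.reflect.Method;

class Box<T> {
    T get() { return null; }
}

class StringBox extends Box<String> {
    @Override
    String get() { return "value"; } // javac also emits a bridge: Object get()
}

public class BridgeDemo {
    public static void main(String[] args) {
        // StringBox declares two get() methods: the real one and the bridge.
        for (Method m : StringBox.class.getDeclaredMethods()) {
            System.out.println(m.getName()
                    + " returns " + m.getReturnType().getSimpleName()
                    + " bridge=" + m.isBridge());
        }
    }
}
```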

> Upgrade CGLIB to the latest 3.2
> ---
>
> Key: NIFI-1993
> URL: https://issues.apache.org/jira/browse/NIFI-1993
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Oleg Zhurakousky
>Assignee: Oleg Zhurakousky
>Priority: Minor
> Fix For: 1.0.0
>
>
> While working on NIFI-826 I've encountered a problem related to Groovy tests 
> (Spock) and Java 1.8, which is essentially described here: 
> https://groups.google.com/forum/#!topic/spockframework/59WIHGgcSNE
> The stack trace from the failing Spock test:
> {code}
> test InstantiateTemplate moves and scales 
> templates[0](org.apache.nifi.web.dao.impl.StandardTemplateDAOSpec)  Time 
> elapsed: 0.46 sec  <<< ERROR!
> java.lang.IllegalArgumentException: null
>   at 
> net.sf.cglib.proxy.BridgeMethodResolver.resolveAll(BridgeMethodResolver.java:61)
>   at net.sf.cglib.proxy.Enhancer.emitMethods(Enhancer.java:911)
>   at net.sf.cglib.proxy.Enhancer.generateClass(Enhancer.java:498)
>   at 
> net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
>   at 
> net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
>   at net.sf.cglib.proxy.Enhancer.createHelper(Enhancer.java:377)
>   at net.sf.cglib.proxy.Enhancer.createClass(Enhancer.java:317)
>   at 
> org.spockframework.mock.runtime.ProxyBasedMockFactory$CglibMockFactory.createMock(ProxyBasedMockFactory.java:91)
>   at 
> org.spockframework.mock.runtime.ProxyBasedMockFactory.create(ProxyBasedMockFactory.java:49)
>   at 
> org.spockframework.mock.runtime.JavaMockFactory.create(JavaMockFactory.java:51)
>   at 
> org.spockframework.mock.runtime.CompositeMockFactory.create(CompositeMockFactory.java:44)
>   at 
> org.spockframework.lang.SpecInternals.createMock(SpecInternals.java:45)
>   at 
> org.spockframework.lang.SpecInternals.createMockImpl(SpecInternals.java:281)
>   at org.spockframework.lang.SpecInternals.MockImpl(SpecInternals.java:99)
>   at 
> groovy.lang.GroovyObjectSupport.invokeMethod(GroovyObjectSupport.java:46)
>   at 
> groovy.lang.GroovyObjectSupport.invokeMethod(GroovyObjectSupport.java:46)
>   at 
> org.apache.nifi.web.dao.impl.StandardTemplateDAOSpec$__spock_feature_0_0_closure2.closure7$_closure8(StandardTemplateDAOSpec.groovy:71)
>   at groovy.lang.Closure.call(Closure.java:426)
>   at 
> org.spockframework.mock.response.CodeResponseGenerator.invokeClosure(CodeResponseGenerator.java:53)
>   at 
> org.spockframework.mock.response.CodeResponseGenerator.doRespond(CodeResponseGenerator.java:36)
>   at 
> org.spockframework.mock.response.SingleResponseGenerator.respond(SingleResponseGenerator.java:31)
>   at 
> org.spockframework.mock.response.ResponseGeneratorChain.respond(ResponseGeneratorChain.java:45)
>   at 
> org.spockframework.mock.runtime.MockInteraction.accept(MockInteraction.java:76)
>   at 
> org.spockframework.mock.runtime.MockInteractionDecorator.accept(MockInteractionDecorator.java:46)
>   at 
> org.spockframework.mock.runtime.InteractionScope$1.accept(InteractionScope.java:41)
>   at 
> org.spockframework.mock.runtime.MockController.handle(MockController.java:39)
>   at 
> org.spockframework.mock.runtime.JavaMockInterceptor.intercept(JavaMockInterceptor.java:72)
>   at 
> org.spockframework.mock.runtime.CglibMockInterceptorAdapter.intercept(CglibMockInterceptorAdapter.java:30)
>   at 
> org.apache.nifi.web.dao.impl.StandardTemplateDAO.instantiateTemplate(StandardTemplateDAO.java:91)
>   at org.apache.nifi.web.dao.impl.StandardTemplateDAOSpec.test 
> InstantiateTemplate moves and scales 
> templates(StandardTemplateDAOSpec.groovy:62)
> {code}
> Upgrading to CGLIB 3.2 resolves the issue





[jira] [Resolved] (NIFI-1883) Controller Service referencing components

2016-06-10 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman resolved NIFI-1883.
---
Resolution: Fixed

> Controller Service referencing components
> -
>
> Key: NIFI-1883
> URL: https://issues.apache.org/jira/browse/NIFI-1883
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.0.0
>
>
> Controller Service referencing components are not restored at start up.





[jira] [Assigned] (NIFI-1883) Controller Service referencing components

2016-06-10 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reassigned NIFI-1883:
-

Assignee: Matt Gilman

> Controller Service referencing components
> -
>
> Key: NIFI-1883
> URL: https://issues.apache.org/jira/browse/NIFI-1883
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.0.0
>
>
> Controller Service referencing components are not restored at start up.





[jira] [Commented] (NIFI-1901) Restore access control unit tests

2016-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324343#comment-15324343
 ] 

ASF GitHub Bot commented on NIFI-1901:
--

GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/518

NIFI-1901: Component based access control tests

- Building component based access control tests for Connections, Funnels, 
Labels, Input Ports, Output Ports, Processors, and Process Groups.
- Tests for remaining APIs (Queue's, Controller, History, etc) will be 
coming in a subsequent commit.
- Restoring Access Token Endpoint tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-1901

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/518.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #518


commit a5fecda5a2ffb35e21d950aa19a07127e19a419e
Author: Bryan Rosander 
Date:   2016-05-27T14:56:02Z

NIFI-1975 - Processor for parsing evtx files

Signed-off-by: Matt Burgess 

This closes #492

commit c120c4982d4fc811b06b672e3983b8ca5fb8ae64
Author: Koji Kawamura 
Date:   2016-06-06T13:19:26Z

NIFI-1857: HTTPS Site-to-Site

- Enable HTTP(S) for Site-to-Site communication
- Support HTTP Proxy in the middle of local and remote NiFi
- Support BASIC and DIGEST auth with Proxy Server
- Provide 2-phase style commit same as existing socket version
- [WIP] Testing with the latest cluster env (without NCM) hasn't been done yet

- Fixed buffer handling issues in async http client POST
- Fixed JS error when applying Remote Process Group Port setting from UI
- Use compression setting from UI
- Removed already finished TODO comments

- Added additional buffer draining code after receiving EOF
- Added inspection and assert code to make sure Site-to-Site client has
  written data fully to output
stream
- Changed default nifi.remote.input.secure from true to false

This closes #497.

commit d9dcb46dc4be926275131a5b552dfbd33db4f3ad
Author: Matt Gilman 
Date:   2016-06-10T11:57:02Z

NIFI-1901:
- Building component based access control tests for Connections, Funnels, 
Labels, Input Ports, Output Ports, Processors, and Process Groups.
- Restoring Access Token Endpoint tests.




> Restore access control unit tests
> -
>
> Key: NIFI-1901
> URL: https://issues.apache.org/jira/browse/NIFI-1901
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core Framework
>Reporter: Matt Gilman
>Priority: Critical
> Fix For: 1.0.0
>
>
> The previous access control tests have been ignored as they are designed the 
> around role based authorities. New, more comprehensive, tests need to be 
> introduced once the fine-grained component authorization is in place.


