[GitHub] nifi pull request #2219: NiFi-4436: Add UI controls for starting/stopping/re...

2017-12-29 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2219#discussion_r159118646
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/WEB-INF/partials/canvas/save-flow-version-dialog.jsp
 ---
@@ -0,0 +1,57 @@
+<%--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+--%>
+<%@ page contentType="text/html" pageEncoding="UTF-8" session="false" %>
+
+
+
+Registry
+
+
+
+
+
+
+Bucket
+
+
+
+
+
+
+Name
+
+
+
+
+
+
+
+
+Description
--- End diff --

Where can I edit `Description` once a flow is added to Registry?


---


[GitHub] nifi pull request #2219: NiFi-4436: Add UI controls for starting/stopping/re...

2017-12-29 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2219#discussion_r159118626
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/WEB-INF/partials/canvas/save-flow-version-dialog.jsp
 ---
@@ -0,0 +1,57 @@
+<%--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+--%>
+<%@ page contentType="text/html" pageEncoding="UTF-8" session="false" %>
+
+
+
+Registry
+
+
+
+
+
+
+Bucket
+
+
+
+
+
+
+Name
+
+
+
+
+
+
+
+
+Description
--- End diff --

When I looked at this dialog for the first time, I was not sure what I should 
specify for `Description` and `Comment`. It would be more user friendly if 
these read something like 'Flow Description' and 'Version Comment'.


---


[GitHub] nifi pull request #2219: NiFi-4436: Add UI controls for starting/stopping/re...

2017-12-29 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2219#discussion_r159118517
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/registry/flow/StandardFlowRegistryClient.java
 ---
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.registry.flow;
+
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+import javax.net.ssl.SSLContext;
+
+import org.apache.nifi.framework.security.util.SslContextFactory;
+import org.apache.nifi.util.NiFiProperties;
+
+public class StandardFlowRegistryClient implements FlowRegistryClient {
+private NiFiProperties nifiProperties;
+private ConcurrentMap<String, FlowRegistry> registryById = new ConcurrentHashMap<>();
+
+@Override
+public FlowRegistry getFlowRegistry(String registryId) {
+return registryById.get(registryId);
+}
+
+@Override
+public Set<String> getRegistryIdentifiers() {
+return registryById.keySet();
+}
+
+@Override
+public void addFlowRegistry(final FlowRegistry registry) {
+final boolean duplicateName = registryById.values().stream()
+.anyMatch(reg -> reg.getName().equals(registry.getName()));
+
+if (duplicateName) {
+throw new IllegalStateException("Cannot add Flow Registry 
because a Flow Registry already exists with the name " + registry.getName());
--- End diff --

The same duplicate-name check should also be performed when a FlowRegistry is 
updated. Currently, a name that is already in use can be specified when updating.
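
A minimal sketch of what that check could look like on update (hypothetical: it 
assumes an `updateFlowRegistry` method and that `FlowRegistry` exposes 
`getIdentifier()`, neither of which is shown in this diff):
```
public void updateFlowRegistry(final FlowRegistry registry) {
    // Exclude the registry being updated, then apply the same name check as addFlowRegistry
    final boolean duplicateName = registryById.values().stream()
        .filter(reg -> !reg.getIdentifier().equals(registry.getIdentifier()))
        .anyMatch(reg -> reg.getName().equals(registry.getName()));

    if (duplicateName) {
        throw new IllegalStateException("Cannot update Flow Registry because a Flow Registry "
            + "already exists with the name " + registry.getName());
    }

    registryById.put(registry.getIdentifier(), registry);
}
```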


---


[jira] [Commented] (NIFI-4715) ListS3 produces duplicates in frequently updated buckets

2017-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306676#comment-16306676
 ] 

ASF GitHub Bot commented on NIFI-4715:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2361#discussion_r159118001
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/ListS3.java
 ---
@@ -264,18 +265,19 @@ public void onTrigger(final ProcessContext context, 
final ProcessSession session
 }
 bucketLister.setNextMarker();
 
+totalListCount += listCount;
 commit(context, session, listCount);
 listCount = 0;
 } while (bucketLister.isTruncated());
+
+// Update state manager with the most recent timestamp
 currentTimestamp = maxTimestamp;
+persistState(context);
 
 final long listMillis = 
TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
 getLogger().info("Successfully listed S3 bucket {} in {} millis", 
new Object[]{bucket, listMillis});
 
-if (!commit(context, session, listCount)) {
-if (currentTimestamp > 0) {
-persistState(context);
-}
+if (totalListCount == 0) {
--- End diff --

Good catch!


> ListS3 produces duplicates in frequently updated buckets
> 
>
> Key: NIFI-4715
> URL: https://issues.apache.org/jira/browse/NIFI-4715
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
> Environment: All
>Reporter: Milan Das
> Attachments: List-S3-dup-issue.xml, screenshot-1.png
>
>
> ListS3 state is implemented using a HashSet, which is not thread safe. When 
> ListS3 operates in multi-threaded mode, it sometimes tries to list the same 
> file from the S3 bucket. It seems the HashSet data is getting corrupted.
> currentKeys = new HashSet<>(); // needs a thread-safe implementation such as 
> currentKeys = ConcurrentHashMap.newKeySet();
> *Update*:
> This is not a HashSet issue.
> Root cause: a file gets uploaded to S3 while ListS3 is in progress.
> In onTrigger, maxTimestamp is initialized to 0L, which clears the keys as 
> per the code below.
> When lastModifiedTime on an S3 object is the same as currentTimestamp, the 
> listed key should be skipped. Because the keys were cleared, the same file 
> is listed again.
> I think the fix should be to initialize maxTimestamp with currentTimestamp, 
> not 0L:
> {code}
>  long maxTimestamp = currentTimestamp;
> {code}
> The following block is clearing the keys:
> {code:title=org.apache.nifi.processors.aws.s3.ListS3.java|borderStyle=solid}
>  if (lastModified > maxTimestamp) {
> maxTimestamp = lastModified;
> currentKeys.clear();
> getLogger().debug("clearing keys");
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2361: NIFI-4715: ListS3 produces duplicates in frequently...

2017-12-29 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2361#discussion_r159117995
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/ListS3.java
 ---
@@ -264,18 +265,19 @@ public void onTrigger(final ProcessContext context, 
final ProcessSession session
 }
 bucketLister.setNextMarker();
 
+totalListCount += listCount;
 commit(context, session, listCount);
 listCount = 0;
 } while (bucketLister.isTruncated());
+
+// Update state manager with the most recent timestamp
 currentTimestamp = maxTimestamp;
+persistState(context);
--- End diff --

These two lines of code could be embedded in the `commit` method.
```
 // Update state manager with the most recent timestamp
 currentTimestamp = maxTimestamp;
 persistState(context);
```
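
A minimal sketch of the suggested embedding (hypothetical; it assumes `commit` 
can see `maxTimestamp`, e.g. via a field or an extra parameter). The follow-up 
comment further below explains why this exact approach broke the unit tests:
```
private boolean commit(final ProcessContext context, final ProcessSession session, final int listCount) {
    if (listCount > 0) {
        session.commit();
        // Update the state manager with the most recent timestamp
        currentTimestamp = maxTimestamp;
        persistState(context);
        return true;
    }
    return false;
}
```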


---


[GitHub] nifi-registry pull request #74: NIFIREG-89 Add landing page for root URL '/'

2017-12-29 Thread dannylane
GitHub user dannylane opened a pull request:

https://github.com/apache/nifi-registry/pull/74

NIFIREG-89 Add landing page for root URL '/'

Following on from a suggestion on the dev mailing list, I created 
[NIFIREG-89](https://issues.apache.org/jira/browse/NIFIREG-89) to add a landing 
page informing the user that they may have mistyped the URL, similar to the 
page that currently exists in NiFi. I followed an approach similar to the one 
used in NiFi.

This is the page that has been added:

![screenshot_landing_page](https://user-images.githubusercontent.com/250202/34450579-d86c85a0-ed04-11e7-8ae5-2bfb7878df91.png)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dannylane/nifi-registry NIFIREG-89

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/74.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #74


commit 09d6ec4ba0e7f36e07f07deac76d4304339d32b6
Author: Danny Lane 
Date:   2017-12-30T01:43:09Z

NIFIREG-89 Add landing page for default url




---


[jira] [Closed] (NIFIREG-82) Properly handle when someone goes to registry but without the "/nifi-registry/" path

2017-12-29 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-82?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall closed NIFIREG-82.
---
Resolution: Duplicate

> Properly handle when someone goes to registry but without the 
> "/nifi-registry/" path
> 
>
> Key: NIFIREG-82
> URL: https://issues.apache.org/jira/browse/NIFIREG-82
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Joseph Percivall
>Priority: Trivial
>
> Currently, Registry goes to the Jetty 404 page. We should have something 
> similar to NiFi's page that says "Did you mean: /nifi" / "You may have 
> mistyped..."



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (NIFIREG-89) Add a default URL handler for root '/' instead of returning a 404

2017-12-29 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-89?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall reassigned NIFIREG-89:
---

Assignee: Danny Lane

> Add a default URL handler for root '/' instead of returning a 404
> -
>
> Key: NIFIREG-89
> URL: https://issues.apache.org/jira/browse/NIFIREG-89
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Danny Lane
>Assignee: Danny Lane
>Priority: Minor
>  Labels: usability
>
> Currently, when you land on the root of a NiFi Registry deployment you get a 
> 404 response from Jetty.
> It was suggested on the mailing list to add a page similar to the NiFi 'You 
> may have mistyped...' page.
> This is an item to track that suggestion and the associated work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFIREG-89) Add a default URL handler for root '/' instead of returning a 404

2017-12-29 Thread Danny Lane (JIRA)
Danny Lane created NIFIREG-89:
-

 Summary: Add a default URL handler for root '/' instead of 
returning a 404
 Key: NIFIREG-89
 URL: https://issues.apache.org/jira/browse/NIFIREG-89
 Project: NiFi Registry
  Issue Type: Improvement
Reporter: Danny Lane
Priority: Minor


Currently, when you land on the root of a NiFi Registry deployment you get a 404 
response from Jetty.
It was suggested on the mailing list to add a page similar to the NiFi 'You may 
have mistyped...' page.
This is an item to track that suggestion and the associated work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2361: NIFI-4715: ListS3 produces duplicates in frequently...

2017-12-29 Thread adamlamar
Github user adamlamar commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2361#discussion_r159112711
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/ListS3.java
 ---
@@ -264,18 +265,19 @@ public void onTrigger(final ProcessContext context, 
final ProcessSession session
 }
 bucketLister.setNextMarker();
 
+totalListCount += listCount;
 commit(context, session, listCount);
 listCount = 0;
 } while (bucketLister.isTruncated());
+
+// Update state manager with the most recent timestamp
 currentTimestamp = maxTimestamp;
+persistState(context);
 
 final long listMillis = 
TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
 getLogger().info("Successfully listed S3 bucket {} in {} millis", 
new Object[]{bucket, listMillis});
 
-if (!commit(context, session, listCount)) {
-if (currentTimestamp > 0) {
-persistState(context);
-}
--- End diff --

Note that this `commit` isn't required, since the last iteration of the main 
do/while loop already does a `commit`. Further, the loop sets `listCount` to 
zero, so this branch would always be taken.


---


[GitHub] nifi issue #2361: NIFI-4715: ListS3 produces duplicates in frequently update...

2017-12-29 Thread adamlamar
Github user adamlamar commented on the issue:

https://github.com/apache/nifi/pull/2361
  
@ijokarumawak I did as you suggested and pulled `persistState` out in the 
case when no new keys have been listed, but this actually caused unit tests to 
fail. This is because `currentTimestamp` never changes during the main loop: 
even though `commit` calls `persistState`, the value of `currentTimestamp` is 
not updated until the main loop exits, which is why `persistState` is required 
on both exit paths.

Instead, I took a slightly different approach with the change just pushed. 
Since `currentTimestamp` is the current value persisted to the state manager, 
`maxTimestamp` is the highest timestamp seen in the main loop, and 
`currentKeys` is tied to `maxTimestamp` (not `currentTimestamp`), I removed the 
`persistState` call from `commit` and now call `persistState` only at the end 
of `onTrigger`. While this still calls `persistState` on each exit, it reduces 
the number of `persistState` calls to once per `onTrigger` rather than once per 
1000 keys iterated (as `commit` did previously); a rough sketch follows below.

I did a bunch of manual testing with concurrent `PutS3Object` and `ListS3` 
and always got the correct number of listed keys, even when uploading 20k+ 
objects using 10 threads. I tried a few strategies to skip `persistState` when 
nothing had changed, but in manual testing they always produced the wrong number 
of keys, although sometimes only off by one. The current code should be quite an 
improvement in the load on the state manager, even if it isn't ideal.

I also introduced `totalListCount`, which helps tighten up the log messages 
a bit. Previously a single `onTrigger` could log "successfully listed X 
objects" followed by "no new objects to list" (this was apparent in the unit 
test output). `totalListCount` also avoids an unnecessary `yield`.

There's a lot going on in this one - let me know if you have any other 
questions!
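
Piecing the quoted diffs together, the flow described above looks roughly like 
the following sketch (abridged and hedged, not the exact PR code; the 
per-object listing logic is elided):
```
long totalListCount = 0;
long maxTimestamp = currentTimestamp; // start from persisted state

do {
    // ... list a page of objects, updating maxTimestamp, currentKeys and listCount ...
    totalListCount += listCount;
    commit(context, session, listCount); // transfers flowfiles; no longer persists state
    listCount = 0;
} while (bucketLister.isTruncated());

// Persist state once per onTrigger instead of once per committed batch
currentTimestamp = maxTimestamp;
persistState(context);

if (totalListCount == 0) {
    getLogger().debug("No new objects in S3 bucket {} to list. Yielding.", new Object[]{bucket});
    context.yield();
}
```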


---


[GitHub] nifi pull request #2361: NIFI-4715: ListS3 produces duplicates in frequently...

2017-12-29 Thread adamlamar
Github user adamlamar commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2361#discussion_r159111452
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/ListS3.java
 ---
@@ -267,26 +267,28 @@ public void onTrigger(final ProcessContext context, 
final ProcessSession session
 commit(context, session, listCount);
 listCount = 0;
 } while (bucketLister.isTruncated());
-currentTimestamp = maxTimestamp;
+
+if (maxTimestamp > currentTimestamp) {
+currentTimestamp = maxTimestamp;
+}
 
 final long listMillis = 
TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
 getLogger().info("Successfully listed S3 bucket {} in {} millis", 
new Object[]{bucket, listMillis});
 
 if (!commit(context, session, listCount)) {
-if (currentTimestamp > 0) {
-persistState(context);
-}
 getLogger().debug("No new objects in S3 bucket {} to list. 
Yielding.", new Object[]{bucket});
 context.yield();
 }
+
+// Persist all state, including any currentKeys
+persistState(context);
--- End diff --

@ijokarumawak I started writing an example, but then realized you are 
correct - there is no need to manually call `persistState` because any addition 
to `currentKeys` will also increment `listCount`, and the normal update 
mechanism will take over from there. We shouldn't need a `dirtyState` flag.
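
For reference, a condensed sketch of the coupling described here (hypothetical; 
simplified from the listing loop, not the exact PR code):
```
// Any newly listed object both records its key and bumps listCount,
// so the normal commit/persist path already runs whenever currentKeys changes.
if (lastModified >= currentTimestamp && !currentKeys.contains(key)) {
    currentKeys.add(key);
    listCount++;
}
```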


---


[jira] [Commented] (NIFI-4701) Support encrypted properties in authorizers.xml

2017-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306534#comment-16306534
 ] 

ASF GitHub Bot commented on NIFI-4701:
--

Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2350#discussion_r159102652
  
--- Diff: 
nifi-toolkit/nifi-toolkit-encrypt-config/src/test/groovy/org/apache/nifi/properties/ConfigEncryptionToolTest.groovy
 ---
@@ -319,6 +320,59 @@ class ConfigEncryptionToolTest extends GroovyTestCase {
 }
 }
 
+@Test
+void testShouldParseAuthorizersArgument() {
+// Arrange
+def flags = ["-a", "--authorizers"]
+String authorizersPath = "src/test/resources/authorizers.xml"
+ConfigEncryptionTool tool = new ConfigEncryptionTool()
+
+// Act
+flags.each { String arg ->
+tool.parse([arg, authorizersPath] as String[])
+logger.info("Parsed authorizers.xml location: 
${tool.authorizersPath}")
+
+// Assert
+assert tool.authorizersPath == authorizersPath
+assert tool.handlingAuthorizers
+}
+}
+
+@Test
+void testShouldParseOutputAuthorizersArgument() {
+// Arrange
+def flags = ["-u", "--outputAuthorizers"]
+String authorizersPath = "src/test/resources/authorizers.xml"
+ConfigEncryptionTool tool = new ConfigEncryptionTool()
+
+// Act
+flags.each { String arg ->
+tool.parse([arg, authorizersPath, "-a", authorizersPath] as 
String[])
--- End diff --

Change this so that `outputAuthorizersPath` is different from `authorizersPath` 
(just call `authorizersPath.reverse()`; it doesn't have to be a valid file), so 
that the equality check at the end proves the correct value is being read 
here; a sketch follows below.
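
A sketch of the adjusted test under that suggestion (hypothetical; it assumes 
`tool.outputAuthorizersPath` is the property populated by 
`-u`/`--outputAuthorizers`):
```
// Act: use a distinct output path (it need not be a valid file)
String authorizersPath = "src/test/resources/authorizers.xml"
String outputAuthorizersPath = authorizersPath.reverse()

flags.each { String arg ->
    tool.parse([arg, outputAuthorizersPath, "-a", authorizersPath] as String[])

    // Assert: distinct values prove the right flag populated each property
    assert tool.outputAuthorizersPath == outputAuthorizersPath
    assert tool.authorizersPath == authorizersPath
}
```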


> Support encrypted properties in authorizers.xml
> ---
>
> Key: NIFI-4701
> URL: https://issues.apache.org/jira/browse/NIFI-4701
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Kevin Doran
>Assignee: Kevin Doran
> Fix For: 1.5.0
>
>
> Since the addition of LdapUserGroupProvider (see NIFI-4059) in v1.4.0, 
> authorizers.xml can now contain properties for LDAP Server credentials. 
> This ticket is to enable properties in authorizers.xml to be encrypted, so 
> that the LDAP Server Manager credentials can be protected similarly to 
> LdapProvider, which is configured via login-identity-providers.xml.
> The main changes in nifi-authorizers are:
> * authorizers.xsd, to add an encryption attribute to Property
> * PropertyAuthorizerFactoryBean, to check for that attribute and decrypt 
> the property value if necessary when creating the configuration context
> Additionally, support for creating an encrypted authorizers.xml, protected by 
> the NiFi master key, should be added to the Encrypt Tool in NiFi Toolkit.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2350: NIFI-4701 Support encrypted authorizers.xml

2017-12-29 Thread alopresto
Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2350#discussion_r159102294
  
--- Diff: 
nifi-toolkit/nifi-toolkit-encrypt-config/src/main/groovy/org/apache/nifi/properties/ConfigEncryptionTool.groovy
 ---
@@ -921,6 +1090,39 @@ class ConfigEncryptionTool {
 }
 }
 
+/**
+ * Writes the contents of the authorizers configuration file with 
encrypted values to the output {@code authorizers.xml} file.
+ *
+ * @throw IOException if there is a problem reading or writing the 
authorizers.xml file
+ */
+private void writeAuthorizers() throws IOException {
+if (!outputAuthorizersPath) {
+throw new IllegalArgumentException("Cannot write encrypted 
properties to empty authorizers.xml path")
+}
+
+File outputAuthorizersFile = new File(outputAuthorizersPath)
+
+if (isSafeToWrite(outputAuthorizersFile)) {
+try {
+String updatedXmlContent
+File authorizersFile = new File(authorizersPath)
+if (authorizersFile.exists() && authorizersFile.canRead()) 
{
+// Instead of just writing the XML content to a file, 
this method attempts to maintain the structure of the original file and 
preserves comments
+updatedXmlContent = 
serializeAuthorizersAndPreserveFormat(authorizers, authorizersFile).join("\n")
+}
--- End diff --

Due to a possible race condition (`authorizersFile` exists and can be read 
when the tool execution starts, but has been deleted/made unreadable by an 
external process before `writeAuthorizers` executes), the value of 
`updatedXmlContent` will be empty, and it will overwrite `authorizers.xml`. 
There should be an `else` branch here which simply serializes `authorizers` to 
XML, without the preserved whitespace and comments, in order to maintain the 
content; a sketch follows below.

This should probably also be done for the LDAP section. 
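
A minimal sketch of that fallback (hypothetical; it assumes `authorizers` 
already holds the updated, serialized XML string produced earlier in the tool):
```
if (authorizersFile.exists() && authorizersFile.canRead()) {
    // Preserve the original file's structure and comments
    updatedXmlContent = serializeAuthorizersAndPreserveFormat(authorizers, authorizersFile).join("\n")
} else {
    // Fall back to the plain serialized content rather than overwriting
    // authorizers.xml with an empty string
    updatedXmlContent = authorizers
}
```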


---


[GitHub] nifi pull request #2350: NIFI-4701 Support encrypted authorizers.xml

2017-12-29 Thread alopresto
Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2350#discussion_r159102075
  
--- Diff: 
nifi-toolkit/nifi-toolkit-encrypt-config/src/main/groovy/org/apache/nifi/properties/ConfigEncryptionTool.groovy
 ---
@@ -772,6 +899,48 @@ class ConfigEncryptionTool {
 }
 }
 
+String encryptAuthorizers(String plainXml, String newKeyHex = keyHex) {
+AESSensitivePropertyProvider sensitivePropertyProvider = new 
AESSensitivePropertyProvider(newKeyHex)
+
+// TODO: Switch to XmlParser & XmlNodePrinter to maintain "empty" 
element structure
+try {
+def doc = new XmlSlurper().parseText(plainXml)
+// Find the provider element by class even if it has been 
renamed
+def passwords = doc.userGroupProvider.find { it.'class' as 
String == LDAP_USER_GROUP_PROVIDER_CLASS }
+.property.findAll {
+// Only operate on un-encrypted passwords
+it.@name =~ "Password" && (it.@encryption == "none" || 
it.@encryption == "") && it.text()
+}
+
+if (passwords.isEmpty()) {
+if (isVerbose) {
+logger.info("No unencrypted password property elements 
found in login-identity-providers.xml")
+}
+return plainXml
+}
+
+passwords.each { password ->
+if (isVerbose) {
+logger.info("Attempting to encrypt ${password.name()}")
+}
+String encryptedValue = 
sensitivePropertyProvider.protect(password.text().trim())
+password.replaceNode {
+property(name: password.@name, encryption: 
sensitivePropertyProvider.identifierKey, encryptedValue)
+}
+}
+
+// Does not preserve whitespace formatting or comments
+String updatedXml = XmlUtil.serialize(doc)
+logger.info("Updated XML content: ${updatedXml}")
+updatedXml
+} catch (Exception e) {
+if (isVerbose) {
+logger.error("Encountered exception", e)
+}
+printUsageAndThrow("Cannot encrypt login identity providers 
XML content", ExitCode.SERVICE_ERROR)
--- End diff --

This message should also be updated to `authorizers.xml`. 


---


[GitHub] nifi pull request #2350: NIFI-4701 Support encrypted authorizers.xml

2017-12-29 Thread alopresto
Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2350#discussion_r159102020
  
--- Diff: 
nifi-toolkit/nifi-toolkit-encrypt-config/src/main/groovy/org/apache/nifi/properties/ConfigEncryptionTool.groovy
 ---
@@ -772,6 +899,48 @@ class ConfigEncryptionTool {
 }
 }
 
+String encryptAuthorizers(String plainXml, String newKeyHex = keyHex) {
+AESSensitivePropertyProvider sensitivePropertyProvider = new 
AESSensitivePropertyProvider(newKeyHex)
+
+// TODO: Switch to XmlParser & XmlNodePrinter to maintain "empty" 
element structure
+try {
+def doc = new XmlSlurper().parseText(plainXml)
+// Find the provider element by class even if it has been 
renamed
+def passwords = doc.userGroupProvider.find { it.'class' as 
String == LDAP_USER_GROUP_PROVIDER_CLASS }
+.property.findAll {
+// Only operate on un-encrypted passwords
+it.@name =~ "Password" && (it.@encryption == "none" || 
it.@encryption == "") && it.text()
+}
+
+if (passwords.isEmpty()) {
+if (isVerbose) {
+logger.info("No unencrypted password property elements 
found in login-identity-providers.xml")
--- End diff --

The message should be updated to `authorizers.xml`. 


---


[GitHub] nifi pull request #2350: NIFI-4701 Support encrypted authorizers.xml

2017-12-29 Thread alopresto
Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2350#discussion_r159101938
  
--- Diff: 
nifi-toolkit/nifi-toolkit-encrypt-config/src/main/groovy/org/apache/nifi/properties/ConfigEncryptionTool.groovy
 ---
@@ -730,6 +821,42 @@ class ConfigEncryptionTool {
 }
 }
 
+    String decryptAuthorizers(String encryptedXml, String existingKeyHex = keyHex) {
+        AESSensitivePropertyProvider sensitivePropertyProvider = new AESSensitivePropertyProvider(existingKeyHex)
+
+        try {
+            def doc = new XmlSlurper().parseText(encryptedXml)
+            // Find the provider element by class even if it has been renamed
+            def passwords = doc.userGroupProvider.find { it.'class' as String == LDAP_USER_GROUP_PROVIDER_CLASS }.property.findAll {
+                it.@name =~ "Password" && it.@encryption =~ "aes/gcm/\\d{3}"
+            }
+
+            if (passwords.isEmpty()) {
+                if (isVerbose) {
+                    logger.info("No encrypted password property elements found in authorizers.xml")
+                }
+                return encryptedXml
+            }
+
+            passwords.each { password ->
+                if (isVerbose) {
--- End diff --

Informational note: in the event the file is in an unsupported state 
(perhaps manually decrypted but with the `encryption` attribute still present), 
this will log the plaintext password to the console output before attempting to 
decrypt. This is not necessarily a vulnerability of the tool, as the incoming 
data is not in the expected format. It would take additional effort to capture 
the "raw" value, compare the attempted decryption with the original value, and 
output the raw value only if the contents differ. That approach would still 
allow the tool to print the attempted decryption input value if the attempt 
throws an exception, but this level of effort is unnecessary for this edge 
case. Just a note for the future. 


---


[jira] [Commented] (NIFI-4701) Support encrypted properties in authorizers.xml

2017-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306522#comment-16306522
 ] 

ASF GitHub Bot commented on NIFI-4701:
--

Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2350#discussion_r159101417
  
--- Diff: 
nifi-toolkit/nifi-toolkit-encrypt-config/src/main/groovy/org/apache/nifi/properties/ConfigEncryptionTool.groovy
 ---
@@ -473,6 +536,34 @@ class ConfigEncryptionTool {
 }
 }
 
+    /**
+     * Loads the authorizers configuration from the provided file path.
+     *
+     * @param existingKeyHex the key used to encrypt the configs (defaults to the current key)
+     *
+     * @return the file content
+     * @throw IOException if the authorizers.xml file cannot be read
+     */
+    private String loadAuthorizers(String existingKeyHex = keyHex) throws IOException {
+        File authorizersFile
+        if (authorizersPath && (authorizersFile = new File(authorizersPath)).exists()) {
+            try {
+                String xmlContent = authorizersFile.text
+                List<String> lines = authorizersFile.readLines()
+                logger.info("Loaded Authroizers content (${lines.size()} lines)")
--- End diff --

I think this was copied from the LIP section and should be fixed there too 
-- reading the file twice (once via `text` and once via `readLines()`) is 
redundant. In order to capture the number of lines and get all the contents as 
a single string, we should use the `readLines()` method and then `join` the 
`List`. 
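
Concretely, the suggestion amounts to a single read, sketched below (the path 
is an assumed example and the file must exist for the script to run):

```
// Sketch of the suggested fix: one readLines() call yields both the line
// count and, via join, the full content -- no second read of the file.
File authorizersFile = new File("conf/authorizers.xml")  // assumed path
List<String> lines = authorizersFile.readLines()
String xmlContent = lines.join(System.lineSeparator())
println "Loaded Authorizers content (${lines.size()} lines)"
```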


> Support encrypted properties in authorizers.xml
> ---
>
> Key: NIFI-4701
> URL: https://issues.apache.org/jira/browse/NIFI-4701
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Kevin Doran
>Assignee: Kevin Doran
> Fix For: 1.5.0
>
>
> Since the addition of LdapUserGroupProvider (see NIFI-4059) in v1.4.0, 
> authorizers.xml can now contain properties for LDAP Server credentials. 
> This ticket is to enable properties in authorizers.xml to be encrypted, so 
> that the LDAP Server Manager credentials can be protected similar to 
> LdapProvider which is configured via login-identity-providers.xml.
> The main changes in nifi-authorizers are:
> * authorizers.xsd to add an encryption attribute to Property
> * PropertyAuthorizerFactoryBean to check for that attribute and decrypt 
> the property value if necessary when creating the configuration context
> Additionally, support for creating an encrypted authorizers.xml, protected by 
> the NiFi master key, should be added to the Encrypt Tool in NiFi Toolkit.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2350: NIFI-4701 Support encrypted authorizers.xml

2017-12-29 Thread alopresto
Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2350#discussion_r159101417
  
--- Diff: 
nifi-toolkit/nifi-toolkit-encrypt-config/src/main/groovy/org/apache/nifi/properties/ConfigEncryptionTool.groovy
 ---
@@ -473,6 +536,34 @@ class ConfigEncryptionTool {
 }
 }
 
+    /**
+     * Loads the authorizers configuration from the provided file path.
+     *
+     * @param existingKeyHex the key used to encrypt the configs (defaults to the current key)
+     *
+     * @return the file content
+     * @throw IOException if the authorizers.xml file cannot be read
+     */
+    private String loadAuthorizers(String existingKeyHex = keyHex) throws IOException {
+        File authorizersFile
+        if (authorizersPath && (authorizersFile = new File(authorizersPath)).exists()) {
+            try {
+                String xmlContent = authorizersFile.text
+                List<String> lines = authorizersFile.readLines()
+                logger.info("Loaded Authroizers content (${lines.size()} lines)")
--- End diff --

I think this was copied from the LIP section and should be fixed there too 
-- reading the file twice (once via `text` and once via `readLines()`) is 
redundant. In order to capture the number of lines and get all the contents as 
a single string, we should use the `readLines()` method and then `join` the 
`List`. 


---


[jira] [Created] (NIFIREG-88) Settings do not always show in Chrome until tabs are switched

2017-12-29 Thread marco polo (JIRA)
marco polo created NIFIREG-88:
-

 Summary: Settings do not always show in Chrome until tabs are 
switched
 Key: NIFIREG-88
 URL: https://issues.apache.org/jira/browse/NIFIREG-88
 Project: NiFi Registry
  Issue Type: Bug
Affects Versions: 0.1.0
 Environment: Chrome version: 63.0.3239.108 (Official Build) (64-bit)
In and out of incognito mode
Reporter: marco polo
Priority: Minor


My first attempt to open Settings was successful, but after creating a bucket 
I have to switch tabs to make the Settings overlay appear. 

The buckets endpoint responds within 20-30 ms, but switching Chrome tabs is the 
only way to show Settings. This happens in incognito mode as well (where my 
plugins are disabled). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4717) Minor bugs and performance improvements to record-oriented processors

2017-12-29 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4717:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Minor bugs and performance improvements to record-oriented processors
> -
>
> Key: NIFI-4717
> URL: https://issues.apache.org/jira/browse/NIFI-4717
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.5.0
>
>
> While testing some corner cases, I have run into a few minor issues with some 
> of the record-oriented processors/libraries:
> ConvertRecord performs very poorly if the writer is the JSON RecordSetWriter 
> and is configured to write the schema using the avro.schema attribute.
> If QueryRecord fails to parse data properly with the configured reader, it 
> may roll back the session instead of routing to failure, leading to the 
> FlowFile being stuck on the queue. This includes an error message indicating 
> that the FlowFile has an active callback or input stream that hasn't been closed.
> QueryRecord fails if referencing a field that is of UNION type.
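
The second item above is a control-flow fix: a parse failure should send the 
FlowFile to the failure relationship rather than rolling the session back, 
which only re-queues it. The pattern in miniature, as a self-contained sketch 
rather than the actual patch:

```
// Illustrative only -- not the NIFI-4717 patch. A failed parse is caught
// and the payload diverted to a failure queue instead of being re-queued
// (re-queueing is effectively what a session rollback does).
import groovy.json.JsonSlurper

List<Object> success = []
List<String> failure = []
def parser = new JsonSlurper()
['{"a": 1}', 'not json'].each { String payload ->
    try {
        success << parser.parseText(payload)
    } catch (Exception e) {
        failure << payload   // route to failure; do not roll back
    }
}
assert success.size() == 1 && failure.size() == 1
```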



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2180: Added GetMongoAggregation to support running Mongo aggrega...

2017-12-29 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2180
  
@MikeThomsen not sure I'll get a chance before the New Year, but I will 
take a look when I get some time


---


[jira] [Updated] (NIFI-4717) Minor bugs and performance improvements to record-oriented processors

2017-12-29 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4717:
---
Fix Version/s: 1.5.0

> Minor bugs and performance improvements to record-oriented processors
> -
>
> Key: NIFI-4717
> URL: https://issues.apache.org/jira/browse/NIFI-4717
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.5.0
>
>
> While testing some corner cases, I have run into a few minor issues with some 
> of the record-oriented processors/libraries:
> ConvertRecord performs very poorly if the writer is the JSON RecordSetWriter 
> and is configured to write the schema using the avro.schema attribute.
> If QueryRecord fails to parse data properly with the configured reader, it 
> may roll back the session instead of routing to failure, leading to the 
> FlowFile being stuck on the queue. This includes an error message indicating 
> that the FlowFile has an active callback or input stream that hasn't been closed.
> QueryRecord fails if referencing a field that is of UNION type.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4717) Minor bugs and performance improvements to record-oriented processors

2017-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306350#comment-16306350
 ] 

ASF GitHub Bot commented on NIFI-4717:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2359


> Minor bugs and performance improvements to record-oriented processors
> -
>
> Key: NIFI-4717
> URL: https://issues.apache.org/jira/browse/NIFI-4717
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>
> While testing some corner cases, I have run into a few minor issues with some 
> of the record-oriented processors/libraries:
> ConvertRecord performs very poorly if the writer is the JSON RecordSetWriter 
> and is configured to write the schema using the avro.schema attribute.
> If QueryRecord fails to parse data properly with the configured reader, it 
> may roll back the session instead of routing to failure, leading to the 
> FlowFile being stuck on the queue. This includes an error message indicating 
> that the FlowFile has an active callback or input stream that hasn't been closed.
> QueryRecord fails if referencing a field that is of UNION type.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4717) Minor bugs and performance improvements to record-oriented processors

2017-12-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306348#comment-16306348
 ] 

ASF subversion and git services commented on NIFI-4717:
---

Commit c91d99884a3263d88908c97a1a48ca6178ea7379 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=c91d998 ]

NIFI-4717: Several minor bug fixes and performance improvements around 
record-oriented processors

Signed-off-by: Matthew Burgess 

This closes #2359


> Minor bugs and performance improvements to record-oriented processors
> -
>
> Key: NIFI-4717
> URL: https://issues.apache.org/jira/browse/NIFI-4717
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>
> While testing some corner cases, I have run into a few minor issues with some 
> of the record-oriented processors/libraries:
> ConvertRecord performs very poorly if the writer is the JSON RecordSetWriter 
> and is configured to write the schema using the avro.schema attribute.
> If QueryRecord fails to parse data properly with the configured reader, it 
> may roll back the session instead of routing to failure, leading to the 
> FlowFile being stuck on the queue. This includes an error message indicating 
> that the FlowFile has an active callback or input stream that hasn't been closed.
> QueryRecord fails if referencing a field that is of UNION type.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2359: NIFI-4717: [More Testing Needed] Several minor bug ...

2017-12-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2359


---


[jira] [Commented] (NIFI-4717) Minor bugs and performance improvements to record-oriented processors

2017-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306347#comment-16306347
 ] 

ASF GitHub Bot commented on NIFI-4717:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2359
  
+1 LGTM. Ran a full build with contrib-check and unit tests, and tried 
UpdateRecord and QueryRecord with various schemas, Readers/Writers, and 
with/without Expression Language. I didn't do any explicit performance testing 
(e.g., metrics), but it did seem to perform faster. Thanks for the 
improvements! Merging to master


> Minor bugs and performance improvements to record-oriented processors
> -
>
> Key: NIFI-4717
> URL: https://issues.apache.org/jira/browse/NIFI-4717
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>
> While testing some corner cases, I have run into a few minor issues with some 
> of the record-oriented processors/libraries:
> ConvertRecord performs very poorly if the writer is the JSON RecordSetWriter 
> and is configured to write the schema using the avro.schema attribute.
> If QueryRecord fails to parse data properly with the configured reader, it 
> may roll back the session instead of routing to failure, leading to the 
> FlowFile being stuck on the queue. This includes an error message indicating 
> that the FlowFile has an active callback or input stream that hasn't been closed.
> QueryRecord fails if referencing a field that is of UNION type.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2359: NIFI-4717: [More Testing Needed] Several minor bug fixes a...

2017-12-29 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2359
  
+1 LGTM. Ran a full build with contrib-check and unit tests, and tried 
UpdateRecord and QueryRecord with various schemas, Readers/Writers, and 
with/without Expression Language. I didn't do any explicit performance testing 
(e.g., metrics), but it did seem to perform faster. Thanks for the 
improvements! Merging to master


---


[jira] [Commented] (NIFI-4717) Minor bugs and performance improvements to record-oriented processors

2017-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306325#comment-16306325
 ] 

ASF GitHub Bot commented on NIFI-4717:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2359
  
Reviewing...


> Minor bugs and performance improvements to record-oriented processors
> -
>
> Key: NIFI-4717
> URL: https://issues.apache.org/jira/browse/NIFI-4717
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>
> While testing some corner cases, I have run into a few minor issues with some 
> of the record-oriented processors/libraries:
> ConvertRecord performs very poorly if the writer is the JSON RecordSetWriter 
> and is configured to write the schema using the avro.schema attribute.
> If QueryRecord fails to parse data properly with the configured reader, it 
> may roll back the session instead of routing to failure, leading to the 
> FlowFile being stuck on the queue. This includes an error message indicating 
> that the FlowFile has an active callback or input stream that hasn't been closed.
> QueryRecord fails if referencing a field that is of UNION type.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2359: NIFI-4717: [More Testing Needed] Several minor bug fixes a...

2017-12-29 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2359
  
Reviewing...


---


[jira] [Commented] (NIFI-3538) Add DeleteHBase processor(s)

2017-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306185#comment-16306185
 ] 

ASF GitHub Bot commented on NIFI-3538:
--

Github user mgaido91 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2294#discussion_r159047973
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/DeleteHBaseRow.java
 ---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.hbase;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+@Tags({ "delete", "hbase" })
+@CapabilityDescription(
+"Delete HBase records individually or in batches. The input can be 
a single row ID in the body, one ID per line, " +
+"row IDs separated by commas or a combination of the two. ")
+public class DeleteHBaseRow extends AbstractDeleteHBase {
+static final AllowableValue ROW_ID_BODY = new AllowableValue("body", 
"FlowFile content", "Get the row key(s) from the flowfile content.");
+static final AllowableValue ROW_ID_ATTR = new AllowableValue("attr", 
"FlowFile attributes", "Get the row key from an expression language 
statement.");
+
+static final PropertyDescriptor ROW_ID_LOCATION = new 
PropertyDescriptor.Builder()
+.name("delete-hb-row-id-location")
+.displayName("Row ID Location")
+.description("The location of the row ID to use for building 
the delete. Can be from the content or an expression language statement.")
+.required(true)
+.defaultValue(ROW_ID_BODY.getValue())
+.allowableValues(ROW_ID_BODY, ROW_ID_ATTR)
+.addValidator(Validator.VALID)
+.build();
+
+static final PropertyDescriptor FLOWFILE_FETCH_COUNT = new 
PropertyDescriptor.Builder()
+.name("delete-hb-flowfile-fetch-count")
+.displayName("Flowfile Fetch Count")
+.description("The number of flowfiles to fetch per run.")
+.required(true)
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.defaultValue("5")
+.expressionLanguageSupported(false)
+.build();
+
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("delete-hb-row-ff-count")
+.displayName("Batch Size")
+.description("The number of deletes to send per batch.")
+.required(true)
+.defaultValue("50")
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.expressionLanguageSupported(false)
+.build();
+
+static final PropertyDescriptor KEY_SEPARATOR = new 
PropertyDescriptor.Builder()
+.name("delete-hb-separator")
+.displayName("Delete Row Key Separator")
+.description("The separator character(s) that separate 
multiple row keys " +
+"when multiple row keys are provided in the flowfile 
body")
+.required(true)
+.defaultValue(",")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.expressionLanguageSupported(false)
+
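
The capability description in the diff above allows row keys to arrive one per 
line, separated by the configured separator, or a mix of both. A sketch of 
that input contract (illustrative parsing only, not the processor's 
implementation; the separator default matches KEY_SEPARATOR above):

```
// Illustrative only: normalize line-separated, separator-separated, or
// mixed row keys into a single list.
import java.util.regex.Pattern

String separator = ","   // KEY_SEPARATOR default from the diff above
String body = "row-1,row-2\nrow-3"
List<String> rowKeys = body.readLines()
        .collectMany { String line -> line.split(Pattern.quote(separator)) as List }
        .collect { it.trim() }
        .findAll { it }
assert rowKeys == ["row-1", "row-2", "row-3"]
```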

[GitHub] nifi pull request #2294: NIFI-3538 Added DeleteHBaseRow

2017-12-29 Thread mgaido91
Github user mgaido91 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2294#discussion_r159047973
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/DeleteHBaseRow.java
 ---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.hbase;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+@Tags({ "delete", "hbase" })
+@CapabilityDescription(
+"Delete HBase records individually or in batches. The input can be 
a single row ID in the body, one ID per line, " +
+"row IDs separated by commas or a combination of the two. ")
+public class DeleteHBaseRow extends AbstractDeleteHBase {
+static final AllowableValue ROW_ID_BODY = new AllowableValue("body", 
"FlowFile content", "Get the row key(s) from the flowfile content.");
+static final AllowableValue ROW_ID_ATTR = new AllowableValue("attr", 
"FlowFile attributes", "Get the row key from an expression language 
statement.");
+
+static final PropertyDescriptor ROW_ID_LOCATION = new 
PropertyDescriptor.Builder()
+.name("delete-hb-row-id-location")
+.displayName("Row ID Location")
+.description("The location of the row ID to use for building 
the delete. Can be from the content or an expression language statement.")
+.required(true)
+.defaultValue(ROW_ID_BODY.getValue())
+.allowableValues(ROW_ID_BODY, ROW_ID_ATTR)
+.addValidator(Validator.VALID)
+.build();
+
+static final PropertyDescriptor FLOWFILE_FETCH_COUNT = new 
PropertyDescriptor.Builder()
+.name("delete-hb-flowfile-fetch-count")
+.displayName("Flowfile Fetch Count")
+.description("The number of flowfiles to fetch per run.")
+.required(true)
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.defaultValue("5")
+.expressionLanguageSupported(false)
+.build();
+
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("delete-hb-row-ff-count")
+.displayName("Batch Size")
+.description("The number of deletes to send per batch.")
+.required(true)
+.defaultValue("50")
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.expressionLanguageSupported(false)
+.build();
+
+static final PropertyDescriptor KEY_SEPARATOR = new 
PropertyDescriptor.Builder()
+.name("delete-hb-separator")
+.displayName("Delete Row Key Separator")
+.description("The separator character(s) that separate 
multiple row keys " +
+"when multiple row keys are provided in the flowfile 
body")
+.required(true)
+.defaultValue(",")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.expressionLanguageSupported(false)
+.build();
+
+@Override
+protected List getSupportedPropertyDescriptors() {
+final List properties = 
super.getSupportedPropertyDescriptors();
+properties.add(ROW_ID_LOCATION);
+

[jira] [Commented] (NIFI-3538) Add DeleteHBase processor(s)

2017-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306183#comment-16306183
 ] 

ASF GitHub Bot commented on NIFI-3538:
--

Github user mgaido91 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2294#discussion_r159047906
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/DeleteHBaseRow.java
 ---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.hbase;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+@Tags({ "delete", "hbase" })
+@CapabilityDescription(
+"Delete HBase records individually or in batches. The input can be 
a single row ID in the body, one ID per line, " +
+"row IDs separated by commas or a combination of the two. ")
+public class DeleteHBaseRow extends AbstractDeleteHBase {
+static final AllowableValue ROW_ID_BODY = new AllowableValue("body", 
"FlowFile content", "Get the row key(s) from the flowfile content.");
+static final AllowableValue ROW_ID_ATTR = new AllowableValue("attr", 
"FlowFile attributes", "Get the row key from an expression language 
statement.");
+
+static final PropertyDescriptor ROW_ID_LOCATION = new 
PropertyDescriptor.Builder()
+.name("delete-hb-row-id-location")
+.displayName("Row ID Location")
+.description("The location of the row ID to use for building 
the delete. Can be from the content or an expression language statement.")
+.required(true)
+.defaultValue(ROW_ID_BODY.getValue())
+.allowableValues(ROW_ID_BODY, ROW_ID_ATTR)
+.addValidator(Validator.VALID)
+.build();
+
+static final PropertyDescriptor FLOWFILE_FETCH_COUNT = new 
PropertyDescriptor.Builder()
+.name("delete-hb-flowfile-fetch-count")
+.displayName("Flowfile Fetch Count")
+.description("The number of flowfiles to fetch per run.")
+.required(true)
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.defaultValue("5")
+.expressionLanguageSupported(false)
+.build();
+
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("delete-hb-row-ff-count")
+.displayName("Batch Size")
+.description("The number of deletes to send per batch.")
+.required(true)
+.defaultValue("50")
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.expressionLanguageSupported(false)
+.build();
+
+static final PropertyDescriptor KEY_SEPARATOR = new 
PropertyDescriptor.Builder()
+.name("delete-hb-separator")
+.displayName("Delete Row Key Separator")
+.description("The separator character(s) that separate 
multiple row keys " +
+"when multiple row keys are provided in the flowfile 
body")
+.required(true)
+.defaultValue(",")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.expressionLanguageSupported(false)
+

[GitHub] nifi pull request #2294: NIFI-3538 Added DeleteHBaseRow

2017-12-29 Thread mgaido91
Github user mgaido91 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2294#discussion_r159047906
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/DeleteHBaseRow.java
 ---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.hbase;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+@Tags({ "delete", "hbase" })
+@CapabilityDescription(
+"Delete HBase records individually or in batches. The input can be 
a single row ID in the body, one ID per line, " +
+"row IDs separated by commas or a combination of the two. ")
+public class DeleteHBaseRow extends AbstractDeleteHBase {
+static final AllowableValue ROW_ID_BODY = new AllowableValue("body", 
"FlowFile content", "Get the row key(s) from the flowfile content.");
+static final AllowableValue ROW_ID_ATTR = new AllowableValue("attr", 
"FlowFile attributes", "Get the row key from an expression language 
statement.");
+
+static final PropertyDescriptor ROW_ID_LOCATION = new 
PropertyDescriptor.Builder()
+.name("delete-hb-row-id-location")
+.displayName("Row ID Location")
+.description("The location of the row ID to use for building 
the delete. Can be from the content or an expression language statement.")
+.required(true)
+.defaultValue(ROW_ID_BODY.getValue())
+.allowableValues(ROW_ID_BODY, ROW_ID_ATTR)
+.addValidator(Validator.VALID)
+.build();
+
+static final PropertyDescriptor FLOWFILE_FETCH_COUNT = new 
PropertyDescriptor.Builder()
+.name("delete-hb-flowfile-fetch-count")
+.displayName("Flowfile Fetch Count")
+.description("The number of flowfiles to fetch per run.")
+.required(true)
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.defaultValue("5")
+.expressionLanguageSupported(false)
+.build();
+
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("delete-hb-row-ff-count")
+.displayName("Batch Size")
+.description("The number of deletes to send per batch.")
+.required(true)
+.defaultValue("50")
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.expressionLanguageSupported(false)
+.build();
+
+static final PropertyDescriptor KEY_SEPARATOR = new 
PropertyDescriptor.Builder()
+.name("delete-hb-separator")
+.displayName("Delete Row Key Separator")
+.description("The separator character(s) that separate 
multiple row keys " +
+"when multiple row keys are provided in the flowfile 
body")
+.required(true)
+.defaultValue(",")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.expressionLanguageSupported(false)
+.build();
+
+@Override
+protected List getSupportedPropertyDescriptors() {
+final List properties = 
super.getSupportedPropertyDescriptors();
+properties.add(ROW_ID_LOCATION);
+

[jira] [Commented] (NIFI-3538) Add DeleteHBase processor(s)

2017-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306181#comment-16306181
 ] 

ASF GitHub Bot commented on NIFI-3538:
--

Github user mgaido91 commented on the issue:

https://github.com/apache/nifi/pull/2294
  
@MikeThomsen there are validation errors due to unused imports. Could you 
please fix them?
```
[WARNING] 
src/main/java/org/apache/nifi/hbase/DeleteHBaseRow.java:[31,8] (imports) 
UnusedImports: Unused import - java.io.IOException.
[WARNING] 
src/main/java/org/apache/nifi/hbase/DeleteHBaseRow.java:[33,8] (imports) 
UnusedImports: Unused import - java.util.HashMap.
[WARNING] 
src/main/java/org/apache/nifi/hbase/DeleteHBaseRow.java:[35,8] (imports) 
UnusedImports: Unused import - java.util.Map.
```


> Add DeleteHBase processor(s)
> 
>
> Key: NIFI-3538
> URL: https://issues.apache.org/jira/browse/NIFI-3538
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Mike Thomsen
>
> NiFi currently has processors for storing and retrieving cells/rows in HBase, 
> but there is no mechanism for deleting records and/or tables.
> I'm not sure if a single DeleteHBase processor could accomplish both, that 
> can be discussed under this Jira (and can be split out if necessary).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2294: NIFI-3538 Added DeleteHBaseRow

2017-12-29 Thread mgaido91
Github user mgaido91 commented on the issue:

https://github.com/apache/nifi/pull/2294
  
@MikeThomsen there are validation errors due to unused imports. Could you 
please fix them?
```
[WARNING] 
src/main/java/org/apache/nifi/hbase/DeleteHBaseRow.java:[31,8] (imports) 
UnusedImports: Unused import - java.io.IOException.
[WARNING] 
src/main/java/org/apache/nifi/hbase/DeleteHBaseRow.java:[33,8] (imports) 
UnusedImports: Unused import - java.util.HashMap.
[WARNING] 
src/main/java/org/apache/nifi/hbase/DeleteHBaseRow.java:[35,8] (imports) 
UnusedImports: Unused import - java.util.Map.
```


---


[GitHub] nifi pull request #2363: NIFI-4726: Avoid concurrency issues in JoltTransfor...

2017-12-29 Thread mgaido91
GitHub user mgaido91 opened a pull request:

https://github.com/apache/nifi/pull/2363

NIFI-4726: Avoid concurrency issues in JoltTransformJSON

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? NA 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly? NA
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly? NA
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties? NA

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered? NA

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mgaido91/nifi NIFI-4726

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2363.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2363


commit 5f938708b797f4894005ab16300585792d6f1da0
Author: mark91 
Date:   2017-12-28T16:14:32Z

NIFI-4726: Avoid concurrency issues in JoltTransformJSON




---


[jira] [Commented] (NIFI-4726) Concurrency issue with JoltTransformJson

2017-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16306121#comment-16306121
 ] 

ASF GitHub Bot commented on NIFI-4726:
--

GitHub user mgaido91 opened a pull request:

https://github.com/apache/nifi/pull/2363

NIFI-4726: Avoid concurrency issues in JoltTransformJSON

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? NA 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly? NA
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly? NA
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties? NA

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered? NA

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mgaido91/nifi NIFI-4726

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2363.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2363


commit 5f938708b797f4894005ab16300585792d6f1da0
Author: mark91 
Date:   2017-12-28T16:14:32Z

NIFI-4726: Avoid concurrency issues in JoltTransformJSON




> Concurrency issue with JoltTransformJson
> 
>
> Key: NIFI-4726
> URL: https://issues.apache.org/jira/browse/NIFI-4726
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Marco Gaido
>
> JoltTransformJson uses Jackson under the hood to parse JSON. Under heavy 
> multithreaded workloads, Jackson can have concurrency problems, as also 
> described in this Stack Overflow thread: 
> https://stackoverflow.com/questions/17924865/jsonmappingexception-was-java-lang-arrayindexoutofboundsexception
> This can cause all parsing to fail when the problem occurs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4726) Concurrency issue with JoltTransformJson

2017-12-29 Thread Marco Gaido (JIRA)
Marco Gaido created NIFI-4726:
-

 Summary: Concurrency issue with JoltTransformJson
 Key: NIFI-4726
 URL: https://issues.apache.org/jira/browse/NIFI-4726
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Marco Gaido


JoltTransformJson uses Jackson under the hood to parse JSON. Under heavy 
multithreaded workloads, Jackson can have concurrency problems, as also 
described in this Stack Overflow thread: 
https://stackoverflow.com/questions/17924865/jsonmappingexception-was-java-lang-arrayindexoutofboundsexception
This can cause all parsing to fail when the problem occurs.
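
One common mitigation for shared-parser races, sketched here as an assumption 
rather than a description of what PR #2363 actually does, is to confine the 
parser instance to each thread:

```
// Thread-confinement sketch; not necessarily the fix applied in the PR.
// The Jackson coordinates/version are an assumption for the example.
@Grab('com.fasterxml.jackson.core:jackson-databind:2.9.4')
import com.fasterxml.jackson.databind.ObjectMapper

ThreadLocal<ObjectMapper> mappers = ThreadLocal.withInitial { new ObjectMapper() }

List<Thread> threads = (1..4).collect { int n ->
    Thread.start {
        // Each thread parses with its own ObjectMapper instance.
        Map parsed = mappers.get().readValue("{\"n\": ${n}}".toString(), Map)
        assert parsed.n == n
    }
}
threads*.join()
```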



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)