[jira] [Commented] (NIFI-4936) NiFi parent pom dependency management forcing versions to align defeating classloader isolation

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387297#comment-16387297
 ] 

ASF GitHub Bot commented on NIFI-4936:
--

GitHub user joewitt opened a pull request:

https://github.com/apache/nifi/pull/2512

NIFI-4936 pushed down version declarations to lowest appropriate level

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joewitt/incubator-nifi NIFI-4936

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2512.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2512


commit 2029a04d75b8fc14fef5e6d2f2c49fc56d70cde2
Author: joewitt 
Date:   2018-03-06T04:53:36Z

NIFI-4936 pushed down version declarations to lowest appropriate level




> NiFi parent pom dependency management forcing versions to align defeating 
> classloader isolation
> ---
>
> Key: NIFI-4936
> URL: https://issues.apache.org/jira/browse/NIFI-4936
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions, Tools and Build
>Affects Versions: 1.1.0, 1.2.0, 1.0.1, 1.3.0, 1.4.0, 1.5.0
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: NIFI-4936.patch, build-fixing, 
> old-vs-new-dependencies.txt
>
>
> The top-level pom in NiFi has a massive dependency management section. It 
> was initially used to help enforce consistent dependency usage across NiFi, 
> but it can also defeat the purpose of the classloader isolation offered by 
> NARs. We need to push dependency version declarations down to the NAR level 
> where appropriate.
> There have been reported defects caused by NiFi using much newer (or 
> sometimes older) versions of dependencies than intended because of this 
> dependency management model. By pushing declarations down to the proper 
> scope, each NAR can use the specific versions of the components it needs, 
> and we will stop introducing issues by forcing a different version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4936) NiFi parent pom dependency management forcing versions to align defeating classloader isolation

2018-03-05 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4936:
--
Status: Patch Available  (was: Open)

> NiFi parent pom dependency management forcing versions to align defeating 
> classloader isolation
> ---
>
> Key: NIFI-4936
> URL: https://issues.apache.org/jira/browse/NIFI-4936
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions, Tools and Build
>Affects Versions: 1.5.0, 1.4.0, 1.3.0, 1.0.1, 1.2.0, 1.1.0
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: NIFI-4936.patch, build-fixing, 
> old-vs-new-dependencies.txt
>
>
> The top-level pom in NiFi has a massive dependency management section. It 
> was initially used to help enforce consistent dependency usage across NiFi, 
> but it can also defeat the purpose of the classloader isolation offered by 
> NARs. We need to push dependency version declarations down to the NAR level 
> where appropriate.
> There have been reported defects caused by NiFi using much newer (or 
> sometimes older) versions of dependencies than intended because of this 
> dependency management model. By pushing declarations down to the proper 
> scope, each NAR can use the specific versions of the components it needs, 
> and we will stop introducing issues by forcing a different version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2512: NIFI-4936 pushed down version declarations to lowes...

2018-03-05 Thread joewitt
GitHub user joewitt opened a pull request:

https://github.com/apache/nifi/pull/2512

NIFI-4936 pushed down version declarations to lowest appropriate level

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joewitt/incubator-nifi NIFI-4936

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2512.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2512


commit 2029a04d75b8fc14fef5e6d2f2c49fc56d70cde2
Author: joewitt 
Date:   2018-03-06T04:53:36Z

NIFI-4936 pushed down version declarations to lowest appropriate level




---


[jira] [Updated] (NIFI-4936) NiFi parent pom dependency management forcing versions to align defeating classloader isolation

2018-03-05 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4936:
--
Attachment: NIFI-4936.patch

> NiFi parent pom dependency management forcing versions to align defeating 
> classloader isolation
> ---
>
> Key: NIFI-4936
> URL: https://issues.apache.org/jira/browse/NIFI-4936
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions, Tools and Build
>Affects Versions: 1.1.0, 1.2.0, 1.0.1, 1.3.0, 1.4.0, 1.5.0
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: NIFI-4936.patch, build-fixing, 
> old-vs-new-dependencies.txt
>
>
> The top-level pom in NiFi has a massive dependency management section. It 
> was initially used to help enforce consistent dependency usage across NiFi, 
> but it can also defeat the purpose of the classloader isolation offered by 
> NARs. We need to push dependency version declarations down to the NAR level 
> where appropriate.
> There have been reported defects caused by NiFi using much newer (or 
> sometimes older) versions of dependencies than intended because of this 
> dependency management model. By pushing declarations down to the proper 
> scope, each NAR can use the specific versions of the components it needs, 
> and we will stop introducing issues by forcing a different version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4936) NiFi parent pom dependency management forcing versions to align defeating classloader isolation

2018-03-05 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4936:
--
Attachment: old-vs-new-dependencies.txt

> NiFi parent pom dependency management forcing versions to align defeating 
> classloader isolation
> ---
>
> Key: NIFI-4936
> URL: https://issues.apache.org/jira/browse/NIFI-4936
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions, Tools and Build
>Affects Versions: 1.1.0, 1.2.0, 1.0.1, 1.3.0, 1.4.0, 1.5.0
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: NIFI-4936.patch, build-fixing, 
> old-vs-new-dependencies.txt
>
>
> The top-level pom in NiFi has a massive dependency management section. It 
> was initially used to help enforce consistent dependency usage across NiFi, 
> but it can also defeat the purpose of the classloader isolation offered by 
> NARs. We need to push dependency version declarations down to the NAR level 
> where appropriate.
> There have been reported defects caused by NiFi using much newer (or 
> sometimes older) versions of dependencies than intended because of this 
> dependency management model. By pushing declarations down to the proper 
> scope, each NAR can use the specific versions of the components it needs, 
> and we will stop introducing issues by forcing a different version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4936) NiFi parent pom dependency management forcing versions to align defeating classloader isolation

2018-03-05 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4936:
--
Attachment: build-fixing

> NiFi parent pom dependency management forcing versions to align defeating 
> classloader isolation
> ---
>
> Key: NIFI-4936
> URL: https://issues.apache.org/jira/browse/NIFI-4936
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions, Tools and Build
>Affects Versions: 1.1.0, 1.2.0, 1.0.1, 1.3.0, 1.4.0, 1.5.0
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: NIFI-4936.patch, build-fixing, 
> old-vs-new-dependencies.txt
>
>
> The top-level pom in NiFi has a massive dependency management section. It 
> was initially used to help enforce consistent dependency usage across NiFi, 
> but it can also defeat the purpose of the classloader isolation offered by 
> NARs. We need to push dependency version declarations down to the NAR level 
> where appropriate.
> There have been reported defects caused by NiFi using much newer (or 
> sometimes older) versions of dependencies than intended because of this 
> dependency management model. By pushing declarations down to the proper 
> scope, each NAR can use the specific versions of the components it needs, 
> and we will stop introducing issues by forcing a different version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-4936) NiFi parent pom dependency management forcing versions to align defeating classloader isolation

2018-03-05 Thread Joseph Witt (JIRA)
Joseph Witt created NIFI-4936:
-

 Summary: NiFi parent pom dependency management forcing versions to 
align defeating classloader isolation
 Key: NIFI-4936
 URL: https://issues.apache.org/jira/browse/NIFI-4936
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework, Extensions, Tools and Build
Affects Versions: 1.5.0, 1.4.0, 1.3.0, 1.0.1, 1.2.0, 1.1.0
Reporter: Joseph Witt
Assignee: Joseph Witt
 Fix For: 1.6.0


The top-level pom in NiFi has a massive dependency management section. It was 
initially used to help enforce consistent dependency usage across NiFi, but it 
can also defeat the purpose of the classloader isolation offered by NARs. We 
need to push dependency version declarations down to the NAR level where 
appropriate.

There have been reported defects caused by NiFi using much newer (or sometimes 
older) versions of dependencies than intended because of this dependency 
management model. By pushing declarations down to the proper scope, each NAR 
can use the specific versions of the components it needs, and we will stop 
introducing issues by forcing a different version.
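
For context, the change is mechanical: version declarations that previously 
lived in the root pom's dependencyManagement section move into the poms of the 
individual bundles that actually use them. A minimal illustrative sketch (the 
commons-lang3 coordinates are an arbitrary example, not a claim about what the 
PR touched):

    Before -- root nifi/pom.xml:

        <dependencyManagement>
            <dependencies>
                <dependency>
                    <groupId>org.apache.commons</groupId>
                    <artifactId>commons-lang3</artifactId>
                    <version>3.7</version>
                </dependency>
            </dependencies>
        </dependencyManagement>

    After -- the pom of the individual bundle that needs the library:

        <dependencies>
            <dependency>
                <groupId>org.apache.commons</groupId>
                <artifactId>commons-lang3</artifactId>
                <version>3.7</version>
            </dependency>
        </dependencies>

Each NAR's classloader then carries whatever version its own bundle declares, 
instead of the single version forced by the parent.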



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #272: MINIFICPP-405: RPG bind to local interfac...

2018-03-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/272


---


[jira] [Comment Edited] (NIFI-3731) Excessive Curator messages

2018-03-05 Thread Jordan Zimmerman (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387179#comment-16387179
 ] 

Jordan Zimmerman edited comment on NIFI-3731 at 3/6/18 2:47 AM:


Please see Curator TechNote 8 
[https://cwiki.apache.org/confluence/display/CURATOR/TN8] - you can set a CLI 
switch so that the first connection error is logged as {{ERROR}} but subsequent 
connection errors are logged as {{DEBUG}} until the connection is repaired.



E.g. {{-Dcurator-log-only-first-connection-issue-as-error-level=true}}


was (Author: randgalt):
Please see Curator TechNote 8 
[https://cwiki.apache.org/confluence/display/CURATOR/TN8] - you can set a CLI 
switch so that the first connection error is logged as {{ERROR}} but subsequent 
connection errors are logged as {{DEBUG}} until the connection is repaired.

> Excessive Curator messages
> --
>
> Key: NIFI-3731
> URL: https://issues.apache.org/jira/browse/NIFI-3731
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.1
>Reporter: Mark Bean
>Priority: Minor
>
> Occasionally, while testing scenarios on a 3-node cluster, a node will be 
> removed from the cluster. Sometimes the log files begin filling with an 
> inordinate number of repeated messages from Curator. I'm still trying to 
> track down what instigates this, but most likely it is when one of the 3 ZK 
> servers becomes unavailable. Is there a way to suppress or reduce the 
> frequency of the following messages? Currently, it is repeated more than 
> once per millisecond.
> 2017-04-19 16:25:09,824 ERROR [Curator-Framework-0] 
> o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
> org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = 
> ConnectionLoss 
> at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFramworkImpl.java:838)
>  [curator-framework-2.11.0.jar:na] 
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:809)
>  [curator-framework-2.11.0.jar:na]
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:64)
>  [curator-framework-2.11.0.jar:na]
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.$4.call(CuratorFrameworkImpl.java:267)
>  [curator-framework-2.11.0.jar:na]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_121] 
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_121]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [na:1.8.0_121]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_121]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3731) Excessive Curator messages

2018-03-05 Thread Jordan Zimmerman (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387179#comment-16387179
 ] 

Jordan Zimmerman commented on NIFI-3731:


Please see Curator TechNote 8 
[https://cwiki.apache.org/confluence/display/CURATOR/TN8] - you can set a CLI 
switch so that the first connection error is logged as {{ERROR}} but subsequent 
connection errors are logged as {{DEBUG}} until the connection is repaired.
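
For NiFi specifically, one way to pass that switch is as an extra JVM argument 
in conf/bootstrap.conf (a sketch; the argument index is arbitrary and only 
needs to be an unused java.arg slot):

    # conf/bootstrap.conf
    java.arg.20=-Dcurator-log-only-first-connection-issue-as-error-level=true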

> Excessive Curator messages
> --
>
> Key: NIFI-3731
> URL: https://issues.apache.org/jira/browse/NIFI-3731
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.1
>Reporter: Mark Bean
>Priority: Minor
>
> Occasionally, while testing scenarios on a 3-node cluster, a node will be 
> removed from the cluster. Sometimes the log files begin filling with an 
> inordinate number of repeated messages from Curator. I'm still trying to 
> track down what instigates this, but most likely it is when one of the 3 ZK 
> servers becomes unavailable. Is there a way to suppress or reduce the 
> frequency of the following messages? Currently, it is repeated more than 
> once per millisecond.
> 2017-04-19 16:25:09,824 ERROR [Curator-Framework-0] 
> o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
> org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = 
> ConnectionLoss 
> at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFramworkImpl.java:838)
>  [curator-framework-2.11.0.jar:na] 
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:809)
>  [curator-framework-2.11.0.jar:na]
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:64)
>  [curator-framework-2.11.0.jar:na]
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.$4.call(CuratorFrameworkImpl.java:267)
>  [curator-framework-2.11.0.jar:na]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_121] 
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_121]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [na:1.8.0_121]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_121]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4932) Enable S2S work behind a Reverse Proxy

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387004#comment-16387004
 ] 

ASF GitHub Bot commented on NIFI-4932:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2510
  
@alopresto Thank you! The RAT check failed with SVG files for docs. I 
didn't run contrib check after I added docs, my bad. I'll update it.


> Enable S2S work behind a Reverse Proxy
> --
>
> Key: NIFI-4932
> URL: https://issues.apache.org/jira/browse/NIFI-4932
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>Priority: Major
>
> Currently, the NiFi UI and REST API work through a reverse proxy, but NiFi 
> Site-to-Site does not. The core issue is how a NiFi node introduces remote 
> peers to Site-to-Site clients. NiFi should provide more flexible 
> configuration so that users can define remote Site-to-Site endpoints that 
> work for both routes: through a reverse proxy and directly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2510: NIFI-4932: Enable S2S work behind a Reverse Proxy

2018-03-05 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2510
  
@alopresto Thank you! The RAT check failed with SVG files for docs. I 
didn't run contrib check after I added docs, my bad. I'll update it.


---


[jira] [Commented] (NIFIREG-100) FDS theme SASS mixin

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386945#comment-16386945
 ] 

ASF GitHub Bot commented on NIFIREG-100:


Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/101


> FDS theme SASS mixin
> 
>
> Key: NIFIREG-100
> URL: https://issues.apache.org/jira/browse/NIFIREG-100
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>Priority: Major
>
> As a developer I want the ability to programmatically change the theme of the 
> FDS NgModule.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #101: [NIFIREG-100] create and leverage FDS SASS ...

2018-03-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/101


---


[jira] [Commented] (NIFIREG-100) FDS theme SASS mixin

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386902#comment-16386902
 ] 

ASF GitHub Bot commented on NIFIREG-100:


Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/101
  
Reviewing...


> FDS theme SASS mixin
> 
>
> Key: NIFIREG-100
> URL: https://issues.apache.org/jira/browse/NIFIREG-100
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>Priority: Major
>
> As a developer I want the ability to programmatically change the theme of the 
> FDS NgModule.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry issue #101: [NIFIREG-100] create and leverage FDS SASS theming...

2018-03-05 Thread kevdoran
Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/101
  
Reviewing...


---


[jira] [Created] (NIFI-4935) Support Schema Branches when using HWX Schema Registry

2018-03-05 Thread Bryan Bende (JIRA)
Bryan Bende created NIFI-4935:
-

 Summary: Support Schema Branches when using HWX Schema Registry
 Key: NIFI-4935
 URL: https://issues.apache.org/jira/browse/NIFI-4935
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Bryan Bende
Assignee: Bryan Bende


The latest version of Hortonworks Schema Registry now supports a 
forking/branching concept. This means that when retrieving a schema by name, it 
may be desirable to also specify a branch name, as the default would retrieve 
from the "master" branch. We'll need to pass down an optional branch name to 
the SchemaRegistry implementations.
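
A hedged sketch of how an optional branch might be threaded through a registry 
lookup; the interface and method names below are hypothetical and do not 
reflect the actual NiFi SchemaRegistry API:

    import java.util.Optional;

    // Hypothetical client-side sketch only -- not the NiFi SchemaRegistry interface.
    public interface BranchAwareSchemaLookup {

        /**
         * Retrieves the latest schema text for the given name. When branchName is
         * empty, implementations would fall back to the registry's default
         * ("master") branch.
         */
        String retrieveSchemaText(String schemaName, Optional<String> branchName);
    }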



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (NIFIREG-149) NiFi Registry UI never directs user to login page for LDAP

2018-03-05 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran closed NIFIREG-149.
---

> NiFi Registry UI never directs user to login page for LDAP
> --
>
> Key: NIFIREG-149
> URL: https://issues.apache.org/jira/browse/NIFIREG-149
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Kevin Doran
>Assignee: Scott Aslan
>Priority: Blocker
>
> This does not affect any released version, but [the current master (as of 
> 2018-03-05)|https://github.com/apache/nifi-registry/commit/24039e63dbb33d61b88235532f059bec8d3a0617]
>  exhibits this behavior:
> When using  LDAP login, the auth-guard for the UI is not falling through to 
> the login page. It is getting stuck in an infinite loop attempting to use 
> Kerberos ticket exchange and certificates. 
> Steps to reproduce: Configure NiFi Registry with LDAP identity provider / 
> user group provider. Access any UI resource such as /nifi-registry or 
> /nifi-registry/login. You will be stuck in an infinite loop attempting to 
> check the current browser credentials (can view this in the developer tools 
> -> network tab)
> *Expected behavior:* It should try Kerberos, then certs, then present the 
> user with the user/pass Login page where I can enter LDAP credentials.
> I did not have a NiFi Registry JWT in my browser. This was tested with latest 
> Chrome for macOS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFIREG-149) NiFi Registry UI never directs user to login page for LDAP

2018-03-05 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran resolved NIFIREG-149.
-
Resolution: Fixed

> NiFi Registry UI never directs user to login page for LDAP
> --
>
> Key: NIFIREG-149
> URL: https://issues.apache.org/jira/browse/NIFIREG-149
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Kevin Doran
>Assignee: Scott Aslan
>Priority: Blocker
>
> This does not affect any released version, but [the current master (as of 
> 2018-03-05)|https://github.com/apache/nifi-registry/commit/24039e63dbb33d61b88235532f059bec8d3a0617]
>  exhibits this behavior:
> When using  LDAP login, the auth-guard for the UI is not falling through to 
> the login page. It is getting stuck in an infinite loop attempting to use 
> Kerberos ticket exchange and certificates. 
> Steps to reproduce: Configure NiFi Registry with LDAP identity provider / 
> user group provider. Access any UI resource such as /nifi-registry or 
> /nifi-registry/login. You will be stuck in an infinite loop attempting to 
> check the current browser credentials (can view this in the developer tools 
> -> network tab)
> *Expected behavior:* It should try Kerberos, then certs, then present the 
> user with the user/pass Login page where I can enter LDAP credentials.
> I did not have a NiFi Registry JWT in my browser. This was tested with latest 
> Chrome for macOS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-4934) LDAP implementation documentation is inconsistent

2018-03-05 Thread Rob Smith (JIRA)
Rob Smith created NIFI-4934:
---

 Summary: LDAP implementation documentation is inconsistent
 Key: NIFI-4934
 URL: https://issues.apache.org/jira/browse/NIFI-4934
 Project: Apache NiFi
  Issue Type: Bug
  Components: Docker, Documentation & Website
Affects Versions: 1.5.0
Reporter: Rob Smith


The Docker LDAP command example contains

" -e AUTH=tls \"

but the documentation text says

"`AUTH` environment variable which is set to `ldap`"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-149) NiFi Registry UI never directs user to login page for LDAP

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386810#comment-16386810
 ] 

ASF GitHub Bot commented on NIFIREG-149:


Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/104


> NiFi Registry UI never directs user to login page for LDAP
> --
>
> Key: NIFIREG-149
> URL: https://issues.apache.org/jira/browse/NIFIREG-149
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Kevin Doran
>Assignee: Scott Aslan
>Priority: Blocker
>
> This does not affect any released version, but [the current master (as of 
> 2018-03-05)|https://github.com/apache/nifi-registry/commit/24039e63dbb33d61b88235532f059bec8d3a0617]
>  exhibits this behavior:
> When using  LDAP login, the auth-guard for the UI is not falling through to 
> the login page. It is getting stuck in an infinite loop attempting to use 
> Kerberos ticket exchange and certificates. 
> Steps to reproduce: Configure NiFi Registry with LDAP identity provider / 
> user group provider. Access any UI resource such as /nifi-registry or 
> /nifi-registry/login. You will be stuck in an infinite loop attempting to 
> check the current browser credentials (can view this in the developer tools 
> -> network tab)
> *Expected behavior:* It should try Kerberos, then certs, then present the 
> user with the user/pass Login page where I can enter LDAP credentials.
> I did not have a NiFi Registry JWT in my browser. This was tested with latest 
> Chrome for macOS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #104: [NIFIREG-149] directs user to login page fo...

2018-03-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/104


---


[jira] [Commented] (NIFI-4531) LDAP/AD Support

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386795#comment-16386795
 ] 

ASF GitHub Bot commented on NIFI-4531:
--

Github user RobSmithSeattle commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2372#discussion_r172335757
  
--- Diff: nifi-docker/dockerhub/README.md ---
@@ -74,6 +74,45 @@ Finally, this command makes use of a volume to provide 
certificates on the host
   -d \
   apache/nifi:latest
 
+### Standalone Instance, LDAP
+In this configuration, the user will need to provide certificates and the 
associated configuration information.  Optionally,
+if the LDAP provider of interest is operating in LDAPS or START_TLS modes, 
certificates will additionally be needed.
+Of particular note, is the `AUTH` environment variable which is set to 
`ldap`.  Additionally, the user must provide a
+DN as provided by the configured LDAP server in the 
`INITIAL_ADMIN_IDENTITY` environment variable. This value will be 
+used to seed the instance with an initial user with administrative 
privileges.  Finally, this command makes use of a 
+volume to provide certificates on the host system to the container 
instance.
+
+ For a minimal, connection to an LDAP server using SIMPLE 
authentication:
+
+docker run --name nifi \
+  -v /User/dreynolds/certs/localhost:/opt/certs \
+  -p 18443:8443 \
+  -e AUTH=tls \
--- End diff --

This looks like an error - the previous paragraph says 
   `AUTH` environment variable which is set to `ldap`.

What is the value supposed to be?


> LDAP/AD Support
> ---
>
> Key: NIFI-4531
> URL: https://issues.apache.org/jira/browse/NIFI-4531
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Docker
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
>Priority: Major
> Fix For: 1.5.0
>
>
> Provide configuration and documentation for setting up instances using LDAP/AD



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2372: NIFI-4531: LDAP Auth for Docker image

2018-03-05 Thread RobSmithSeattle
Github user RobSmithSeattle commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2372#discussion_r172335757
  
--- Diff: nifi-docker/dockerhub/README.md ---
@@ -74,6 +74,45 @@ Finally, this command makes use of a volume to provide 
certificates on the host
   -d \
   apache/nifi:latest
 
+### Standalone Instance, LDAP
+In this configuration, the user will need to provide certificates and the 
associated configuration information.  Optionally,
+if the LDAP provider of interest is operating in LDAPS or START_TLS modes, 
certificates will additionally be needed.
+Of particular note, is the `AUTH` environment variable which is set to 
`ldap`.  Additionally, the user must provide a
+DN as provided by the configured LDAP server in the 
`INITIAL_ADMIN_IDENTITY` environment variable. This value will be 
+used to seed the instance with an initial user with administrative 
privileges.  Finally, this command makes use of a 
+volume to provide certificates on the host system to the container 
instance.
+
+ For a minimal, connection to an LDAP server using SIMPLE 
authentication:
+
+docker run --name nifi \
+  -v /User/dreynolds/certs/localhost:/opt/certs \
+  -p 18443:8443 \
+  -e AUTH=tls \
--- End diff --

This looks like an error - the previous paragraph says 
   `AUTH` environment variable which is set to `ldap`.

What is the value supposed to be?


---


[jira] [Commented] (NIFIREG-149) NiFi Registry UI never directs user to login page for LDAP

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386761#comment-16386761
 ] 

ASF GitHub Bot commented on NIFIREG-149:


Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/104
  
Reviewing...


> NiFi Registry UI never directs user to login page for LDAP
> --
>
> Key: NIFIREG-149
> URL: https://issues.apache.org/jira/browse/NIFIREG-149
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Kevin Doran
>Assignee: Scott Aslan
>Priority: Blocker
>
> This does not affect any released version, but [the current master (as of 
> 2018-03-05)|https://github.com/apache/nifi-registry/commit/24039e63dbb33d61b88235532f059bec8d3a0617]
>  exhibits this behavior:
> When using  LDAP login, the auth-guard for the UI is not falling through to 
> the login page. It is getting stuck in an infinite loop attempting to use 
> Kerberos ticket exchange and certificates. 
> Steps to reproduce: Configure NiFi Registry with LDAP identity provider / 
> user group provider. Access any UI resource such as /nifi-registry or 
> /nifi-registry/login. You will be stuck in an infinite loop attempting to 
> check the current browser credentials (can view this in the developer tools 
> -> network tab)
> *Expected behavior:* It should try Kerberos, then certs, then present the 
> user with the user/pass Login page where I can enter LDAP credentials.
> I did not have a NiFi Registry JWT in my browser. This was tested with latest 
> Chrome for macOS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry issue #104: [NIFIREG-149] directs user to login page for LDAP

2018-03-05 Thread kevdoran
Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/104
  
Reviewing...


---


[jira] [Commented] (NIFIREG-149) NiFi Registry UI never directs user to login page for LDAP

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386755#comment-16386755
 ] 

ASF GitHub Bot commented on NIFIREG-149:


GitHub user scottyaslan opened a pull request:

https://github.com/apache/nifi-registry/pull/104

[NIFIREG-149] directs user to login page for LDAP



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/scottyaslan/nifi-registry NIFIREG-149

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/104.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #104


commit 30b8e19b84acc34fbe20606cf3ec168f371f69b2
Author: Scott Aslan 
Date:   2018-03-05T19:05:38Z

[NIFIREG-149] directs user to login page for LDAP




> NiFi Registry UI never directs user to login page for LDAP
> --
>
> Key: NIFIREG-149
> URL: https://issues.apache.org/jira/browse/NIFIREG-149
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Kevin Doran
>Assignee: Scott Aslan
>Priority: Blocker
>
> This does not affect any released version, but [the current master (as of 
> 2018-03-05)|https://github.com/apache/nifi-registry/commit/24039e63dbb33d61b88235532f059bec8d3a0617]
>  exhibits this behavior:
> When using  LDAP login, the auth-guard for the UI is not falling through to 
> the login page. It is getting stuck in an infinite loop attempting to use 
> Kerberos ticket exchange and certificates. 
> Steps to reproduce: Configure NiFi Registry with LDAP identity provider / 
> user group provider. Access any UI resource such as /nifi-registry or 
> /nifi-registry/login. You will be stuck in an infinite loop attempting to 
> check the current browser credentials (can view this in the developer tools 
> -> network tab)
> *Expected behavior:* It should try Kerberos, then certs, then present the 
> user with the user/pass Login page where I can enter LDAP credentials.
> I did not have a NiFi Registry JWT in my browser. This was tested with latest 
> Chrome for macOS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #104: [NIFIREG-149] directs user to login page fo...

2018-03-05 Thread scottyaslan
GitHub user scottyaslan opened a pull request:

https://github.com/apache/nifi-registry/pull/104

[NIFIREG-149] directs user to login page for LDAP



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/scottyaslan/nifi-registry NIFIREG-149

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/104.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #104


commit 30b8e19b84acc34fbe20606cf3ec168f371f69b2
Author: Scott Aslan 
Date:   2018-03-05T19:05:38Z

[NIFIREG-149] directs user to login page for LDAP




---


[jira] [Commented] (NIFI-4925) Ranger Authorizer - Memory Leak

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386736#comment-16386736
 ] 

ASF GitHub Bot commented on NIFI-4925:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2511
  
@mcgilman this looks good. Was able to read through the code and I think it 
does what is expected. Was able to verify that the issue no longer exists. +1 
merged to master. Thanks!


> Ranger Authorizer - Memory Leak
> ---
>
> Key: NIFI-4925
> URL: https://issues.apache.org/jira/browse/NIFI-4925
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.6.0
>
>
> Authorization requests/results are now explicitly audited. This change was 
> made because Ranger was previously auditing a lot of false positives. This 
> is partly because NiFi uses authorization checks to determine which features 
> a user may have permission to use; these checks are used to enable/disable 
> various parts of the UI. The remainder of the false positives came from the 
> authorizer not knowing the entire context of the request. For instance, when 
> a Processor has no policy we check its parent, and so on.
> The memory leak is due to the authorizer holding onto authorization results 
> that are never destined for auditing.
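
To illustrate the general pattern being described (a generic sketch, not the 
Ranger authorizer's actual code): results are retained pending an audit call 
that, for UI permission probes, never arrives, so the collection only grows.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Generic illustration of the described leak pattern -- not NiFi/Ranger source.
    class AuditingAuthorizerSketch {
        private final Map<String, String> pendingAuditResults = new ConcurrentHashMap<>();

        String authorize(String requestId, String resource) {
            String result = "Approved"; // real decision logic elided
            pendingAuditResults.put(requestId, result); // retained until audited...
            return result;
        }

        void audit(String requestId) {
            // ...but requests that are never audited are never removed,
            // so their entries accumulate indefinitely.
            pendingAuditResults.remove(requestId);
        }
    }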



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4925) Ranger Authorizer - Memory Leak

2018-03-05 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-4925:
-
   Resolution: Fixed
Fix Version/s: 1.6.0
   Status: Resolved  (was: Patch Available)

> Ranger Authorizer - Memory Leak
> ---
>
> Key: NIFI-4925
> URL: https://issues.apache.org/jira/browse/NIFI-4925
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.6.0
>
>
> Authorization requests/results are now explicitly audited. This change was 
> made because Ranger was previously auditing a lot of false positives. This 
> is partly because NiFi uses authorization checks to determine which features 
> a user may have permission to use; these checks are used to enable/disable 
> various parts of the UI. The remainder of the false positives came from the 
> authorizer not knowing the entire context of the request. For instance, when 
> a Processor has no policy we check its parent, and so on.
> The memory leak is due to the authorizer holding onto authorization results 
> that are never destined for auditing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4925) Ranger Authorizer - Memory Leak

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386734#comment-16386734
 ] 

ASF GitHub Bot commented on NIFI-4925:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2511


> Ranger Authorizer - Memory Leak
> ---
>
> Key: NIFI-4925
> URL: https://issues.apache.org/jira/browse/NIFI-4925
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.6.0
>
>
> Authorization requests/results are now explicitly audited. This change was 
> made because Ranger was previously auditing a lot of false positives. This 
> is partly because NiFi uses authorization checks to determine which features 
> a user may have permission to use; these checks are used to enable/disable 
> various parts of the UI. The remainder of the false positives came from the 
> authorizer not knowing the entire context of the request. For instance, when 
> a Processor has no policy we check its parent, and so on.
> The memory leak is due to the authorizer holding onto authorization results 
> that are never destined for auditing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2511: NIFI-4925: Ranger Authorizer Memory Leak

2018-03-05 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2511
  
@mcgilman this looks good. Was able to read through the code and I think it 
does what is expected. Was able to verify that the issue no longer exists. +1 
merged to master. Thanks!


---


[GitHub] nifi pull request #2511: NIFI-4925: Ranger Authorizer Memory Leak

2018-03-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2511


---


[jira] [Commented] (NIFI-4872) NIFI component high resource usage annotation

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386725#comment-16386725
 ] 

ASF GitHub Bot commented on NIFI-4872:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2475
  
@jtstorck this looks good to me. There was a simple merge conflict in the 
import statements, but I was able to address that. Otherwise, I think this is 
all a great step forward. I do agree that we will likely need more PRs later 
to further enrich the existing processors, but this lays the groundwork for it 
all, so it makes sense to merge it in as-is. So +1, merged to master. Thanks 
for getting this knocked out!


> NIFI component high resource usage annotation
> -
>
> Key: NIFI-4872
> URL: https://issues.apache.org/jira/browse/NIFI-4872
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Core UI
>Affects Versions: 1.5.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Critical
> Fix For: 1.6.0
>
>
> NiFi Processors currently have no means to relay whether or not they may be 
> resource intensive. The idea here would be to introduce an Annotation that 
> can be added to Processors to indicate they may cause high memory, disk, 
> CPU, or network usage. For instance, any Processor that reads the FlowFile 
> contents into memory (like many XML Processors) may cause high memory usage. 
> What ultimately determines whether there is high memory/disk/CPU/network 
> usage will depend on the FlowFiles being processed. With many of these 
> components in the dataflow, the risk of OutOfMemoryErrors and performance 
> degradation increases.
> The annotation should support one value from a fixed list: CPU, Disk, 
> Memory, Network. It should also allow the developer to provide a custom 
> description of the scenario under which the component would fall into the 
> high-usage category. The annotation should be able to be specified multiple 
> times, for as many resources as it has the potential to use heavily.
> By marking components with this new Annotation, we can update the generated 
> Processor documentation to include this fact.
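
As a hedged sketch of what such an annotation could look like (the names below 
are illustrative, not the annotation that was actually merged into NiFi):

    import java.lang.annotation.Documented;
    import java.lang.annotation.ElementType;
    import java.lang.annotation.Repeatable;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Illustrative only -- hypothetical names, not the merged NiFi API.
    enum ResourceType { CPU, DISK, MEMORY, NETWORK }

    @Documented
    @Target(ElementType.TYPE)
    @Retention(RetentionPolicy.RUNTIME)
    @Repeatable(HighResourceUsages.class)
    @interface HighResourceUsage {
        ResourceType resource();
        String description() default "";
    }

    // Container annotation required by @Repeatable.
    @Documented
    @Target(ElementType.TYPE)
    @Retention(RetentionPolicy.RUNTIME)
    @interface HighResourceUsages {
        HighResourceUsage[] value();
    }

A processor class could then be tagged, for example, with 
@HighResourceUsage(resource = ResourceType.MEMORY, description = "Loads the 
entire FlowFile content into memory"), once per resource it may stress.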



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4872) NIFI component high resource usage annotation

2018-03-05 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-4872:
-
   Resolution: Fixed
Fix Version/s: 1.6.0
   Status: Resolved  (was: Patch Available)

> NIFI component high resource usage annotation
> -
>
> Key: NIFI-4872
> URL: https://issues.apache.org/jira/browse/NIFI-4872
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Core UI
>Affects Versions: 1.5.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Critical
> Fix For: 1.6.0
>
>
> NiFi Processors currently have no means to relay whether or not they may be 
> resource intensive. The idea here would be to introduce an Annotation that 
> can be added to Processors to indicate they may cause high memory, disk, 
> CPU, or network usage. For instance, any Processor that reads the FlowFile 
> contents into memory (like many XML Processors) may cause high memory usage. 
> What ultimately determines whether there is high memory/disk/CPU/network 
> usage will depend on the FlowFiles being processed. With many of these 
> components in the dataflow, the risk of OutOfMemoryErrors and performance 
> degradation increases.
> The annotation should support one value from a fixed list: CPU, Disk, 
> Memory, Network. It should also allow the developer to provide a custom 
> description of the scenario under which the component would fall into the 
> high-usage category. The annotation should be able to be specified multiple 
> times, for as many resources as it has the potential to use heavily.
> By marking components with this new Annotation, we can update the generated 
> Processor documentation to include this fact.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4872) NIFI component high resource usage annotation

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386721#comment-16386721
 ] 

ASF GitHub Bot commented on NIFI-4872:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2475


> NIFI component high resource usage annotation
> -
>
> Key: NIFI-4872
> URL: https://issues.apache.org/jira/browse/NIFI-4872
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Core UI
>Affects Versions: 1.5.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Critical
>
> NiFi Processors currently have no means to relay whether or not they may be 
> resource intensive. The idea here would be to introduce an Annotation that 
> can be added to Processors to indicate they may cause high memory, disk, 
> CPU, or network usage. For instance, any Processor that reads the FlowFile 
> contents into memory (like many XML Processors) may cause high memory usage. 
> What ultimately determines whether there is high memory/disk/CPU/network 
> usage will depend on the FlowFiles being processed. With many of these 
> components in the dataflow, the risk of OutOfMemoryErrors and performance 
> degradation increases.
> The annotation should support one value from a fixed list: CPU, Disk, 
> Memory, Network. It should also allow the developer to provide a custom 
> description of the scenario under which the component would fall into the 
> high-usage category. The annotation should be able to be specified multiple 
> times, for as many resources as it has the potential to use heavily.
> By marking components with this new Annotation, we can update the generated 
> Processor documentation to include this fact.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2475: NIFI-4872 Added annotation for specifying scenarios...

2018-03-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2475


---


[jira] [Commented] (NIFI-4292) PutElasticSearchHTTP Missing error message

2018-03-05 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386570#comment-16386570
 ] 

Pierre Villard commented on NIFI-4292:
--

Hi [~kimtux] and [~jsmith1]. I believe this will be resolved in 1.6.0 thanks to 
NIFI-4410. I'll mark this Jira as a duplicate of the other one. Feel free to 
comment if this issue is not solved when NiFi 1.6.0 is out (or you can confirm 
the fix by building the master branch).

> PutElasticSearchHTTP Missing error message
> --
>
> Key: NIFI-4292
> URL: https://issues.apache.org/jira/browse/NIFI-4292
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Nicholas Carenza
>Priority: Minor
> Fix For: 1.6.0
>
>
> My logs are filled with errors from PutElasticSearchHTTP but no failure 
> reason.
> "2017-08-14 15:15:23,949 ERROR [Timer-Driven Process Thread-1] 
> o.a.n.p.e.PutElasticsearchHttp 
> PutElasticsearchHttp[id=c255d966-015c-1000-2c69-df80f890bb83] Failed to 
> insert 
> StandardFlowFileRecord[uuid=30c45a2a-2f94-4af1-8028-be2a6116c331,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1502685936537-26229028, 
> container=default, section=292], offset=732341, 
> length=10149],offset=0,name=23173788705467383,size=10149] into Elasticsearch 
> due to , transferring to failure"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-4292) PutElasticSearchHTTP Missing error message

2018-03-05 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-4292.
--
   Resolution: Fixed
Fix Version/s: 1.6.0

> PutElasticSearchHTTP Missing error message
> --
>
> Key: NIFI-4292
> URL: https://issues.apache.org/jira/browse/NIFI-4292
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Nicholas Carenza
>Priority: Minor
> Fix For: 1.6.0
>
>
> My logs are filled with errors from PutElasticSearchHTTP but no failure 
> reason.
> "2017-08-14 15:15:23,949 ERROR [Timer-Driven Process Thread-1] 
> o.a.n.p.e.PutElasticsearchHttp 
> PutElasticsearchHttp[id=c255d966-015c-1000-2c69-df80f890bb83] Failed to 
> insert 
> StandardFlowFileRecord[uuid=30c45a2a-2f94-4af1-8028-be2a6116c331,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1502685936537-26229028, 
> container=default, section=292], offset=732341, 
> length=10149],offset=0,name=23173788705467383,size=10149] into Elasticsearch 
> due to , transferring to failure"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4246) OAuth 2 Authorization support - Client Credentials Grant

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386565#comment-16386565
 ] 

ASF GitHub Bot commented on NIFI-4246:
--

Github user jdye64 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2085#discussion_r172295045
  
--- Diff: 
nifi-nar-bundles/nifi-oauth-bundle/nifi-oauth/src/main/java/org/apache/nifi/oauth/AbstractOAuthControllerService.java
 ---
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.oauth;
+
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+
+
+public abstract class AbstractOAuthControllerService
+extends AbstractControllerService implements OAuth2ClientService {
+
+protected String accessToken = null;
+protected String refreshToken = null;
+protected String tokenType = null;
+protected long expiresIn = -1;
+protected long expiresTime = -1;
+protected long lastResponseTimestamp = -1;
+protected Map extraHeaders = new HashMap();
+protected String authUrl = null;
+protected long expireTimeSafetyNetSeconds = -1;
+protected String accessTokenRespName = null;
+protected String expireTimeRespName = null;
+protected String expireInRespName = null;
+protected String tokenTypeRespName = null;
+protected String scopeRespName = null;
+
+public static final PropertyDescriptor AUTH_SERVER_URL = new 
PropertyDescriptor
+.Builder().name("OAuth2 Authorization Server URL")
--- End diff --

Good point. I will change the name and enable expression language by default.


> OAuth 2 Authorization support - Client Credentials Grant
> 
>
> Key: NIFI-4246
> URL: https://issues.apache.org/jira/browse/NIFI-4246
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>Priority: Major
>
> If you're interacting with REST endpoints on the web, chances are you are going 
> to run into an OAuth2-secured web service. The IETF (Internet Engineering Task 
> Force) defines four methods by which OAuth2 authorization can occur. This JIRA 
> is focused solely on the Client Credentials Grant method defined at 
> https://tools.ietf.org/html/rfc6749#section-4.4
> This implementation should provide a ControllerService in which the end user 
> can configure the credentials for obtaining the authorization grant (access 
> token) from the resource owner. In turn, a new property will be added to the 
> InvokeHTTP processor (if it doesn't already exist from one of the other JIRA 
> efforts similar to this one) where the processor can reference this 
> controller service to obtain the access token and insert the appropriate HTTP 
> header (Authorization: Bearer {access_token}) so that the InvokeHTTP processor 
> can interact with the OAuth-protected resources without having to set up the 
> credentials for each InvokeHTTP processor, saving time and complexity.
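For reference, a minimal sketch (not part of the PR) of the token request that the Client Credentials Grant in RFC 6749 section 4.4 involves; the endpoint URL, client id, and client secret below are placeholders:

```
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch only: a Client Credentials Grant token request per RFC 6749 section 4.4.
// The URL, client id, and client secret are placeholders.
public class ClientCredentialsExample {
    public static void main(String[] args) throws Exception {
        String tokenUrl = "https://auth.example.com/oauth2/token"; // placeholder
        String clientId = "my-client-id";                          // placeholder
        String clientSecret = "my-client-secret";                  // placeholder

        HttpURLConnection conn = (HttpURLConnection) new URL(tokenUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Client authentication via HTTP Basic, as allowed by the spec
        String basic = Base64.getEncoder().encodeToString(
                (clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + basic);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        byte[] body = "grant_type=client_credentials".getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```

The access_token field in the JSON response is what such a controller service would cache and hand to InvokeHTTP for the Authorization: Bearer header.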



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2085: NIFI-4246 - Client Credentials Grant based OAuth2 C...

2018-03-05 Thread jdye64
Github user jdye64 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2085#discussion_r172295045
  
--- Diff: 
nifi-nar-bundles/nifi-oauth-bundle/nifi-oauth/src/main/java/org/apache/nifi/oauth/AbstractOAuthControllerService.java
 ---
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.oauth;
+
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+
+
+public abstract class AbstractOAuthControllerService
+extends AbstractControllerService implements OAuth2ClientService {
+
+protected String accessToken = null;
+protected String refreshToken = null;
+protected String tokenType = null;
+protected long expiresIn = -1;
+protected long expiresTime = -1;
+protected long lastResponseTimestamp = -1;
+protected Map extraHeaders = new HashMap();
+protected String authUrl = null;
+protected long expireTimeSafetyNetSeconds = -1;
+protected String accessTokenRespName = null;
+protected String expireTimeRespName = null;
+protected String expireInRespName = null;
+protected String tokenTypeRespName = null;
+protected String scopeRespName = null;
+
+public static final PropertyDescriptor AUTH_SERVER_URL = new 
PropertyDescriptor
+.Builder().name("OAuth2 Authorization Server URL")
--- End diff --

Good point. I will change the name and enable expression language by default.


---


[jira] [Commented] (NIFI-4246) OAuth 2 Authorization support - Client Credentials Grant

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386563#comment-16386563
 ] 

ASF GitHub Bot commented on NIFI-4246:
--

Github user jdye64 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2085#discussion_r172294695
  
--- Diff: 
nifi-nar-bundles/nifi-oauth-bundle/nifi-oauth/src/main/java/org/apache/nifi/oauth/AbstractOAuthControllerService.java
 ---
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.oauth;
+
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+
+
+public abstract class AbstractOAuthControllerService
+extends AbstractControllerService implements OAuth2ClientService {
+
+protected String accessToken = null;
+protected String refreshToken = null;
+protected String tokenType = null;
+protected long expiresIn = -1;
+protected long expiresTime = -1;
+protected long lastResponseTimestamp = -1;
+protected Map extraHeaders = new HashMap();
+protected String authUrl = null;
+protected long expireTimeSafetyNetSeconds = -1;
+protected String accessTokenRespName = null;
+protected String expireTimeRespName = null;
+protected String expireInRespName = null;
+protected String tokenTypeRespName = null;
+protected String scopeRespName = null;
+
+public static final PropertyDescriptor AUTH_SERVER_URL = new 
PropertyDescriptor
+.Builder().name("OAuth2 Authorization Server URL")
+.displayName("OAuth2 Authorization Server")
+.description("OAuth2 Authorization Server that grants access 
to the protected resources on the behalf of the resource owner.")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor 
RESPONSE_ACCESS_TOKEN_FIELD_NAME = new PropertyDescriptor
+.Builder().name("JSON response 'access_token' name")
+.displayName("JSON response 'access_token' name")
+.description("Name of the field in the JSON response that 
contains the access token. IETF OAuth2 spec default is 'access_token' if your 
API provider's" +
+" response field is different this is where you can 
change that.")
+.defaultValue("access_token")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor RESPONSE_EXPIRE_TIME_FIELD_NAME 
= new PropertyDescriptor
+.Builder().name("JSON response 'expire_time' name")
+.displayName("JSON response 'expire_time' name")
+.description("Name of the field in the JSON response that 
contains the expire time. IETF OAuth2 spec default is 'expire_time' if your API 
provider's" +
+" response field is different this is where you can 
change that.")
+.defaultValue("expire_time")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor RESPONSE_EXPIRE_IN_FIELD_NAME = 
new PropertyDescriptor
+.Builder().name("JSON response 'expire_in' name")
+.displayName("JSON response 'expire_in' name")
+.description("Name of the field in the JSON response that 
contains the expire in. IETF OAuth2 spec default is 'expire_in' if 

[GitHub] nifi pull request #2085: NIFI-4246 - Client Credentials Grant based OAuth2 C...

2018-03-05 Thread jdye64
Github user jdye64 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2085#discussion_r172294695
  
--- Diff: 
nifi-nar-bundles/nifi-oauth-bundle/nifi-oauth/src/main/java/org/apache/nifi/oauth/AbstractOAuthControllerService.java
 ---
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.oauth;
+
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+
+
+public abstract class AbstractOAuthControllerService
+extends AbstractControllerService implements OAuth2ClientService {
+
+protected String accessToken = null;
+protected String refreshToken = null;
+protected String tokenType = null;
+protected long expiresIn = -1;
+protected long expiresTime = -1;
+protected long lastResponseTimestamp = -1;
+protected Map extraHeaders = new HashMap();
+protected String authUrl = null;
+protected long expireTimeSafetyNetSeconds = -1;
+protected String accessTokenRespName = null;
+protected String expireTimeRespName = null;
+protected String expireInRespName = null;
+protected String tokenTypeRespName = null;
+protected String scopeRespName = null;
+
+public static final PropertyDescriptor AUTH_SERVER_URL = new 
PropertyDescriptor
+.Builder().name("OAuth2 Authorization Server URL")
+.displayName("OAuth2 Authorization Server")
+.description("OAuth2 Authorization Server that grants access 
to the protected resources on the behalf of the resource owner.")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor 
RESPONSE_ACCESS_TOKEN_FIELD_NAME = new PropertyDescriptor
+.Builder().name("JSON response 'access_token' name")
+.displayName("JSON response 'access_token' name")
+.description("Name of the field in the JSON response that 
contains the access token. IETF OAuth2 spec default is 'access_token' if your 
API provider's" +
+" response field is different this is where you can 
change that.")
+.defaultValue("access_token")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor RESPONSE_EXPIRE_TIME_FIELD_NAME 
= new PropertyDescriptor
+.Builder().name("JSON response 'expire_time' name")
+.displayName("JSON response 'expire_time' name")
+.description("Name of the field in the JSON response that 
contains the expire time. IETF OAuth2 spec default is 'expire_time' if your API 
provider's" +
+" response field is different this is where you can 
change that.")
+.defaultValue("expire_time")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor RESPONSE_EXPIRE_IN_FIELD_NAME = 
new PropertyDescriptor
+.Builder().name("JSON response 'expire_in' name")
+.displayName("JSON response 'expire_in' name")
+.description("Name of the field in the JSON response that 
contains the expire in. IETF OAuth2 spec default is 'expire_in' if your API 
provider's" +
+" response field is different this is where you can 
change that.")
+.defaultValue("expire_in")
+.required(true)
+

[GitHub] nifi pull request #2085: NIFI-4246 - Client Credentials Grant based OAuth2 C...

2018-03-05 Thread alopresto
Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2085#discussion_r172294029
  
--- Diff: 
nifi-nar-bundles/nifi-oauth-bundle/nifi-oauth/src/main/java/org/apache/nifi/oauth/AbstractOAuthControllerService.java
 ---
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.oauth;
+
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+
+
+public abstract class AbstractOAuthControllerService
+extends AbstractControllerService implements OAuth2ClientService {
+
+protected String accessToken = null;
+protected String refreshToken = null;
+protected String tokenType = null;
+protected long expiresIn = -1;
+protected long expiresTime = -1;
+protected long lastResponseTimestamp = -1;
+protected Map extraHeaders = new HashMap();
+protected String authUrl = null;
+protected long expireTimeSafetyNetSeconds = -1;
+protected String accessTokenRespName = null;
+protected String expireTimeRespName = null;
+protected String expireInRespName = null;
+protected String tokenTypeRespName = null;
+protected String scopeRespName = null;
+
+public static final PropertyDescriptor AUTH_SERVER_URL = new 
PropertyDescriptor
+.Builder().name("OAuth2 Authorization Server URL")
+.displayName("OAuth2 Authorization Server")
+.description("OAuth2 Authorization Server that grants access 
to the protected resources on the behalf of the resource owner.")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor 
RESPONSE_ACCESS_TOKEN_FIELD_NAME = new PropertyDescriptor
+.Builder().name("JSON response 'access_token' name")
+.displayName("JSON response 'access_token' name")
+.description("Name of the field in the JSON response that 
contains the access token. IETF OAuth2 spec default is 'access_token' if your 
API provider's" +
+" response field is different this is where you can 
change that.")
+.defaultValue("access_token")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor RESPONSE_EXPIRE_TIME_FIELD_NAME 
= new PropertyDescriptor
+.Builder().name("JSON response 'expire_time' name")
+.displayName("JSON response 'expire_time' name")
+.description("Name of the field in the JSON response that 
contains the expire time. IETF OAuth2 spec default is 'expire_time' if your API 
provider's" +
+" response field is different this is where you can 
change that.")
+.defaultValue("expire_time")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor RESPONSE_EXPIRE_IN_FIELD_NAME = 
new PropertyDescriptor
+.Builder().name("JSON response 'expire_in' name")
+.displayName("JSON response 'expire_in' name")
+.description("Name of the field in the JSON response that 
contains the expire in. IETF OAuth2 spec default is 'expire_in' if your API 
provider's" +
+" response field is different this is where you can 
change that.")
+.defaultValue("expire_in")
+.required(true)
+

[jira] [Assigned] (NIFIREG-149) NiFi Registry UI never directs user to login page for LDAP

2018-03-05 Thread Scott Aslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan reassigned NIFIREG-149:
---

Assignee: Scott Aslan

> NiFi Registry UI never directs user to login page for LDAP
> --
>
> Key: NIFIREG-149
> URL: https://issues.apache.org/jira/browse/NIFIREG-149
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Kevin Doran
>Assignee: Scott Aslan
>Priority: Blocker
>
> This does not affect any released version, but [the current master (as of 
> 2018-03-05)|https://github.com/apache/nifi-registry/commit/24039e63dbb33d61b88235532f059bec8d3a0617]
>  exhibits this behavior:
> When using LDAP login, the auth-guard for the UI is not falling through to 
> the login page. It is getting stuck in an infinite loop attempting to use 
> Kerberos ticket exchange and certificates. 
> Steps to reproduce: Configure NiFi Registry with LDAP identity provider / 
> user group provider. Access any UI resource such as /nifi-registry or 
> /nifi-registry/login. You will be stuck in an infinite loop attempting to 
> check the current browser credentials (can view this in the developer tools 
> -> network tab)
> *Expected behavior:* It should try Kerberos, then certs, then present the 
> user with the user/pass Login page where they can enter LDAP credentials.
> I did not have a NiFi Registry JWT in my browser. This was tested with the 
> latest Chrome for macOS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3599) Add nifi.properties value to globally set the default backpressure size threshold for each connection

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386508#comment-16386508
 ] 

ASF GitHub Bot commented on NIFI-3599:
--

Github user mosermw commented on the issue:

https://github.com/apache/nifi/pull/2497
  
@markap14 and @mcgilman I did consider that backpressure settings didn't 
really belong in AboutDTO. The BannerDTO also pulls information from 
nifi.properties, but I didn't think backpressure fit there either.  I didn't 
want to further expand the API by adding something like BackpressureDTO at a 
/nifi-api/flow/backpressure endpoint, but maybe that's the preferred approach? 
Or perhaps a PropertiesDTO at /nifi-api/flow/properties to do something more 
generic?


> Add nifi.properties value to globally set the default backpressure size 
> threshold for each connection
> -
>
> Key: NIFI-3599
> URL: https://issues.apache.org/jira/browse/NIFI-3599
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Dyer
>Assignee: Michael Moser
>Priority: Major
>
> By default each new connection added to the workflow canvas will have a 
> default backpressure size threshold of 10,000 objects. While the threshold 
> can be changed on a connection level it would be convenient to have a global 
> mechanism for setting that value to something other than 10,000. This 
> enhancement would add a property to nifi.properties that would allow for this 
> threshold to be set globally unless otherwise overridden at the connection 
> level.
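For illustration, a minimal sketch of how such a global default could be read from nifi.properties, assuming NiFiProperties.getProperty(key, default); the property name shown here is hypothetical:

```
import org.apache.nifi.util.NiFiProperties;

// Sketch only: read a global default for the backpressure object threshold from
// nifi.properties. The property name is hypothetical, not an agreed-upon key.
public class BackpressureDefaultSketch {

    static long defaultObjectThreshold(final NiFiProperties props) {
        // Fall back to the current hard-coded default of 10,000 objects
        return Long.parseLong(props.getProperty("nifi.queue.backpressure.count.default", "10000"));
    }
}
```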



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2497: NIFI-3599 Allowed back pressure object count and data size...

2018-03-05 Thread mosermw
Github user mosermw commented on the issue:

https://github.com/apache/nifi/pull/2497
  
@markap14 and @mcgilman I did consider that backpressure settings didn't 
really belong in AboutDTO. The BannerDTO also pulls information from 
nifi.properties, but I didn't think backpressure fit there either.  I didn't 
want to further expand the API by adding something like BackpressureDTO at a 
/nifi-api/flow/backpressure endpoint, but maybe that's the preferred approach? 
Or perhaps a PropertiesDTO at /nifi-api/flow/properties to do something more 
generic?


---


[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386463#comment-16386463
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user bdesert commented on the issue:

https://github.com/apache/nifi/pull/2478
  
@bbende , thank you!
Both comments make sense. Will commit these changes soon.


> NIFI-4833 Add ScanHBase processor
> -
>
> Key: NIFI-4833
> URL: https://issues.apache.org/jira/browse/NIFI-4833
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
>
> Add ScanHBase (new) processor to retrieve records from HBase tables.
> Today there are GetHBase and FetchHBaseRow. GetHBase can pull entire table or 
> only new rows after processor started; it also must be scheduled and doesn't 
> support incoming . FetchHBaseRow can pull rows with known rowkeys only.
> This processor could provide functionality similar to what could be reached 
> by using hbase shell, defining following properties:
> -scan based on range of row key IDs 
> -scan based on range of time stamps
> -limit number of records pulled
> -use filters
> -reverse rows



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2478: NIFI-4833 Add scanHBase Processor

2018-03-05 Thread bdesert
Github user bdesert commented on the issue:

https://github.com/apache/nifi/pull/2478
  
@bbende , thank you!
Both comments make sense. Will commit these changes soon.


---


[jira] [Updated] (NIFI-4925) Ranger Authorizer - Memory Leak

2018-03-05 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-4925:
--
Status: Patch Available  (was: In Progress)

> Ranger Authorizer - Memory Leak
> ---
>
> Key: NIFI-4925
> URL: https://issues.apache.org/jira/browse/NIFI-4925
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
>
> Authorization requests/results are now explicitly audited. This change was 
> due to the fact that Ranger was auditing a lot of false positives 
> previously. This is partly because NiFi uses authorization to check which 
> features the user may have permission to use. This check is used to 
> enable/disable various parts of the UI. The remainder of the false positives 
> came from the authorizer not knowing the entire context of the request. For 
> instance, when a Processor has no policy we check its parent and so on.
> The memory leak is due to the authorizer holding onto authorization results 
> that are never destined for auditing. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2511: NIFI-4925: Ranger Authorizer Memory Leak

2018-03-05 Thread mcgilman
GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2511

NIFI-4925: Ranger Authorizer Memory Leak

NIFI-4925:
- Addressing memory leak from lingering authorization results that did not 
represent actual access attempts.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-4925

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2511.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2511


commit d8e79a9427bb36f39290feb85ae908e8426f245a
Author: Matt Gilman 
Date:   2018-03-02T21:24:34Z

NIFI-4925:
- Addressing memory leak from lingering authorization results that did not 
represent actual access attempts.




---


[jira] [Commented] (NIFI-4925) Ranger Authorizer - Memory Leak

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386436#comment-16386436
 ] 

ASF GitHub Bot commented on NIFI-4925:
--

GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2511

NIFI-4925: Ranger Authorizer Memory Leak

NIFI-4925:
- Addressing memory leak from lingering authorization results that did not 
represent actual access attempts.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-4925

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2511.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2511


commit d8e79a9427bb36f39290feb85ae908e8426f245a
Author: Matt Gilman 
Date:   2018-03-02T21:24:34Z

NIFI-4925:
- Addressing memory leak from lingering authorization results that did not 
represent actual access attempts.




> Ranger Authorizer - Memory Leak
> ---
>
> Key: NIFI-4925
> URL: https://issues.apache.org/jira/browse/NIFI-4925
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
>
> Authorization requests/results are now explicitly audited. This change was 
> due to the fact that Ranger was auditing a lot of false positives 
> previously. This is partly because NiFi uses authorization to check which 
> features the user may have permission to use. This check is used to 
> enable/disable various parts of the UI. The remainder of the false positives 
> came from the authorizer not knowing the entire context of the request. For 
> instance, when a Processor has no policy we check its parent and so on.
> The memory leak is due to the authorizer holding onto authorization results 
> that are never destined for auditing. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFIREG-149) NiFi Registry UI never directs user to login page for LDAP

2018-03-05 Thread Kevin Doran (JIRA)
Kevin Doran created NIFIREG-149:
---

 Summary: NiFi Registry UI never directs user to login page for LDAP
 Key: NIFIREG-149
 URL: https://issues.apache.org/jira/browse/NIFIREG-149
 Project: NiFi Registry
  Issue Type: Bug
Reporter: Kevin Doran


This does not affect any released version, but [the current master (as of 
2018-03-05)|https://github.com/apache/nifi-registry/commit/24039e63dbb33d61b88235532f059bec8d3a0617]
 exhibits this behavior:

When using LDAP login, the auth-guard for the UI is not falling through to the 
login page. It is getting stuck in an infinite loop attempting to use Kerberos 
ticket exchange and certificates. 

Steps to reproduce: Configure NiFi Registry with LDAP identity provider / user 
group provider. Access any UI resource such as /nifi-registry or 
/nifi-registry/login. You will be stuck in an infinite loop attempting to check 
the current browser credentials (can view this in the developer tools -> 
network tab)

*Expected behavior:* It should try Kerberos, then certs, then present the user 
with the user/pass Login page where they can enter LDAP credentials.

I did not have a NiFi Registry JWT in my browser. This was tested with the 
latest Chrome for macOS.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386279#comment-16386279
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2478
  
One other question: what do you envision people will most likely do with the 
output of this processor?

The reason I'm asking is that I'm debating whether it makes sense to write 
multiple JSON documents to a single flow file without wrapping them in an 
array. GetHBase and FetchHBase didn't have this problem because they wrote a 
row per flow file (which probably wasn't a good idea for GetHBase).

As an example scenario, say we have a bunch of rows coming out of this 
processor using the col-qual-val format like:
```
{"id":"", "message":"The time is Mon Mar 05 10:20:07 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:21:03 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
```

If we then created a schema for this:
```
{
  "name": "scan",
  "namespace": "nifi",
  "type": "record",
  "fields": [
{ "name": "id", "type": "string" },
{ "name": "message", "type": "string" }
  ]
}
```
Then tried to use ConvertRecord with a JsonTreeReader and 
CsvRecordSetWriter, to convert from JSON to CSV, we get:
```
id,message
"",The time is Mon Mar 05 10:20:07 EST 2018
```
It only ends up converting the first JSON document because the 
JsonTreeReader doesn't know how to read multiple records unless it's a JSON 
array.

There may be cases where the current output makes sense, so I'm not saying 
to change it yet; I'm just trying to think about what the most common scenario 
will be.


> NIFI-4833 Add ScanHBase processor
> -
>
> Key: NIFI-4833
> URL: https://issues.apache.org/jira/browse/NIFI-4833
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
>
> Add ScanHBase (new) processor to retrieve records from HBase tables.
> Today there are GetHBase and FetchHBaseRow. GetHBase can pull entire table or 
> only new rows after processor started; it also must be scheduled and doesn't 
> support incoming . FetchHBaseRow can pull rows with known rowkeys only.
> This processor could provide functionality similar to what could be reached 
> by using hbase shell, defining following properties:
> -scan based on range of row key IDs 
> -scan based on range of time stamps
> -limit number of records pulled
> -use filters
> -reverse rows



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2478: NIFI-4833 Add scanHBase Processor

2018-03-05 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2478
  
One other question: what do you envision people will most likely do with the 
output of this processor?

The reason I'm asking is that I'm debating whether it makes sense to write 
multiple JSON documents to a single flow file without wrapping them in an 
array. GetHBase and FetchHBase didn't have this problem because they wrote a 
row per flow file (which probably wasn't a good idea for GetHBase).

As an example scenario, say we have a bunch of rows coming out of this 
processor using the col-qual-val format like:
```
{"id":"", "message":"The time is Mon Mar 05 10:20:07 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:21:03 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
{"id":"", "message":"The time is Mon Mar 05 10:22:44 EST 2018"}
```

If we then created a schema for this:
```
{
  "name": "scan",
  "namespace": "nifi",
  "type": "record",
  "fields": [
{ "name": "id", "type": "string" },
{ "name": "message", "type": "string" }
  ]
}
```
Then tried to use ConvertRecord with a JsonTreeReader and 
CsvRecordSetWriter, to convert from JSON to CSV, we get:
```
id,message
"",The time is Mon Mar 05 10:20:07 EST 2018
```
It only ends up converting the first JSON document because the 
JsonTreeReader doesn't know how to read multiple records unless it's a JSON 
array.

There may be cases where the current output makes sense, so I'm not saying 
to change it yet; I'm just trying to think about what the most common scenario 
will be.


---


[jira] [Commented] (NIFIREG-124) As a user I want the sidenav table sorting to persist when I open dialogs.

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386263#comment-16386263
 ] 

ASF GitHub Bot commented on NIFIREG-124:


Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/100
  
Reviewing...


> As a user I want the sidenav table sorting to persist when I open dialogs.
> --
>
> Key: NIFIREG-124
> URL: https://issues.apache.org/jira/browse/NIFIREG-124
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>Priority: Major
>
> When editing a group, if the user sorts the listed users in the group table 
> to be descending but then clicks the Add User button, the sort order in the 
> users-in-group table switches back to ascending, even if no change is made in 
> the Add User dialog. This bug also exists when editing a user as well as when 
> editing a bucket.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry issue #100: [NIFIREG-124] persist sidenav table sorting

2018-03-05 Thread kevdoran
Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/100
  
Reviewing...


---


[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386243#comment-16386243
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2478
  
@bdesert Thanks for the updates, was reviewing the code again and I think 
we need to change the way the `ScanHBaseResultHandler` works...

Currently it adds rows to a list in memory until bulk size is reached, and 
since bulk size defaults to 0, the default case will be that bulk size is never 
reached and all the rows are left as "hanging" rows. This means if someone 
scans a table with 1 million rows, all 1 million rows will be in memory before 
being written to the flow file, which would not be good for memory usage.

We should be able to write row by row to the flow file and never add them 
to a list. Inside the handler we can use `session.append(flowFile, (out) ->` to 
append a row at a time to the flow file. I think we can then do away with the 
"hanging rows" concept because there won't be anything buffered in memory.


> NIFI-4833 Add ScanHBase processor
> -
>
> Key: NIFI-4833
> URL: https://issues.apache.org/jira/browse/NIFI-4833
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
>
> Add ScanHBase (new) processor to retrieve records from HBase tables.
> Today there are GetHBase and FetchHBaseRow. GetHBase can pull entire table or 
> only new rows after processor started; it also must be scheduled and doesn't 
> support incoming . FetchHBaseRow can pull rows with known rowkeys only.
> This processor could provide functionality similar to what could be reached 
> by using hbase shell, defining following properties:
> -scan based on range of row key IDs 
> -scan based on range of time stamps
> -limit number of records pulled
> -use filters
> -reverse rows



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2478: NIFI-4833 Add scanHBase Processor

2018-03-05 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2478
  
@bdesert Thanks for the updates, was reviewing the code again and I think 
we need to change the way the `ScanHBaseResultHandler` works...

Currently it adds rows to a list in memory until bulk size is reached, and 
since bulk size defaults to 0, the default case will be that bulk size is never 
reached and all the rows are left as "hanging" rows. This means if someone 
scans a table with 1 million rows, all 1 million rows will be in memory before 
being written to the flow file, which would not be good for memory usage.

We should be able to write row by row to the flow file and never add them 
to a list. Inside the handler we can use `session.append(flowFile, (out) ->` to 
append a row at a time to the flow file. I think we can then do away with the 
"hanging rows" concept because there won't be anything buffered in memory.


---


[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386179#comment-16386179
 ] 

ASF GitHub Bot commented on NIFI-4838:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r172216534
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ---
@@ -121,34 +136,53 @@
 .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
 .build();
 
-static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
-.name("Batch Size")
-.description("The number of elements returned from the server 
in one batch")
+static final PropertyDescriptor FETCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("Fetch Size")
--- End diff --

I missed that it was `name` and not `displayName`.

Maybe what should be done here is to revert that change, and then have the 
commits happen either after each flowfile (when grouped into big flowfiles) or 
after each batch as defined in that property for the 1:1 result/flowfile option.


> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call because the effect is 
> that GetMongo looks like it has hung to a user who is pulling a very large 
> data set.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231
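For illustration, a minimal sketch of attaching the progress attributes named above to an outgoing flow file; the surrounding counters and class name are illustrative, not the actual GetMongo change:

```
import java.util.HashMap;
import java.util.Map;

import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;

// Sketch only: attach the progress attributes described in the ticket to a flow
// file. The start/end/estimate values would come from the processor's counters.
public class ProgressAttributeSketch {

    FlowFile addProgress(final ProcessSession session, FlowFile flowFile,
                         final long start, final long end, final long estimate) {
        final Map<String, String> attrs = new HashMap<>();
        attrs.put("query.progress.point.start", String.valueOf(start));
        attrs.put("query.progress.point.end", String.valueOf(end));
        attrs.put("query.count.estimate", String.valueOf(estimate));
        return session.putAllAttributes(flowFile, attrs);
    }
}
```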



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2448: NIFI-4838 Added configurable progressive commits to...

2018-03-05 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r172216534
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ---
@@ -121,34 +136,53 @@
 .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
 .build();
 
-static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
-.name("Batch Size")
-.description("The number of elements returned from the server 
in one batch")
+static final PropertyDescriptor FETCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("Fetch Size")
--- End diff --

I missed that it was `name` and not `displayName`.

Maybe what should be done here is to revert that change, and then have the 
commits happen either after each flowfile (when grouped into big flowfiles) or 
after each batch as defined in that property for the 1:1 result/flowfile option.


---


[jira] [Created] (NIFI-4933) GCS Google Cloud Storage Processor requires Project ID even if it is not necessary

2018-03-05 Thread Julian Gimbel (JIRA)
Julian Gimbel created NIFI-4933:
---

 Summary: GCS Google Cloud Storage Processor requires Project ID 
even if it is not necessary
 Key: NIFI-4933
 URL: https://issues.apache.org/jira/browse/NIFI-4933
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.5.0
Reporter: Julian Gimbel


Google Cloud Storage Processors like List or Fetch require the Project ID, but 
in the Google Cloud Storage code the Project ID is only required when creating 
a new bucket:

[https://github.com/GoogleCloudPlatform/google-cloud-java/blob/e3908826d28d24a5dd68866f1177994050dbe766/google-cloud-storage/src/main/java/com/google/cloud/storage/StorageOptions.java#L109]

With the project ID required, we cannot download data from buckets that we do 
not own, such as public data sets: 
[https://console.cloud.google.com/storage/browser/gcp-public-data-landsat/?_ga=2.37550372.-565124473.1518597165]

A solution would be to make the Project ID optional and enable the download of 
such data sets.
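For illustration, a minimal sketch using the google-cloud-storage Java client to read an object from a public bucket without explicitly configuring a project ID; the object name is a placeholder, and exact behavior without a default project depends on the client version and environment:

```
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

// Sketch only: a read does not require specifying a project ID, which is why the
// NiFi property could be optional. The object name below is a placeholder.
public class PublicBucketReadSketch {
    public static void main(String[] args) {
        Storage storage = StorageOptions.getDefaultInstance().getService();
        Blob blob = storage.get(BlobId.of("gcp-public-data-landsat", "index.csv.gz"));
        System.out.println("size = " + (blob != null ? blob.getSize() : -1));
    }
}
```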



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386123#comment-16386123
 ] 

ASF GitHub Bot commented on NIFI-4838:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r172198724
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ---
@@ -121,34 +136,53 @@
 .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
 .build();
 
-static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
-.name("Batch Size")
-.description("The number of elements returned from the server 
in one batch")
+static final PropertyDescriptor FETCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("Fetch Size")
--- End diff --

Changing a property's name will cause it to be recognized as a "different" 
property. This will cause all existing flows containing GetMongo to become 
invalid (the "Batch Size" property will show up at the bottom containing the 
existing value, but the framework will claim the processor is invalid because 
Batch Size is not a supported property).
That's why we use displayName() for the user-friendly name, so we can 
change it at will. I realize you did not have that luxury here, but we still 
would have to keep the name("Batch Size") and add displayName("Fetch Size"). 
This will be confusing in the code (until we change it for real, perhaps in 
NiFi 2.0?) but can be accompanied by documentation.
Also I'm still a little leery of changing the existing property to "fetch" 
vs "batch", then using "batch" in a different context in the added property. 
Would like to get some input from others on this as well.
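For illustration, a short sketch of the backward-compatible descriptor the comment describes; the description text and validator are carried over from the original Batch Size property, and the class name is illustrative:

```
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

// Sketch only: keep the original name so existing flows stay valid, and use
// displayName for the user-facing label, as described in the comment above.
public class FetchSizeDescriptorSketch {

    static final PropertyDescriptor FETCH_SIZE = new PropertyDescriptor.Builder()
            .name("Batch Size")          // unchanged identifier: existing flows keep resolving it
            .displayName("Fetch Size")   // friendlier label shown in the UI
            .description("The number of elements returned from the server in one batch")
            .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
            .build();
}
```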


> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call because the effect is 
> that GetMongo looks like it has hung to a user who is pulling a very large 
> data set.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2448: NIFI-4838 Added configurable progressive commits to...

2018-03-05 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r172198724
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ---
@@ -121,34 +136,53 @@
 .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
 .build();
 
-static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
-.name("Batch Size")
-.description("The number of elements returned from the server 
in one batch")
+static final PropertyDescriptor FETCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("Fetch Size")
--- End diff --

Changing a property's name will cause it to be recognized as a "different" 
property. This will cause all existing flows containing GetMongo to become 
invalid (the "Batch Size" property will show up at the bottom containing the 
existing value, but the framework will claim the processor is invalid because 
Batch Size is not a supported property).
That's why we use displayName() for the user-friendly name, so we can 
change it at will. I realize you did not have that luxury here, but we still 
would have to keep the name("Batch Size") and add displayName("Fetch Size"). 
This will be confusing in the code (until we change it for real, perhaps in 
NiFi 2.0?) but can be accompanied by documentation.
Also I'm still a little leery of changing the existing property to "fetch" 
vs "batch", then using "batch" in a different context in the added property. 
Would like to get some input from others on this as well.


---


[jira] [Commented] (NIFI-4292) PutElasticSearchHTTP Missing error message

2018-03-05 Thread John Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386120#comment-16386120
 ] 

John Smith commented on NIFI-4292:
--

We're also having this issue. It requires delving into the Elasticsearch logs of 
the node you're trying to insert into to figure out what's going on.

> PutElasticSearchHTTP Missing error message
> --
>
> Key: NIFI-4292
> URL: https://issues.apache.org/jira/browse/NIFI-4292
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Nicholas Carenza
>Priority: Minor
>
> My logs are filled with errors from PutElasticSearchHTTP but no failure 
> reason.
> "2017-08-14 15:15:23,949 ERROR [Timer-Driven Process Thread-1] 
> o.a.n.p.e.PutElasticsearchHttp 
> PutElasticsearchHttp[id=c255d966-015c-1000-2c69-df80f890bb83] Failed to 
> insert 
> StandardFlowFileRecord[uuid=30c45a2a-2f94-4af1-8028-be2a6116c331,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1502685936537-26229028, 
> container=default, section=292], offset=732341, 
> length=10149],offset=0,name=23173788705467383,size=10149] into Elasticsearch 
> due to , transferring to failure"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4904) PutElasticSearch5 should support higher than elasticsearch 5.0.0

2018-03-05 Thread John Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386118#comment-16386118
 ] 

John Smith commented on NIFI-4904:
--

A PutElasticsearch6 processor would be really useful for us. Currently we have 
to use PutElasticsearchHttp, which is much slower.

> PutElasticSearch5 should support higher than elasticsearch 5.0.0
> 
>
> Key: NIFI-4904
> URL: https://issues.apache.org/jira/browse/NIFI-4904
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: Ubuntu
>Reporter: Dye357
>Priority: Trivial
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Currently the PutElasticSearch5 component is using the following transport 
> artifact:
> <dependency>
>   <groupId>org.elasticsearch.client</groupId>
>   <artifactId>transport</artifactId>
>   <version>${es.version}</version>
> </dependency>
> Where es.version is 5.0.1. Upgrading to the highest 5.x dependency would 
> enable this component to be compatible with later 5.x versions of 
> Elasticsearch as well as with early versions of Elasticsearch 6.x.
> Here is NiFi 1.5.0 connecting to ES 6.2.1 on port 9300:
> [2018-02-23T01:41:04,162][WARN ][o.e.t.n.Netty4Transport ] [uQSW8O8] 
> exception caught on transport layer 
> [NettyTcpChannel\{localAddress=/127.0.0.1:9300, 
> remoteAddress=/127.0.0.1:57457}], closing connection
> java.lang.IllegalStateException: Received message from unsupported version: 
> [5.0.0] minimal compatible version is: [5.6.0]
>  at 
> org.elasticsearch.transport.TcpTransport.ensureVersionCompatibility(TcpTransport.java:1430)
>  ~[elasticsearch-6.2.1.jar:6.2.1]
>  at 
> org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1377)
>  ~[elasticsearch-6.2.1.jar:6.2.1]
>  at 
> org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:64)
>  ~[transport-netty4-6.2.1.jar:6.2.1]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
>  [netty-codec-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297)
>  [netty-codec-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413)
>  [netty-codec-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
>  [netty-codec-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) 
> [netty-handler-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> 

[jira] [Commented] (NIFI-3753) ListenBeats: Compressed beats packets may cause: Error decoding Beats frame: Error decompressing frame: invalid distance too far back

2018-03-05 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386116#comment-16386116
 ] 

Joseph Witt commented on NIFI-3753:
---

[~trixpan] will you have a chance to look into this?

> ListenBeats: Compressed beats packets may cause: Error decoding Beats  frame: 
> Error decompressing  frame: invalid distance too far back
> ---
>
> Key: NIFI-3753
> URL: https://issues.apache.org/jira/browse/NIFI-3753
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Andre F de Miranda
>Priority: Critical
>
> 2017-04-28 02:03:37,153 ERROR [pool-106-thread-1] 
> o.a.nifi.processors.beats.List
> enBeats
> org.apache.nifi.processors.beats.frame.BeatsFrameException: Error decoding 
> Beats
>  frame: Error decompressing  frame: invalid distance too far back
> at 
> org.apache.nifi.processors.beats.frame.BeatsDecoder.process(BeatsDeco
> der.java:123) ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> at 
> org.apache.nifi.processors.beats.handler.BeatsSocketChannelHandler.pr
> ocessBuffer(BeatsSocketChannelHandler.java:71) 
> ~[nifi-beats-processors-1.2.0-SNA
> PSHOT.jar:1.2.0-SNAPSHOT]
> at 
> org.apache.nifi.processor.util.listen.handler.socket.StandardSocketCh
> annelHandler.run(StandardSocketChannelHandler.java:76) 
> [nifi-processor-utils-1.2
> .0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.
> java:1142) [na:1.8.0_131]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_131]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Caused by: org.apache.nifi.processors.beats.frame.BeatsFrameException: Error 
> decompressing  frame: invalid distance too far back
> at 
> org.apache.nifi.processors.beats.frame.BeatsDecoder.processPAYLOAD(BeatsDecoder.java:292)
>  ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> at 
> org.apache.nifi.processors.beats.frame.BeatsDecoder.process(BeatsDecoder.java:103)
>  ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> ... 5 common frames omitted
> Caused by: java.util.zip.ZipException: invalid distance too far back
> at 
> java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164) 
> ~[na:1.8.0_131]
> at java.io.FilterInputStream.read(FilterInputStream.java:107) 
> ~[na:1.8.0_131]
> at 
> org.apache.nifi.processors.beats.frame.BeatsDecoder.processPAYLOAD(BeatsDecoder.java:277)
>  ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> ... 6 common frames omitted



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3753) ListenBeats: Compressed beats packets may cause: Error decoding Beats frame: Error decompressing frame: invalid distance too far back

2018-03-05 Thread John Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386114#comment-16386114
 ] 

John Smith commented on NIFI-3753:
--

We are also having this issue. We had to turn off compression in Beats completely 
and set bulk max size to 0.

> ListenBeats: Compressed beats packets may cause: Error decoding Beats  frame: 
> Error decompressing  frame: invalid distance too far back
> ---
>
> Key: NIFI-3753
> URL: https://issues.apache.org/jira/browse/NIFI-3753
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Andre F de Miranda
>Priority: Critical
>
> 2017-04-28 02:03:37,153 ERROR [pool-106-thread-1] 
> o.a.nifi.processors.beats.List
> enBeats
> org.apache.nifi.processors.beats.frame.BeatsFrameException: Error decoding 
> Beats
>  frame: Error decompressing  frame: invalid distance too far back
> at 
> org.apache.nifi.processors.beats.frame.BeatsDecoder.process(BeatsDeco
> der.java:123) ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> at 
> org.apache.nifi.processors.beats.handler.BeatsSocketChannelHandler.pr
> ocessBuffer(BeatsSocketChannelHandler.java:71) 
> ~[nifi-beats-processors-1.2.0-SNA
> PSHOT.jar:1.2.0-SNAPSHOT]
> at 
> org.apache.nifi.processor.util.listen.handler.socket.StandardSocketCh
> annelHandler.run(StandardSocketChannelHandler.java:76) 
> [nifi-processor-utils-1.2
> .0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.
> java:1142) [na:1.8.0_131]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_131]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Caused by: org.apache.nifi.processors.beats.frame.BeatsFrameException: Error 
> decompressing  frame: invalid distance too far back
> at 
> org.apache.nifi.processors.beats.frame.BeatsDecoder.processPAYLOAD(BeatsDecoder.java:292)
>  ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> at 
> org.apache.nifi.processors.beats.frame.BeatsDecoder.process(BeatsDecoder.java:103)
>  ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> ... 5 common frames omitted
> Caused by: java.util.zip.ZipException: invalid distance too far back
> at 
> java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164) 
> ~[na:1.8.0_131]
> at java.io.FilterInputStream.read(FilterInputStream.java:107) 
> ~[na:1.8.0_131]
> at 
> org.apache.nifi.processors.beats.frame.BeatsDecoder.processPAYLOAD(BeatsDecoder.java:277)
>  ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
> ... 6 common frames omitted





[jira] [Commented] (NIFI-4918) JMS Connection Factory setting the dynamic Properties wrong

2018-03-05 Thread Julian Gimbel (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386108#comment-16386108
 ] 

Julian Gimbel commented on NIFI-4918:
-

https://github.com/apache/nifi/pull/2499

> JMS Connection Factory setting the dynamic Properties wrong
> ---
>
> Key: NIFI-4918
> URL: https://issues.apache.org/jira/browse/NIFI-4918
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0, 1.4.0, 1.5.0
>Reporter: Julian Gimbel
>Priority: Minor
>
> When trying to set the property setSSLTrustedCertificate on the Tibco JMS 
> connection factory, the call will sometimes fail because this method is 
> overloaded three times with different parameter types. Therefore we should 
> implement a fix that iterates over the overloaded methods and invokes the one 
> whose parameter type matches the value provided by the user.
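
A minimal sketch of that approach, with assumed names (SetterResolver and 
invokeMatchingSetter are illustrative, not the actual NiFi code): iterate the 
overloads of the named method and invoke the one whose single parameter type can 
accept the user-supplied value. Primitive parameter types would need additional 
boxing handling, which is omitted here.

import java.lang.reflect.Method;

// Illustrative sketch: pick the overload whose parameter type matches the
// supplied value instead of blindly using the first method found by name.
public class SetterResolver {

    public static void invokeMatchingSetter(final Object target, final String methodName,
                                            final Object value) throws Exception {
        for (final Method method : target.getClass().getMethods()) {
            if (method.getName().equals(methodName)
                    && method.getParameterCount() == 1
                    && method.getParameterTypes()[0].isInstance(value)) {
                method.invoke(target, value);
                return;
            }
        }
        throw new IllegalArgumentException("No overload of " + methodName
                + " accepts a value of type " + value.getClass().getName());
    }
}

A hypothetical call would then be invokeMatchingSetter(connectionFactory, 
"setSSLTrustedCertificate", trustedCertsValue), letting reflection choose the 
right overload for the supplied value.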





[GitHub] nifi issue #2448: NIFI-4838 Added configurable progressive commits to GetMon...

2018-03-05 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2448
  
@mattyb149 I think the ticket's done now.


---


[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386095#comment-16386095
 ] 

ASF GitHub Bot commented on NIFI-4838:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2448
  
@mattyb149 I think the ticket's done now.


> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> GetMongo shouldn't wait until the end of the fetch to call commit(), because to 
> a user pulling a very large data set the processor appears to have hung.
> It should also have an option to run a count query that returns the approximate 
> number of documents matching the query, and to append attributes indicating 
> where a FlowFile stands within the total result set. Ex:
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231
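
A rough sketch of the progressive-commit idea, with assumed names 
(ProgressiveFetchSketch, fetchInBatches, and the batching details are 
illustrative, not the actual GetMongo change, which would live in the 
processor's onTrigger); countDocuments is assumed to be available in the Mongo 
Java driver in use (older 3.x drivers expose count instead). Each batch becomes 
its own FlowFile carrying the progress attributes listed above, and the session 
is committed per batch so downstream processors see data before the cursor is 
exhausted.

import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.bson.Document;
import org.bson.conversions.Bson;

import java.nio.charset.StandardCharsets;

// Illustrative sketch of progressive commits with progress attributes.
public class ProgressiveFetchSketch {

    public static void fetchInBatches(final ProcessSession session,
                                      final MongoCollection<Document> collection,
                                      final Bson query,
                                      final Relationship success,
                                      final int batchSize) {
        final long estimate = collection.countDocuments(query); // approximate total for the progress attributes
        long position = 0;
        try (MongoCursor<Document> cursor = collection.find(query).iterator()) {
            final StringBuilder batch = new StringBuilder();
            int inBatch = 0;
            while (cursor.hasNext()) {
                batch.append(cursor.next().toJson()).append('\n');
                inBatch++;
                if (inBatch == batchSize || !cursor.hasNext()) {
                    FlowFile flowFile = session.create();
                    final byte[] payload = batch.toString().getBytes(StandardCharsets.UTF_8);
                    flowFile = session.write(flowFile, out -> out.write(payload));
                    flowFile = session.putAttribute(flowFile, "query.progress.point.start", String.valueOf(position));
                    flowFile = session.putAttribute(flowFile, "query.progress.point.end", String.valueOf(position + inBatch));
                    flowFile = session.putAttribute(flowFile, "query.count.estimate", String.valueOf(estimate));
                    session.transfer(flowFile, success);
                    session.commit(); // progressive commit instead of one commit at the very end
                    position += inBatch;
                    batch.setLength(0);
                    inBatch = 0;
                }
            }
        }
    }
}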


