[jira] [Commented] (NIFI-4164) Realistic Time Series Processor Simulator

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078948#comment-16078948
 ] 

ASF GitHub Bot commented on NIFI-4164:
--

GitHub user cherrera2001 opened a pull request:

https://github.com/apache/nifi/pull/1997

NIFI-4164 Adding a realistic time simulator processor to NiFi

This is the initial commit of the processor. It can be exercised using the 
bundled basicConfig.json or unitTestConfig.json files found within the 
processor's /test/Resources directory. 

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [X] Have you written or updated unit tests to verify your changes?
- [X] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [X] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [X] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [X] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hashmapinc/nifi NIFI-4164

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1997.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1997


commit e980796ed93b3053567f029f3d1ddb1b9c0ae46c
Author: Chris Herrera 
Date:   2017-07-08T04:18:00Z

Initial commit of the processor

Initial commit of the processor




> Realistic Time Series Processor Simulator
> -
>
> Key: NIFI-4164
> URL: https://issues.apache.org/jira/browse/NIFI-4164
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Chris Herrera
>Assignee: Chris Herrera
>Priority: Minor
>  Labels: features
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In order to validate several flows that deal with sensor data, it would be 
> good to have a built in time series simulator processor that generates data 
> and can send it out via a flow file.
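The kind of generator described above can be sketched as a bounded random walk. This is a hypothetical illustration of "realistic" sensor data, not the processor's actual implementation; all names and parameters are invented:

```java
import java.util.Random;

// Hypothetical sketch: a mean-reverting random walk around a base value,
// the sort of realistic-looking signal such a simulator might emit in
// flow files. Names and parameters are illustrative only.
public class SensorSimulatorSketch {
    private final Random random = new Random(42);
    private final double base;
    private final double maxStep;
    private double value;

    public SensorSimulatorSketch(double base, double maxStep) {
        this.base = base;
        this.value = base;
        this.maxStep = maxStep;
    }

    // Each call drifts the value by a small random step and gently pulls
    // it back toward the base so the series stays bounded over time.
    public double next() {
        double step = (random.nextDouble() * 2 - 1) * maxStep;
        value += step + (base - value) * 0.05; // mean reversion
        return value;
    }

    public static void main(String[] args) {
        SensorSimulatorSketch sim = new SensorSimulatorSketch(100.0, 0.5);
        for (int i = 0; i < 5; i++) {
            System.out.println(sim.next());
        }
    }
}
```

A real processor would additionally serialize each reading (e.g. as JSON) into a flow file on a schedule.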



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)



[jira] [Commented] (NIFI-1613) ConvertJSONToSQL Drops Type Information

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078932#comment-16078932
 ] 

ASF GitHub Bot commented on NIFI-1613:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1976
  
The code changes look good; I have a question in general about expected 
behavior when the database doesn't explicitly support boolean types (such as 
Oracle SQL). Let's say I have a JSON object `{"id": 1, "b": true}` and a table 
with column "id" of type INT and "b" of type NUMBER(1). Do we need to support 
this case, or if the target doesn't have a BOOLEAN type, should the onus be on 
the flow designer to change the values accordingly (such as with 
JoltTransformJSON)?

Also should we include BIT and/or other numeric types in the switch 
statement?
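One possible convention for the NUMBER(1) case raised above, sketched as a hypothetical helper (this is illustrative only, not ConvertJSONToSQL's actual logic): map JSON booleans to 0/1 when the target column type is numeric, and fall back to a string otherwise.

```java
import java.sql.Types;

// Hypothetical sketch of boolean-to-column coercion for databases without
// a native BOOLEAN type (e.g. mapping true/false into a NUMBER(1) column).
public class BooleanCoercionSketch {
    public static Object coerce(boolean value, int sqlType) {
        switch (sqlType) {
            case Types.BOOLEAN:
            case Types.BIT:        // BIT often carries boolean semantics
                return value;
            case Types.NUMERIC:
            case Types.DECIMAL:
            case Types.INTEGER:
            case Types.SMALLINT:
            case Types.TINYINT:
                return value ? 1 : 0; // NUMBER(1)-style columns
            default:
                return Boolean.toString(value); // string fallback
        }
    }

    public static void main(String[] args) {
        System.out.println(coerce(true, Types.NUMERIC));
    }
}
```

Pushing this into the processor keeps the flow designer from needing a JoltTransformJSON step, at the cost of baking one mapping convention into the switch statement.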


> ConvertJSONToSQL Drops Type Information
> ---
>
> Key: NIFI-1613
> URL: https://issues.apache.org/jira/browse/NIFI-1613
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.4.1, 0.5.1
> Environment: Ubuntu 14.04 LTS
>Reporter: Aaron Stephens
>Assignee: Toivo Adams
>  Labels: ConvertJSONToSQL, Phoenix, SQL
>
> It appears that the ConvertJSONToSQL processor is turning Boolean (and 
> possibly Integer and Float) values into Strings.  This is okay for some 
> drivers (like PostgreSQL) which can coerce a String back into a Boolean, but 
> it causes issues for others (specifically Phoenix in my case).
> {noformat}
> org.apache.phoenix.schema.ConstraintViolationException: 
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. VARCHAR cannot be coerced to BOOLEAN
> at 
> org.apache.phoenix.schema.types.PDataType.throwConstraintViolationException(PDataType.java:282)
>  ~[na:na]
> at 
> org.apache.phoenix.schema.types.PBoolean.toObject(PBoolean.java:136) ~[na:na]
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.setObject(PhoenixPreparedStatement.java:442)
>  ~[na:na]
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166)
>  ~[na:na]
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166)
>  ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.setParameter(PutSQL.java:728) 
> ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.setParameters(PutSQL.java:606) 
> ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.onTrigger(PutSQL.java:223) ~[na:na]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  ~[nifi-api-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1146)
>  ~[nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:139)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:49)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:119)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_79]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
> [na:1.7.0_79]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_79]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> Caused by: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 
> (22005): Type mismatch. VARCHAR cannot be coerced to BOOLEAN
> at 
> org.apache.phoenix.exception.SQLExceptionCode$1.newException(SQLExceptionCode.java:71)
>  ~[na:na]
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>  ~[na:na]
> ... 20 common frames omitted
> {noformat}






[jira] [Created] (NIFI-4168) Invalid HBase config file throws runtime exception

2017-07-07 Thread Bryan Bende (JIRA)
Bryan Bende created NIFI-4168:
-

 Summary: Invalid HBase config file throws runtime exception
 Key: NIFI-4168
 URL: https://issues.apache.org/jira/browse/NIFI-4168
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.3.0, 1.1.1, 1.2.0, 1.0.0
Reporter: Bryan Bende
Priority: Minor


If you enter a file that is not a real XML site file into the 
HBase_1_1_2_ClientService as a config file, it throws the following exception 
and the service is invalid:

{code}
2017-07-07 16:56:04,817 ERROR [NiFi Web Server-17] 
o.a.n.c.AbstractConfiguredComponent Failed to perform validation of 
HBase_1_1_2_ClientService[id=1ed61bd5-015d-1000-b727-9257831db2c2]
java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: 
file:///somefile.txt; lineNumber: 1; columnNumber: 1; Content is not allowed in 
prolog.
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2645)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2502)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2405)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:981)
at 
org.apache.nifi.hadoop.SecurityUtil.isSecurityEnabled(SecurityUtil.java:86)
at 
org.apache.nifi.hadoop.KerberosProperties.validatePrincipalAndKeytab(KerberosProperties.java:121)
at 
org.apache.nifi.hbase.HBase_1_1_2_ClientService.customValidate(HBase_1_1_2_ClientService.java:173)
at 
org.apache.nifi.components.AbstractConfigurableComponent.validate(AbstractConfigurableComponent.java:126)
{code}

If you exit the controller services screen and come back in, the service is now 
valid.
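A defensive pre-check along these lines could surface non-XML config files as a clean validation message instead of the RuntimeException above. This is a hypothetical sketch, not the client service's actual validation code:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.File;

// Hypothetical sketch: verify a file parses as XML before handing it to
// org.apache.hadoop.conf.Configuration, so an invalid site file yields a
// validation error rather than a SAXParseException wrapped in a
// RuntimeException ("Content is not allowed in prolog").
public class ConfigFileCheckSketch {
    public static boolean isWellFormedXml(File file) {
        try {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            // Harden against XXE while we're at it.
            factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
            factory.newDocumentBuilder().parse(file);
            return true;
        } catch (Exception e) {
            return false; // not well-formed XML (or unreadable)
        }
    }

    public static void main(String[] args) {
        // Path is illustrative only.
        System.out.println(isWellFormedXml(new File("conf/hbase-site.xml")));
    }
}
```

customValidate could call a check like this per configured file and report a ValidationResult instead of letting Hadoop's Configuration throw.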





[jira] [Resolved] (NIFI-4151) Slow response times when requesting Process Group Status

2017-07-07 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman resolved NIFI-4151.
---
Resolution: Fixed

> Slow response times when requesting Process Group Status
> 
>
> Key: NIFI-4151
> URL: https://issues.apache.org/jira/browse/NIFI-4151
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have a flow with > 1,000 Process Groups and 2500 Processors. A few thousand 
> connections and input/output ports as well. When I refresh stats it is taking 
> 3-4 seconds to pull back the status. And when I go to the Summary table, it's 
> taking about 8 seconds.





[jira] [Commented] (NIFI-4151) Slow response times when requesting Process Group Status

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078655#comment-16078655
 ] 

ASF GitHub Bot commented on NIFI-4151:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1995


> Slow response times when requesting Process Group Status
> 
>
> Key: NIFI-4151
> URL: https://issues.apache.org/jira/browse/NIFI-4151
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have a flow with > 1,000 Process Groups and 2500 Processors. A few thousand 
> connections and input/output ports as well. When I refresh stats it is taking 
> 3-4 seconds to pull back the status. And when I go to the Summary table, it's 
> taking about 8 seconds.





[jira] [Commented] (NIFI-4151) Slow response times when requesting Process Group Status

2017-07-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078652#comment-16078652
 ] 

ASF subversion and git services commented on NIFI-4151:
---

Commit 9e296830ab813bbdecf65b57ba723ae822abeff5 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=9e29683 ]

NIFI-4151: Ensure that we properly call invalidateValidationContext() when 
properties change; ensure that in the controller service provider we don't 
replace a controller service with a new node if the ID's match, as we won't be 
able to actually add the new one to the flow. This closes #1995
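The cache-and-invalidate pattern the commit message refers to can be sketched generically. Everything below is hypothetical (invented names, a String standing in for the real validation context), not NiFi's actual classes:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: an expensive-to-build validation context is cached
// so repeated status requests are fast, and the cache is dropped whenever
// a property changes. Forgetting the invalidation call serves stale results;
// rebuilding on every request causes the slow status responses described.
public class ValidationCacheSketch {
    private final AtomicReference<String> cachedContext = new AtomicReference<>();
    private int buildCount = 0;

    private String buildContext() {
        buildCount++;                 // stands in for expensive work
        return "context-" + buildCount;
    }

    public synchronized String getValidationContext() {
        String ctx = cachedContext.get();
        if (ctx == null) {
            ctx = buildContext();
            cachedContext.set(ctx);
        }
        return ctx;
    }

    // Must be called on every property change.
    public void invalidateValidationContext() {
        cachedContext.set(null);
    }

    public int getBuildCount() {
        return buildCount;
    }

    public static void main(String[] args) {
        ValidationCacheSketch v = new ValidationCacheSketch();
        System.out.println(v.getValidationContext());
        v.invalidateValidationContext();
        System.out.println(v.getValidationContext());
    }
}
```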


> Slow response times when requesting Process Group Status
> 
>
> Key: NIFI-4151
> URL: https://issues.apache.org/jira/browse/NIFI-4151
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have a flow with > 1,000 Process Groups and 2500 Processors. A few thousand 
> connections and input/output ports as well. When I refresh stats it is taking 
> 3-4 seconds to pull back the status. And when I go to the Summary table, it's 
> taking about 8 seconds.





[jira] [Commented] (NIFI-4151) Slow response times when requesting Process Group Status

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078656#comment-16078656
 ] 

ASF GitHub Bot commented on NIFI-4151:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/1995
  
Thanks @markap14. This has been merged to master.


> Slow response times when requesting Process Group Status
> 
>
> Key: NIFI-4151
> URL: https://issues.apache.org/jira/browse/NIFI-4151
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have a flow with > 1,000 Process Groups and 2500 Processors. A few thousand 
> connections and input/output ports as well. When I refresh stats it is taking 
> 3-4 seconds to pull back the status. And when I go to the Summary table, it's 
> taking about 8 seconds.






[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread benqiu2016
Github user benqiu2016 commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126240211
  
--- Diff: libminifi/src/RemoteProcessorGroupPort.cpp ---
@@ -20,6 +20,9 @@
 
 #include "../include/RemoteProcessorGroupPort.h"
 
+#include 
+#include 
+#include 
--- End diff --

For now, I think we have coupled curl into our code, as other components 
like InvokeHttp do. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread benqiu2016
Github user benqiu2016 commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126239506
  
--- Diff: libminifi/include/utils/HTTPUtils.h ---
@@ -88,6 +90,40 @@ struct HTTPRequestResponse {
 
 };
 
+static void parse_url(std::string &url, std::string &host, int &port, 
std::string &protocol) {
--- End diff --

For now, we can document the URL format in the README.md, if that is OK 
with you. As for access control, I looked at the doc, and it looks like we 
need to use a token, etc., for a secure cluster. I will need to ask around.
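For comparison, the protocol/host/port split being discussed is what standard URL parsers already provide. A Java sketch for illustration only (the MiNiFi C++ code in the diff uses its own hand-rolled `parse_url`):

```java
import java.net.URI;

// Illustrative only: extracting protocol, host, and port from a
// site-to-site URL with the standard library, mirroring what a
// hand-rolled parse_url has to do.
public class UrlParseSketch {
    public static String[] split(String url) {
        URI uri = URI.create(url);
        int port = uri.getPort();
        if (port == -1) { // no explicit port: fall back to scheme default
            port = "https".equals(uri.getScheme()) ? 443 : 80;
        }
        return new String[] { uri.getScheme(), uri.getHost(), Integer.toString(port) };
    }

    public static void main(String[] args) {
        // URL is illustrative only.
        for (String part : split("http://nifi.example.com:8080/nifi")) {
            System.out.println(part);
        }
    }
}
```

Documenting the accepted URL format in the README, as suggested, pins down exactly which of these shapes (explicit port, default port, path suffix) the parser must handle.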




[jira] [Reopened] (NIFI-1586) embedded zookeeper disk utilization grows unbounded

2017-07-07 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck reopened NIFI-1586:
---

The fix needs to be updated to start the DatadirCleanupManager when ZooKeeper 
is started in standalone mode.

> embedded zookeeper disk utilization grows unbounded
> ---
>
> Key: NIFI-1586
> URL: https://issues.apache.org/jira/browse/NIFI-1586
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0
> Environment: latest 0.5.1 release
>Reporter: Matthew Clarke
>Assignee: Jeff Storck
> Fix For: 1.4.0
>
> Attachments: ZK_Autopurge_Test.xml
>
>
> Observed that embedded NiFi ZooKeeper disk utilization will grow unbounded.  
> ZooKeeper will occasionally create snapshots, but at no time will it ever purge 
> any of the snapshots it creates. This behavior is documented here:
> https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_administering
> It is the operator's responsibility to purge old snapshot files. NiFi needs to 
> provide a configuration that will automate this purge.
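ZooKeeper already exposes knobs for this; a configuration along these lines (retention values illustrative) enables the snapshot purge that the DatadirCleanupManager performs:

```properties
# ZooKeeper autopurge settings (values illustrative).
# Keep the 3 most recent snapshots and purge the rest every hour.
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
```

The reopen note above is about making the embedded ZooKeeper honor these settings in standalone mode as well as in quorum mode.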





[jira] [Commented] (NIFI-4167) Occasional deadlock when trying to clean up old Content Claims

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078607#comment-16078607
 ] 

ASF GitHub Bot commented on NIFI-4167:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/1996

NIFI-4167: StandardResourceClaimManager should not synchronize on a 
ResourceClaim in order to determine the claim count

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4167

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1996.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1996


commit 40768b34336a5d08904a7a1c42f0b6e3ac1d36b1
Author: Mark Payne 
Date:   2017-07-07T20:07:15Z

NIFI-4167: StandardResourceClaimManager should not synchronize on a 
ResourceClaim in order to determine the claim count




> Occasional deadlock when trying to clean up old Content Claims
> --
>
> Key: NIFI-4167
> URL: https://issues.apache.org/jira/browse/NIFI-4167
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.4.0
>
>
> Occasionally we'll see the Content Repository stop cleaning up old claims. A 
> thread dump shows:
> {code}
> "FileSystemRepository Workers Thread-3" Id=97 BLOCKED on 
> org.apache.nifi.controller.repository.claim.StandardResourceClaim@6b5aa020 
> at 
> org.apache.nifi.controller.repository.claim.StandardResourceClaimManager.getClaimantCount(StandardResourceClaimManager.java:73)
>  
> at 
> org.apache.nifi.controller.repository.claim.StandardResourceClaim.isInUse(StandardResourceClaim.java:120)
>  
> at 
> org.apache.nifi.controller.repository.FileSystemRepository.remove(FileSystemRepository.java:612)
>  
> at 
> org.apache.nifi.controller.repository.FileSystemRepository.access$1200(FileSystemRepository.java:83)
>  
> at 
> org.apache.nifi.controller.repository.FileSystemRepository$ArchiveOrDestroyDestructableClaims.run(FileSystemRepository.java:1442)
>  
> {code}
> While another thread shows that it's being marked as Destructable:
> {code}
> "pool-10-thread-1" Id=132 TIMED_WAITING on 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@38063d7e
>  
> at sun.misc.Unsafe.park(Native Method) 
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  
> at 
> java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:385) 
> at 
> org.apache.nifi.controller.repository.claim.StandardResourceClaimManager.markDestructable(StandardResourceClaimManager.java:152)
>  
> - waiting on 
> org.apache.nifi.controller.repository.claim.

[jira] [Updated] (NIFI-4167) Occasional deadlock when trying to clean up old Content Claims

2017-07-07 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-4167:
-
Status: Patch Available  (was: Open)

> Occasional deadlock when trying to clean up old Content Claims
> --
>
> Key: NIFI-4167
> URL: https://issues.apache.org/jira/browse/NIFI-4167
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.4.0
>
>
> Occasionally we'll see the Content Repository stop cleaning up old claims. A 
> thread dump shows:
> {code}
> "FileSystemRepository Workers Thread-3" Id=97 BLOCKED on 
> org.apache.nifi.controller.repository.claim.StandardResourceClaim@6b5aa020 
> at 
> org.apache.nifi.controller.repository.claim.StandardResourceClaimManager.getClaimantCount(StandardResourceClaimManager.java:73)
>  
> at 
> org.apache.nifi.controller.repository.claim.StandardResourceClaim.isInUse(StandardResourceClaim.java:120)
>  
> at 
> org.apache.nifi.controller.repository.FileSystemRepository.remove(FileSystemRepository.java:612)
>  
> at 
> org.apache.nifi.controller.repository.FileSystemRepository.access$1200(FileSystemRepository.java:83)
>  
> at 
> org.apache.nifi.controller.repository.FileSystemRepository$ArchiveOrDestroyDestructableClaims.run(FileSystemRepository.java:1442)
>  
> {code}
> While another thread shows that it's being marked as Destructable:
> {code}
> "pool-10-thread-1" Id=132 TIMED_WAITING on 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@38063d7e
>  
> at sun.misc.Unsafe.park(Native Method) 
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  
> at 
> java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:385) 
> at 
> org.apache.nifi.controller.repository.claim.StandardResourceClaimManager.markDestructable(StandardResourceClaimManager.java:152)
>  
> - waiting on 
> org.apache.nifi.controller.repository.claim.StandardResourceClaim@6b5aa020 
> at 
> org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.markDestructable(WriteAheadFlowFileRepository.java:186)
>  
> at 
> org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.onGlobalSync(WriteAheadFlowFileRepository.java:287)
>  
> at 
> org.wali.MinimalLockingWriteAheadLog.checkpoint(MinimalLockingWriteAheadLog.java:565)
>  
> - waiting on org.wali.MinimalLockingWriteAheadLog@65ab87e4 
> at 
> org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.checkpoint(WriteAheadFlowFileRepository.java:416)
>  
> {code}
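The fix direction named in the PR title, reading the claimant count without synchronizing on the ResourceClaim itself, can be sketched with atomic counters. This is hypothetical code illustrating the locking idea, not the actual StandardResourceClaimManager:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: track claimant counts in a concurrent map of
// atomic integers so a reader (e.g. a cleanup thread calling isInUse)
// never blocks on a monitor that another thread holds while parked on a
// bounded queue offer, which is the deadlock shown in the thread dumps.
public class ClaimCountSketch {
    private final ConcurrentMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    public int incrementClaimantCount(String claimId) {
        return counts.computeIfAbsent(claimId, k -> new AtomicInteger()).incrementAndGet();
    }

    public int decrementClaimantCount(String claimId) {
        AtomicInteger count = counts.get(claimId);
        return count == null ? 0 : count.decrementAndGet();
    }

    // Lock-free read: safe to call from cleanup threads at any time.
    public int getClaimantCount(String claimId) {
        AtomicInteger count = counts.get(claimId);
        return count == null ? 0 : count.get();
    }

    public static void main(String[] args) {
        ClaimCountSketch m = new ClaimCountSketch();
        m.incrementClaimantCount("claim-1");
        System.out.println(m.getClaimantCount("claim-1"));
    }
}
```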






[jira] [Created] (NIFI-4167) Occasional deadlock when trying to clean up old Content Claims

2017-07-07 Thread Mark Payne (JIRA)
Mark Payne created NIFI-4167:


 Summary: Occasional deadlock when trying to clean up old Content 
Claims
 Key: NIFI-4167
 URL: https://issues.apache.org/jira/browse/NIFI-4167
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
Priority: Critical
 Fix For: 1.4.0


Occasionally we'll see the Content Repository stop cleaning up old claims. A 
thread dump shows:

{code}
"FileSystemRepository Workers Thread-3" Id=97 BLOCKED on 
org.apache.nifi.controller.repository.claim.StandardResourceClaim@6b5aa020 
at 
org.apache.nifi.controller.repository.claim.StandardResourceClaimManager.getClaimantCount(StandardResourceClaimManager.java:73)
 
at 
org.apache.nifi.controller.repository.claim.StandardResourceClaim.isInUse(StandardResourceClaim.java:120)
 
at 
org.apache.nifi.controller.repository.FileSystemRepository.remove(FileSystemRepository.java:612)
 
at 
org.apache.nifi.controller.repository.FileSystemRepository.access$1200(FileSystemRepository.java:83)
 
at 
org.apache.nifi.controller.repository.FileSystemRepository$ArchiveOrDestroyDestructableClaims.run(FileSystemRepository.java:1442)
 
{code}

Meanwhile, another thread is holding that same claim's monitor and is blocked on a bounded queue while marking it as Destructable:

{code}
"pool-10-thread-1" Id=132 TIMED_WAITING on 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@38063d7e 
at sun.misc.Unsafe.park(Native Method) 
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 
at java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:385) 
at 
org.apache.nifi.controller.repository.claim.StandardResourceClaimManager.markDestructable(StandardResourceClaimManager.java:152)
 
- waiting on 
org.apache.nifi.controller.repository.claim.StandardResourceClaim@6b5aa020 
at 
org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.markDestructable(WriteAheadFlowFileRepository.java:186)
 
at 
org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.onGlobalSync(WriteAheadFlowFileRepository.java:287)
 
at 
org.wali.MinimalLockingWriteAheadLog.checkpoint(MinimalLockingWriteAheadLog.java:565)
 
- waiting on org.wali.MinimalLockingWriteAheadLog@65ab87e4 
at 
org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.checkpoint(WriteAheadFlowFileRepository.java:416)
 
{code}
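The deadlock above arises because getClaimantCount() synchronizes on the ResourceClaim's monitor, which another thread holds while parked on a bounded queue. The fix implied by the PR title is to track claim counts without synchronizing on the claim itself. Below is a minimal, self-contained sketch of that lock-free pattern; ClaimCounter and its String keys are illustrative stand-ins, not the actual StandardResourceClaimManager API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative lock-free claim counter: reading a count never takes the
// claim's monitor, so a reader cannot block behind a writer that is
// parked on an unrelated queue (the pattern seen in the stack traces).
class ClaimCounter {
    private final ConcurrentHashMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    // Atomically create-or-increment the count for a claim.
    int increment(String claimId) {
        return counts.computeIfAbsent(claimId, k -> new AtomicInteger()).incrementAndGet();
    }

    // Decrement; a claim that was never incremented stays at zero.
    int decrement(String claimId) {
        AtomicInteger c = counts.get(claimId);
        return c == null ? 0 : c.decrementAndGet();
    }

    // Lock-free read: no synchronized block, hence no chance of the
    // BLOCKED state shown in the FileSystemRepository worker thread.
    int getClaimantCount(String claimId) {
        AtomicInteger c = counts.get(claimId);
        return c == null ? 0 : c.get();
    }
}
```

With this shape, the cleanup worker's isInUse() check only performs an atomic read and can never deadlock against markDestructable().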





[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078593#comment-16078593
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126219285
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -0,0 +1,309 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.hbase.put.PutColumn;
+import org.apache.nifi.hbase.put.PutFlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hadoop", "hbase", "put", "record"})
+@CapabilityDescription("Adds rows to HBase based on the contents of a 
flowfile using a configured record reader.")
+public class PutHBaseRecord extends AbstractPutHBase {
+
+protected static final PropertyDescriptor ROW_FIELD_NAME = new 
PropertyDescriptor.Builder()
+.name("Row Identifier Field Name")
+.description("Specifies the name of a record field whose value 
should be used as the row id for the given record.")
+.expressionLanguageSupported(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+protected static final String FAIL_VALUE = "Fail";
+protected static final String WARN_VALUE = "Warn";
+protected static final String IGNORE_VALUE = "Ignore";
+protected static final String TEXT_VALUE = "Text";
+
+protected static final AllowableValue COMPLEX_FIELD_FAIL = new 
AllowableValue(FAIL_VALUE, FAIL_VALUE, "Route entire FlowFile to failure if any 
elements contain complex values.");
+protected static final AllowableValue COMPLEX_FIELD_WARN = new 
AllowableValue(WARN_VALUE, WARN_VALUE, "Provide a warning and do not include 
field in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_IGNORE = new 
AllowableValue(IGNORE_VALUE, IGNORE_VALUE, "Silently ignore and do not include 
in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_TEXT = new 
AllowableValue(TEXT_VALUE, TEXT_VALUE, "Use the string representation of the 
complex field as the value of the given column.");
+
+static final PropertyDescriptor RECORD_READER_FACTORY = new 
PropertyDescriptor.Builder()
+.name("record-reader")
+.displayName("Record Reader")
+.description("Specifies the Controller Service to use for 
parsing incoming data and determining the data's schema")
+.identifiesControllerSe

[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078590#comment-16078590
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126214244
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---

[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078591#comment-16078591
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126216880
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---

[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078592#comment-16078592
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126218471
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---

[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078595#comment-16078595
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126216557
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---

[GitHub] nifi pull request #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecor...

2017-07-07 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126217156
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
+public class PutHBaseRecord extends AbstractPutHBase {
--- End diff --

We should add @ReadsAttribute and @WritesAttribute annotations to document 
when "retry.index" will be written and when/how it will be read and used.
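The suggestion above can be sketched as follows. This is a self-contained illustration: the stand-in annotation types below mirror the shape of NiFi's real @ReadsAttribute/@WritesAttribute annotations, and the attribute descriptions are hypothetical, not taken from the PR:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in annotation types mirroring NiFi's documentation annotations
// (defined locally here only so the sketch compiles on its own).
@Retention(RetentionPolicy.RUNTIME)
@interface ReadsAttribute { String attribute(); String description(); }

@Retention(RetentionPolicy.RUNTIME)
@interface WritesAttribute { String attribute(); String description(); }

// Hypothetical documentation of the "retry.index" attribute on the
// processor class, as the review suggests; description text is illustrative.
@ReadsAttribute(attribute = "retry.index",
        description = "Read on a retried FlowFile to resume from the last failed record.")
@WritesAttribute(attribute = "retry.index",
        description = "Written when a batch partially fails, recording where to resume.")
class PutHBaseRecordSketch { }
```

Declaring the behavior this way surfaces it in the generated processor documentation rather than leaving it implicit in the onTrigger logic.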




[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078597#comment-16078597
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126231150
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -0,0 +1,309 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.hbase.put.PutColumn;
+import org.apache.nifi.hbase.put.PutFlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hadoop", "hbase", "put", "record"})
+@CapabilityDescription("Adds rows to HBase based on the contents of a flowfile using a configured record reader.")
+public class PutHBaseRecord extends AbstractPutHBase {
+
+protected static final PropertyDescriptor ROW_FIELD_NAME = new PropertyDescriptor.Builder()
+.name("Row Identifier Field Name")
+.description("Specifies the name of a record field whose value should be used as the row id for the given record.")
+.expressionLanguageSupported(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+protected static final String FAIL_VALUE = "Fail";
+protected static final String WARN_VALUE = "Warn";
+protected static final String IGNORE_VALUE = "Ignore";
+protected static final String TEXT_VALUE = "Text";
+
+protected static final AllowableValue COMPLEX_FIELD_FAIL = new AllowableValue(FAIL_VALUE, FAIL_VALUE, "Route entire FlowFile to failure if any elements contain complex values.");
+protected static final AllowableValue COMPLEX_FIELD_WARN = new AllowableValue(WARN_VALUE, WARN_VALUE, "Provide a warning and do not include field in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_IGNORE = new AllowableValue(IGNORE_VALUE, IGNORE_VALUE, "Silently ignore and do not include in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_TEXT = new AllowableValue(TEXT_VALUE, TEXT_VALUE, "Use the string representation of the complex field as the value of the given column.");
+
+static final PropertyDescriptor RECORD_READER_FACTORY = new PropertyDescriptor.Builder()
+.name("record-reader")
+.displayName("Record Reader")
+.description("Specifies the Controller Service to use for parsing incoming data and determining the data's schema")
+.identifiesControllerSe

[GitHub] nifi pull request #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecor...

2017-07-07 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126214244
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
[snip: duplicate quoted diff of PutHBaseRecord.java, truncated in the archive]

[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078596#comment-16078596
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126217156
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
[snip: quoted license header and imports, identical to the excerpt above]
+public class PutHBaseRecord extends AbstractPutHBase {
--- End diff --

We should add @ReadsAttribute and @WritesAttribute annotations to document 
when "retry.index" will be written and when/how it will be read and used.
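For illustration only, here is a self-contained sketch of what the suggested annotations could look like. The stand-in @interface declarations exist solely so the snippet compiles without NiFi on the classpath; in the real processor they would be the existing org.apache.nifi.annotation.behavior.ReadsAttribute and WritesAttribute types, and the "retry.index" descriptions are guesses at wording:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class RetryIndexAnnotationsDemo {

    // Stand-ins for org.apache.nifi.annotation.behavior.ReadsAttribute and
    // WritesAttribute, declared inline only so this sketch is self-contained.
    @Retention(RetentionPolicy.RUNTIME)
    public @interface ReadsAttribute {
        String attribute();
        String description();
    }

    @Retention(RetentionPolicy.RUNTIME)
    public @interface WritesAttribute {
        String attribute();
        String description();
    }

    // Hypothetical wording for documenting "retry.index" on the processor.
    @ReadsAttribute(attribute = "retry.index",
            description = "On a retried FlowFile, the index of the first record not yet stored in HBase.")
    @WritesAttribute(attribute = "retry.index",
            description = "Written on partial failure so a retry can skip the records already stored.")
    public static class PutHBaseRecordSketch {
    }

    public static void main(String[] args) {
        // Read the annotation back at runtime to show it is present.
        final ReadsAttribute reads =
                PutHBaseRecordSketch.class.getAnnotation(ReadsAttribute.class);
        System.out.println(reads.attribute());  // retry.index
    }
}
```

These class-level annotations are what NiFi's documentation generator renders into the processor usage page, which is why the reviewer asks for them here.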


> Create EvaluateRecordPath processor
> ---
>
> Key: NIFI-4024
> URL: https://issues.apache.org/jira/browse/NIFI-4024
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Steve Champagne
>Priority: Minor
>
> With the new RecordPath DSL, it would be nice if there was a processor that 
> could pull fields into attributes of the flowfile based on a RecordPath. This 
> would be similar to the EvaluateJsonPath processor that currently exists, 
> except it could be used to pull fields from arbitrary record formats. My 
> current use case for it would be pulling fields out of Avro records while 
> skipping the steps of having to convert Avro to JSON, evaluate JsonPath, and 
> then converting back to Avro. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078594#comment-16078594
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126223383
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
[snip: quoted license header and imports, identical to the excerpt above]
+protected static final PropertyDescriptor ROW_FIELD_NAME = new PropertyDescriptor.Builder()
+.name("Row Identifier Field Name")
--- End diff --

I'm ok with leaving this as is, but just wanted to mention that we could
make this "Row Id Record Path", which would take a record path that would be
evaluated to obtain the row id value. This way it can get a row id from
something more complex than a top-level field.

I think the current functionality supports most use cases though.
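To make the idea concrete, here is a minimal, NiFi-free sketch of evaluating a slash-delimited path such as "/user/id" against a nested record. Plain Maps stand in for NiFi Records, and the path syntax is a deliberate simplification of the real RecordPath DSL:

```java
import java.util.Map;

public class RowIdPathDemo {

    // Resolve a slash-delimited path such as "/user/id" against a nested
    // Map-based record. This only illustrates the "Row Id Record Path" idea;
    // NiFi's actual RecordPath DSL supports much more (predicates, arrays, etc.).
    public static Object evaluate(Map<String, Object> record, String path) {
        Object current = record;
        for (String segment : path.substring(1).split("/")) {
            if (!(current instanceof Map)) {
                return null; // the path descends past a scalar value
            }
            current = ((Map<?, ?>) current).get(segment);
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> record = Map.of(
                "name", "alice",
                "user", Map.of("id", "row-42"));
        // A top-level field works, but so does a nested one:
        System.out.println(evaluate(record, "/name"));    // alice
        System.out.println(evaluate(record, "/user/id")); // row-42
    }
}
```

A real implementation would evaluate the configured path once per record and route the record to failure when the result is null; that error handling is omitted here.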




[GitHub] nifi pull request #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecor...

2017-07-07 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126216880
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
[snip: duplicate quoted diff of PutHBaseRecord.java, truncated in the archive]

[GitHub] nifi pull request #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecor...

2017-07-07 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126231150
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
[snip: duplicate quoted diff of PutHBaseRecord.java, truncated in the archive]

[GitHub] nifi pull request #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecor...

2017-07-07 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126223383
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
[snip: quoted license header and imports, identical to the excerpt above]
+protected static final PropertyDescriptor ROW_FIELD_NAME = new PropertyDescriptor.Builder()
+.name("Row Identifier Field Name")
--- End diff --

I'm ok with leaving this as is, but just wanted to mention that we could
make this "Row Id Record Path", which would take a record path that would be
evaluated to obtain the row id value. This way it can get a row id from
something more complex than a top-level field.

I think the current functionality supports most use cases though.




[GitHub] nifi pull request #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecor...

2017-07-07 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126218471
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -0,0 +1,309 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.hbase.put.PutColumn;
+import org.apache.nifi.hbase.put.PutFlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hadoop", "hbase", "put", "record"})
+@CapabilityDescription("Adds rows to HBase based on the contents of a flowfile using a configured record reader.")
+public class PutHBaseRecord extends AbstractPutHBase {
+
+protected static final PropertyDescriptor ROW_FIELD_NAME = new PropertyDescriptor.Builder()
+.name("Row Identifier Field Name")
+.description("Specifies the name of a record field whose value should be used as the row id for the given record.")
+.expressionLanguageSupported(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+protected static final String FAIL_VALUE = "Fail";
+protected static final String WARN_VALUE = "Warn";
+protected static final String IGNORE_VALUE = "Ignore";
+protected static final String TEXT_VALUE = "Text";
+
+protected static final AllowableValue COMPLEX_FIELD_FAIL = new AllowableValue(FAIL_VALUE, FAIL_VALUE, "Route entire FlowFile to failure if any elements contain complex values.");
+protected static final AllowableValue COMPLEX_FIELD_WARN = new AllowableValue(WARN_VALUE, WARN_VALUE, "Provide a warning and do not include field in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_IGNORE = new AllowableValue(IGNORE_VALUE, IGNORE_VALUE, "Silently ignore and do not include in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_TEXT = new AllowableValue(TEXT_VALUE, TEXT_VALUE, "Use the string representation of the complex field as the value of the given column.");
+
+static final PropertyDescriptor RECORD_READER_FACTORY = new PropertyDescriptor.Builder()
+.name("record-reader")
+.displayName("Record Reader")
+.description("Specifies the Controller Service to use for parsing incoming data and determining the data's schema")
+.identifiesControllerService(RecordReaderFactory.class)
+.required(true)
+.build();
+
+protected static final PropertyDescriptor COMPLEX_FIELD_STRATEGY = new PropertyDescriptor.Builder()
+.name("Complex Field Strate
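The four `COMPLEX_FIELD_*` allowable values above describe what happens when a record field holds a non-scalar value. A minimal sketch of how such a strategy could be applied (hypothetical helper, not code from this PR):

```java
import java.util.Optional;

// Hypothetical sketch: applying the Fail/Warn/Ignore/Text strategies described
// by the AllowableValue constants when a record field holds a complex value.
public class ComplexFieldStrategySketch {

    // Returns the value to store for the column, or Optional.empty() when the
    // field is skipped entirely.
    static Optional<String> applyStrategy(String strategy, Object complexValue) {
        switch (strategy) {
            case "Fail":
                // Route the entire FlowFile to failure.
                throw new IllegalStateException("Complex value encountered: " + complexValue);
            case "Warn":
                // Warn and do not include the field in the row.
                System.err.println("WARN: skipping complex field " + complexValue);
                return Optional.empty();
            case "Ignore":
                // Silently skip the field.
                return Optional.empty();
            case "Text":
                // Use the string representation as the column value.
                return Optional.of(String.valueOf(complexValue));
            default:
                throw new IllegalArgumentException("Unknown strategy: " + strategy);
        }
    }

    public static void main(String[] args) {
        System.out.println(applyStrategy("Text", "{a=1}").isPresent());   // true
        System.out.println(applyStrategy("Ignore", "{a=1}").isPresent()); // false
    }
}
```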

[GitHub] nifi pull request #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecor...

2017-07-07 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126216557
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -0,0 +1,309 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.hbase.put.PutColumn;
+import org.apache.nifi.hbase.put.PutFlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hadoop", "hbase", "put", "record"})
+@CapabilityDescription("Adds rows to HBase based on the contents of a flowfile using a configured record reader.")
+public class PutHBaseRecord extends AbstractPutHBase {
+
+protected static final PropertyDescriptor ROW_FIELD_NAME = new PropertyDescriptor.Builder()
+.name("Row Identifier Field Name")
+.description("Specifies the name of a record field whose value should be used as the row id for the given record.")
+.expressionLanguageSupported(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+protected static final String FAIL_VALUE = "Fail";
+protected static final String WARN_VALUE = "Warn";
+protected static final String IGNORE_VALUE = "Ignore";
+protected static final String TEXT_VALUE = "Text";
+
+protected static final AllowableValue COMPLEX_FIELD_FAIL = new AllowableValue(FAIL_VALUE, FAIL_VALUE, "Route entire FlowFile to failure if any elements contain complex values.");
+protected static final AllowableValue COMPLEX_FIELD_WARN = new AllowableValue(WARN_VALUE, WARN_VALUE, "Provide a warning and do not include field in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_IGNORE = new AllowableValue(IGNORE_VALUE, IGNORE_VALUE, "Silently ignore and do not include in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_TEXT = new AllowableValue(TEXT_VALUE, TEXT_VALUE, "Use the string representation of the complex field as the value of the given column.");
+
+static final PropertyDescriptor RECORD_READER_FACTORY = new PropertyDescriptor.Builder()
+.name("record-reader")
+.displayName("Record Reader")
+.description("Specifies the Controller Service to use for parsing incoming data and determining the data's schema")
+.identifiesControllerService(RecordReaderFactory.class)
+.required(true)
+.build();
+
+protected static final PropertyDescriptor COMPLEX_FIELD_STRATEGY = new PropertyDescriptor.Builder()
+.name("Complex Field Strate

[GitHub] nifi pull request #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecor...

2017-07-07 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r126219285
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---

[jira] [Commented] (NIFI-4151) Slow response times when requesting Process Group Status

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078550#comment-16078550
 ] 

ASF GitHub Bot commented on NIFI-4151:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/1995

NIFI-4151: Ensure that we properly call invalidateValidationContext()…

… when properties change; ensure that in the controller service provider we 
don't replace a controller service with a new node if the ID's match, as we 
won't be able to actually add the new one to the flow.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI- where  is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4151-Fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1995.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1995


commit b3b6d547290a1e51cc1cc0a0302782136a996a94
Author: Mark Payne 
Date:   2017-07-07T19:18:51Z

NIFI-4151: Ensure that we properly call invalidateValidationContext() when 
properties change; ensure that in the controller service provider we don't 
replace a controller service with a new node if the ID's match, as we won't be 
able to actually add the new one to the flow.




> Slow response times when requesting Process Group Status
> 
>
> Key: NIFI-4151
> URL: https://issues.apache.org/jira/browse/NIFI-4151
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have a flow with > 1,000 Process Groups and 2500 Processors. A few thousand 
> connections and input/output ports as well. When I refresh stats it is taking 
> 3-4 seconds to pull back the status. And when I go to the Summary table, it's 
> taking about 8 seconds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1995: NIFI-4151: Ensure that we properly call invalidateV...

2017-07-07 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/1995

NIFI-4151: Ensure that we properly call invalidateValidationContext()…

… when properties change; ensure that in the controller service provider 
we don't replace a controller service with a new node if the ID's match, as we 
won't be able to actually add the new one to the flow.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI- where  is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4151-Fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1995.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1995


commit b3b6d547290a1e51cc1cc0a0302782136a996a94
Author: Mark Payne 
Date:   2017-07-07T19:18:51Z

NIFI-4151: Ensure that we properly call invalidateValidationContext() when 
properties change; ensure that in the controller service provider we don't 
replace a controller service with a new node if the ID's match, as we won't be 
able to actually add the new one to the flow.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4166) Create toolkit module to generate and build Swagger API library for NiFi

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078549#comment-16078549
 ] 

ASF GitHub Bot commented on NIFI-4166:
--

GitHub user jskora opened a pull request:

https://github.com/apache/nifi/pull/1994

NIFI-4166 - Create toolkit module to generate and build Swagger API library 
for NiFi REST API

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI- where  is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jskora/nifi NIFI-4166

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1994.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1994


commit dd67c05e353db6fa8655121cfd86f65cd9475a63
Author: Joe Skora 
Date:   2017-05-31T04:08:19Z

Create nifi-toolkit-api.




> Create toolkit module to generate and build Swagger API library for NiFi
> 
>
> Key: NIFI-4166
> URL: https://issues.apache.org/jira/browse/NIFI-4166
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Reporter: Joe Skora
>Assignee: Joe Skora
>Priority: Minor
>
> Create a new toolkit module to generate the Swagger API library based on the 
> current REST API annotations in the NiFi source by way of the Swagger Codegen 
> Maven Plugin.  This should make it easier to access the REST API from 
> Java code or Groovy scripts.
> Swagger Codegen supports other languages, so this could be expanded for 
> additional API client types.





[GitHub] nifi pull request #1994: NIFI-4166 - Create toolkit module to generate and b...

2017-07-07 Thread jskora
GitHub user jskora opened a pull request:

https://github.com/apache/nifi/pull/1994

NIFI-4166 - Create toolkit module to generate and build Swagger API library 
for NiFi REST API

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI- where  is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jskora/nifi NIFI-4166

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1994.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1994


commit dd67c05e353db6fa8655121cfd86f65cd9475a63
Author: Joe Skora 
Date:   2017-05-31T04:08:19Z

Create nifi-toolkit-api.






[jira] [Commented] (NIFI-4151) Slow response times when requesting Process Group Status

2017-07-07 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078546#comment-16078546
 ] 

Mark Payne commented on NIFI-4151:
--

I am re-opening this issue because I've come across some issues that appear to 
be related to this. Controller services are not properly being validated. They 
are instead using the validation context from a previous set of properties. So 
if a controller service is invalid and then the properties are changed, it 
still validates against the old set of properties. Also, if we click to a 
controller service from a processor, it appears that the page that comes up is 
not the correct page of controller services. It shows no services in the table. 
Will create a new PR for this issue.
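The stale-validation bug described above can be sketched in a few lines (hypothetical names, not NiFi's actual API): a component that caches its validation context must invalidate that cache whenever a property changes, otherwise validation keeps running against the old property set.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the caching bug class described above (hypothetical names):
// the cached validation snapshot must be dropped on every property change.
public class ValidationCacheSketch {
    private final Map<String, String> properties = new HashMap<>();
    private Map<String, String> cachedValidationContext; // null means "needs rebuild"

    public void setProperty(String name, String value) {
        properties.put(name, value);
        cachedValidationContext = null; // invalidate on every property change
    }

    public Map<String, String> getValidationContext() {
        if (cachedValidationContext == null) {
            // Rebuild a snapshot of the current properties.
            cachedValidationContext = new HashMap<>(properties);
        }
        return cachedValidationContext;
    }

    public static void main(String[] args) {
        ValidationCacheSketch svc = new ValidationCacheSketch();
        svc.setProperty("url", "old");
        svc.getValidationContext();    // caches {url=old}
        svc.setProperty("url", "new"); // must invalidate, not keep the old snapshot
        System.out.println(svc.getValidationContext().get("url")); // prints "new"
    }
}
```

Without the `cachedValidationContext = null` line in `setProperty`, the second lookup would still report the old value, which is exactly the symptom described.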



[jira] [Reopened] (NIFI-4151) Slow response times when requesting Process Group Status

2017-07-07 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne reopened NIFI-4151:
--



[jira] [Created] (NIFI-4166) Create toolkit module to generate and build Swagger API library for NiFi

2017-07-07 Thread Joe Skora (JIRA)
Joe Skora created NIFI-4166:
---

 Summary: Create toolkit module to generate and build Swagger API 
library for NiFi
 Key: NIFI-4166
 URL: https://issues.apache.org/jira/browse/NIFI-4166
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Tools and Build
Reporter: Joe Skora
Assignee: Joe Skora
Priority: Minor


Create a new toolkit module to generate the Swagger API library based on the 
current REST API annotations in the NiFi source by way of the Swagger Codegen 
Maven Plugin.  This should make it easier to access the REST API from Java code 
or Groovy scripts.

Swagger Codegen supports other languages, so this could be expanded for 
additional API client types.





[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread kevdoran
Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126219277
  
--- Diff: libminifi/src/RemoteProcessorGroupPort.cpp ---
@@ -20,6 +20,9 @@
 
 #include "../include/RemoteProcessorGroupPort.h"
 
+#include 
+#include 
+#include 
--- End diff --

Encapsulating the curl / HTTP communication in this implementation has some 
pros and cons. On the upside, it abstracts the details of the network 
communication from users of the interface, making it simple to call. A 
downside is that the logic in this class is harder to unit test, since the 
networking component is not mockable / stub-able without setting up an HTTP 
server. Also, if we ever want to change HTTP client libraries, it is more work 
because the client is tightly coupled here. I don't know if it's worth changing 
now, as it would be a significant refactoring, and it's certainly not just this 
case but something that applies to a lot of our code base. It would be nice if 
there were more decoupled modularity in these components.
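The decoupling suggested here can be sketched with a small transport seam (hypothetical interface, shown in Java rather than the project's C++ for brevity): production code wires in a real HTTP client, while tests stub the network with a lambda.

```java
// Hypothetical sketch of decoupling business logic from the HTTP client:
// the HttpTransport seam makes the port-lookup logic testable without a server.
public class SiteInfoFetcher {

    interface HttpTransport {
        String get(String url) throws Exception;
    }

    private final HttpTransport transport;

    public SiteInfoFetcher(HttpTransport transport) {
        this.transport = transport;
    }

    // Returns the advertised site-to-site port, or -1 when absent.
    public int fetchListeningPort(String baseUrl) throws Exception {
        String body = transport.get(baseUrl + "/nifi-api/controller/");
        // Naive string extraction for illustration only; real code parses JSON.
        String key = "\"remoteSiteListeningPort\":";
        int idx = body.indexOf(key);
        if (idx < 0) {
            return -1;
        }
        int start = idx + key.length();
        int end = start;
        while (end < body.length()
                && (body.charAt(end) == ' ' || Character.isDigit(body.charAt(end)))) {
            end++;
        }
        return Integer.parseInt(body.substring(start, end).trim());
    }

    public static void main(String[] args) throws Exception {
        // Stubbed transport: no network, no HTTP server needed in the test.
        SiteInfoFetcher fetcher = new SiteInfoFetcher(
                url -> "{\"controller\":{\"remoteSiteListeningPort\": 10001}}");
        System.out.println(fetcher.fetchListeningPort("http://host:8080")); // prints 10001
    }
}
```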




[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread kevdoran
Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126214904
  
--- Diff: libminifi/include/utils/HTTPUtils.h ---
@@ -88,6 +90,40 @@ struct HTTPRequestResponse {
 
 };
 
+static void parse_url(std::string &url, std::string &host, int &port, 
std::string &protocol) {
--- End diff --

IPv6 was just one example. Another case would be 
username:password@example.com (not that that is a typical NiFi deployment case, 
but for the sake of argument, since we can't anticipate how these tools will be 
deployed in every scenario). I am not saying we necessarily need full RFC URI/URL 
compliance, but I do think whatever is / is not supported in terms of remote 
s2s URLs should be documented somewhere that users / admins can find it, or 
logged if parsing fails, so that users can more easily discover at runtime why 
they cannot connect to their IPv6 URL :)
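The two cases raised here are exactly what a naive `host:port` split misses. A quick illustration using a real URI parser (Java's `java.net.URI`, used here for brevity rather than the project's C++):

```java
import java.net.URI;

// Demonstrates URL shapes a naive protocol+host+":"+port split mishandles:
// IPv6 literals (colons inside the host) and userinfo before the host.
public class UrlCasesSketch {
    public static void main(String[] args) throws Exception {
        URI v6 = new URI("http://[2001:db8::1]:8443/nifi-api/controller/");
        System.out.println(v6.getHost()); // [2001:db8::1] -- colons are inside brackets
        System.out.println(v6.getPort()); // 8443

        URI withUser = new URI("http://user:secret@example.com:8080/nifi");
        System.out.println(withUser.getUserInfo()); // user:secret
        System.out.println(withUser.getHost());     // example.com
    }
}
```

Splitting on the first `:` after the scheme would cut the IPv6 literal in half and would treat `secret@example.com` as part of the port in the second case.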




[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread benqiu2016
Github user benqiu2016 commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126211327
  
--- Diff: libminifi/src/RemoteProcessorGroupPort.cpp ---
@@ -150,6 +194,87 @@ void RemoteProcessorGroupPort::onTrigger(core::ProcessContext *context, core::Pr
   return;
 }
 
+void RemoteProcessorGroupPort::refreshRemoteSite2SiteInfo() {
+  if (this->host_.empty() || this->port_ == -1 || this->protocol_.empty())
+  return;
+
+  std::string fullUrl = this->protocol_ + this->host_ + ":" + std::to_string(this->port_) + "/nifi-api/controller/";
+
+  this->site2site_port_ = -1;
+  CURL *http_session = curl_easy_init();
+
+  curl_easy_setopt(http_session, CURLOPT_URL, fullUrl.c_str());
+
+  utils::HTTPRequestResponse content;
+  curl_easy_setopt(http_session, CURLOPT_WRITEFUNCTION,
+  &utils::HTTPRequestResponse::recieve_write);
+
+  curl_easy_setopt(http_session, CURLOPT_WRITEDATA,
+  static_cast<void*>(&content));
+
+  CURLcode res = curl_easy_perform(http_session);
+
+  if (res == CURLE_OK) {
+std::string response_body(content.data.begin(), content.data.end());
+int64_t http_code = 0;
+curl_easy_getinfo(http_session, CURLINFO_RESPONSE_CODE, &http_code);
+char *content_type;
+/* ask for the content-type */
+curl_easy_getinfo(http_session, CURLINFO_CONTENT_TYPE, &content_type);
+
+bool isSuccess = ((int32_t) (http_code / 100)) == 2 && res != CURLE_ABORTED_BY_CALLBACK;
+bool body_empty = IsNullOrEmpty(content.data);
+
+if (isSuccess && !body_empty) {
+  std::string controller = std::move(response_body);
+  logger_->log_debug("controller config %s", controller.c_str());
+  Json::Value value;
+  Json::Reader reader;
+  bool parsingSuccessful = reader.parse(controller, value);
+  if (parsingSuccessful && !value.empty()) {
+Json::Value controllerValue = value["controller"];
+if (!controllerValue.empty()) {
+  Json::Value port = controllerValue["remoteSiteListeningPort"];
+  if (!port.empty())
+this->site2site_port_ = port.asInt();
+  Json::Value secure = controllerValue["siteToSiteSecure"];
+  if (!secure.empty())
+this->site2site_secure_ = secure.asBool();
+}
+logger_->log_info("process group remote site2site port %d, is secure %d", site2site_port_, site2site_secure_);
+  }
+} else {
+  logger_->log_error("Cannot output body to content for ProcessGroup::refreshRemoteSite2SiteInfo");
+}
+  } else {
+logger_->log_error(
+"ProcessGroup::refreshRemoteSite2SiteInfo -- curl_easy_perform() failed %s\n",
+curl_easy_strerror(res));
+  }
+  curl_easy_cleanup(http_session);
+}
+
+void RemoteProcessorGroupPort::refreshPeerList() {
+  refreshRemoteSite2SiteInfo();
+  if (site2site_port_ == -1)
+return;
+
+  this->site2site_peer_status_list_.clear();
--- End diff --

refreshRemoteSite2SiteInfo can fail. If it fails, it sets this->site2site_port_ = -1;
in that case, we do not need to clear the list.
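A minimal sketch of that guard (hypothetical names, not the actual minifi classes): a failed refresh leaves the port at -1, and the early return preserves the previously discovered peer list instead of clearing it.

```cpp
#include <string>
#include <vector>

// Illustrative stand-ins for the real RemoteProcessorGroupPort members.
struct PeerStatus { std::string host; int port; };

struct PortRefresher {
  int site2site_port = -1;
  std::vector<PeerStatus> peers;

  // Simulated refresh: on failure the port is reset to -1.
  void refreshInfo(bool ok) { site2site_port = ok ? 10001 : -1; }

  void refreshPeerList(bool refresh_ok) {
    refreshInfo(refresh_ok);
    if (site2site_port == -1)
      return;  // refresh failed: keep the stale (but still usable) peer list
    peers.clear();
    peers.push_back({"node-a", site2site_port});  // repopulate from response
  }
};
```

With this ordering, a transient HTTP failure degrades to "use last known peers" rather than "forget all peers".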


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread benqiu2016
Github user benqiu2016 commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126210180
  
--- Diff: libminifi/src/RemoteProcessorGroupPort.cpp ---
@@ -150,6 +194,87 @@ void RemoteProcessorGroupPort::onTrigger(core::ProcessContext *context, core::Pr
   return;
 }
 
+void RemoteProcessorGroupPort::refreshRemoteSite2SiteInfo() {
+  if (this->host_.empty() || this->port_ == -1 || this->protocol_.empty())
+  return;
+
+  std::string fullUrl = this->protocol_ + this->host_ + ":" + std::to_string(this->port_) + "/nifi-api/controller/";
--- End diff --

The /nifi-api/controller endpoint returns the site2site port, which we can use
to do the site2site negotiation with the master and find the peer info. Example response:
{
  "revision": {
"clientId": "d40fb824-070b-4547-9b1c-f50f5ba0a677"
  },
  "controller": {
"id": "fe4a3a42-53b6-4af1-a80d-6fdfe60de97f",
"name": "NiFi Flow",
"comments": "",
"runningCount": 3,
"stoppedCount": 2,
"invalidCount": 0,
"disabledCount": 0,
"inputPortCount": 1,
"outputPortCount": 1,
"remoteSiteListeningPort": 10001,
"siteToSiteSecure": false,
"instanceId": "9d841c51-ab00-422e-811c-53c6dc2f8e59",
"inputPorts": [
  {
"id": "471deef6-2a6e-4a7d-912a-81cc17e3a204",
"name": " From Node A",
"comments": "",
"state": "RUNNING"
  }
],
"outputPorts": [
  {
"id": "75f88005-0a87-4fef-8320-6219cdbcf18b",
"name": "To A",
"comments": "",
"state": "STOPPED"
  }
]
  }
}




[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread benqiu2016
Github user benqiu2016 commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126209183
  
--- Diff: libminifi/include/Site2SiteClientProtocol.h ---
@@ -387,6 +387,15 @@ class DataPacket {
 
 };
 
+/**
+  * Site2Site Peer
+  */
+ typedef struct Site2SitePeerStatus {
+   std::string host_;
+   int port_;
+   bool isSecure_;
+ } Site2SitePeerStatus;
--- End diff --

Yes, there is a flowFileCount. I want to avoid the complexity of looking at
flowFileCount and instead do round robin. If we used flowFileCount, we would need
to refresh the peer status periodically, and even then we would only catch a
snapshot of the flowFileCount at each refresh interval, so it would not be accurate.
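The trade-off being discussed can be sketched as follows; `Peer`, `nextRoundRobin`, and `leastLoaded` are illustrative names, not part of the minifi codebase. Round-robin needs only a counter, while flowFileCount-based selection depends on a snapshot that may be stale by the next send.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical peer record carrying the count from /site-to-site/peers.
struct Peer { std::string host; int flow_file_count; };

// Plain round-robin: no per-peer metrics or periodic refresh required.
std::size_t nextRoundRobin(std::size_t last, std::size_t peer_count) {
  return (last + 1) % peer_count;
}

// Snapshot-based: picks the least-loaded peer as of the last refresh, but
// the counts may no longer reflect reality when the next flow file is sent.
std::size_t leastLoaded(const std::vector<Peer>& peers) {
  std::size_t best = 0;
  for (std::size_t i = 1; i < peers.size(); ++i)
    if (peers[i].flow_file_count < peers[best].flow_file_count) best = i;
  return best;
}
```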




[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread benqiu2016
Github user benqiu2016 commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126208386
  
--- Diff: libminifi/include/utils/HTTPUtils.h ---
@@ -88,6 +90,40 @@ struct HTTPRequestResponse {
 
 };
 
+static void parse_url(std::string &url, std::string &host, int &port, 
std::string &protocol) {
--- End diff --

For now it is designed to support IPv4 only. If we need an RFC-compliant URL
parser, we can certainly use an open-source one.
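A minimal sketch of the IPv4/hostname-style parsing under discussion (a hypothetical helper, not the actual `utils::parse_url`): it splits on the last `:` after the scheme, which is exactly why a literal IPv6 URL such as `http://[FEDC::3210]:80/nifi` would be mis-parsed.

```cpp
#include <cstddef>
#include <string>

// Hypothetical simplified parser; handles scheme://host[:port][/path] only.
bool parseUrlSimple(const std::string& url, std::string& protocol,
                    std::string& host, int& port) {
  std::size_t scheme_end = url.find("://");
  if (scheme_end == std::string::npos) return false;
  protocol = url.substr(0, scheme_end + 3);        // e.g. "http://"
  std::string rest = url.substr(scheme_end + 3);
  std::size_t slash = rest.find('/');
  if (slash != std::string::npos) rest = rest.substr(0, slash);
  std::size_t colon = rest.rfind(':');             // breaks for [IPv6]:port
  if (colon == std::string::npos) {
    host = rest;
    port = -1;  // no explicit port; caller may default by scheme
  } else {
    host = rest.substr(0, colon);
    port = std::stoi(rest.substr(colon + 1));
  }
  return true;
}
```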




[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread benqiu2016
Github user benqiu2016 commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126207949
  
--- Diff: libminifi/include/RemoteProcessorGroupPort.h ---
@@ -84,6 +93,17 @@ class RemoteProcessorGroupPort : public core::Processor {
   void setTransmitting(bool val) {
 transmitting_ = val;
   }
+  // setURL
+  void setURL(std::string val) {
+url_ = val;
+utils::parse_url(url_, host_, port_, protocol_);
--- End diff --

If val is empty, the url will be empty and the other fields will be empty as well.




[jira] [Commented] (NIFI-3677) ExtracMediaMetadata should close TikaInputStream or use Tika TemporaryResources

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078422#comment-16078422
 ] 

ASF GitHub Bot commented on NIFI-3677:
--

GitHub user jskora opened a pull request:

https://github.com/apache/nifi/pull/1993

NIFI-3677 - ExtractMediaMetadata should close TikaInputStream

* Added finally block to close TikaInputStream.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jskora/nifi NIFI-3677

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1993.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1993


commit 37cf1252fb9c150b5c7bc7470a1fa1712dd7ab03
Author: Joe Skora 
Date:   2017-07-07T17:38:28Z

NIFI-3677 - ExtractMediaMetadata should close TikaInputStream
* Added finally block to close TikaInputStream.




> ExtracMediaMetadata should close TikaInputStream or use Tika 
> TemporaryResources
> ---
>
> Key: NIFI-3677
> URL: https://issues.apache.org/jira/browse/NIFI-3677
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.8.0, 1.2.0
>Reporter: Joe Skora
>Assignee: Joe Skora
>Priority: Minor
>
> Per Tika docs\[1], {{ExtractMediaMetadata}} should cleanup the 
> {{TikaInputStream}} it creates or use a Tika's {{TemporaryResources}} class 
> for automatic cleanup.
> \[1]https://tika.apache.org/1.8/api/org/apache/tika/io/TikaInputStream.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1993: NIFI-3677 - ExtractMediaMetadata should close TikaI...

2017-07-07 Thread jskora
GitHub user jskora opened a pull request:

https://github.com/apache/nifi/pull/1993

NIFI-3677 - ExtractMediaMetadata should close TikaInputStream

* Added finally block to close TikaInputStream.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jskora/nifi NIFI-3677

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1993.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1993


commit 37cf1252fb9c150b5c7bc7470a1fa1712dd7ab03
Author: Joe Skora 
Date:   2017-07-07T17:38:28Z

NIFI-3677 - ExtractMediaMetadata should close TikaInputStream
* Added finally block to close TikaInputStream.






[jira] [Created] (NIFI-4165) Update NiFi FlowFile Repository Toolkit to provide ability to remove FlowFiles whose content is missing

2017-07-07 Thread Mark Payne (JIRA)
Mark Payne created NIFI-4165:


 Summary: Update NiFi FlowFile Repository Toolkit to provide 
ability to remove FlowFiles whose content is missing
 Key: NIFI-4165
 URL: https://issues.apache.org/jira/browse/NIFI-4165
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Tools and Build
Reporter: Mark Payne
Assignee: Mark Payne


The FlowFile Repo toolkit has the ability to address issues with flowfile repo 
corruption due to sudden power loss. Another known problem is that if content 
goes missing from the content repository for whatever reason (say some process 
deletes some of the files), the FlowFile Repo can contain many FlowFiles whose 
content is missing. This causes a lot of problems, with stack traces being 
dumped to logs and the flow taking a very long time to get back to normal. We 
should update the toolkit to provide a mechanism for pointing to a FlowFile Repo 
and Content Repo, then writing out a new FlowFile Repo that removes any FlowFile 
whose content is missing.
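The repair pass described above could be sketched roughly as follows; the record type, field names, and helper are hypothetical illustrations, not the toolkit's actual API.

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical flow-file record: an id plus a pointer to its content claim.
struct FlowFileRecord { std::string id; std::string content_claim; };

// Keep only the records whose content claim still exists in the content
// repository; everything else is dropped from the rewritten FlowFile Repo.
std::vector<FlowFileRecord> dropMissingContent(
    const std::vector<FlowFileRecord>& records,
    const std::unordered_set<std::string>& existing_claims) {
  std::vector<FlowFileRecord> kept;
  for (const auto& r : records)
    if (existing_claims.count(r.content_claim))
      kept.push_back(r);  // content still present: carry record forward
  return kept;
}
```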





[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread kevdoran
Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126160245
  
--- Diff: libminifi/include/RemoteProcessorGroupPort.h ---
@@ -84,6 +93,17 @@ class RemoteProcessorGroupPort : public core::Processor {
   void setTransmitting(bool val) {
 transmitting_ = val;
   }
+  // setURL
+  void setURL(std::string val) {
+url_ = val;
+utils::parse_url(url_, host_, port_, protocol_);
--- End diff --

do we need a null check here?




[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread kevdoran
Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126172307
  
--- Diff: libminifi/include/Site2SiteClientProtocol.h ---
@@ -387,6 +387,15 @@ class DataPacket {
 
 };
 
+/**
+  * Site2Site Peer
+  */
+ typedef struct Site2SitePeerStatus {
+   std::string host_;
+   int port_;
+   bool isSecure_;
+ } Site2SitePeerStatus;
--- End diff --

What about flowFileCount, the number of flow files the peer holds? That is 
discoverable using the `/site-to-site/peers` endpoint and could be useful if 
the client wants to use it for load balancing decisions.




[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread kevdoran
Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126173718
  
--- Diff: libminifi/src/RemoteProcessorGroupPort.cpp ---
@@ -150,6 +194,87 @@ void RemoteProcessorGroupPort::onTrigger(core::ProcessContext *context, core::Pr
   return;
 }
 
+void RemoteProcessorGroupPort::refreshRemoteSite2SiteInfo() {
+  if (this->host_.empty() || this->port_ == -1 || this->protocol_.empty())
+  return;
+
+  std::string fullUrl = this->protocol_ + this->host_ + ":" + std::to_string(this->port_) + "/nifi-api/controller/";
+
+  this->site2site_port_ = -1;
+  CURL *http_session = curl_easy_init();
+
+  curl_easy_setopt(http_session, CURLOPT_URL, fullUrl.c_str());
+
+  utils::HTTPRequestResponse content;
+  curl_easy_setopt(http_session, CURLOPT_WRITEFUNCTION,
+  &utils::HTTPRequestResponse::recieve_write);
+
+  curl_easy_setopt(http_session, CURLOPT_WRITEDATA,
+  static_cast<void *>(&content));
+
+  CURLcode res = curl_easy_perform(http_session);
+
+  if (res == CURLE_OK) {
+std::string response_body(content.data.begin(), content.data.end());
+int64_t http_code = 0;
+curl_easy_getinfo(http_session, CURLINFO_RESPONSE_CODE, &http_code);
+char *content_type;
+/* ask for the content-type */
+curl_easy_getinfo(http_session, CURLINFO_CONTENT_TYPE, &content_type);
+
+bool isSuccess = ((int32_t) (http_code / 100)) == 2
+&& res != CURLE_ABORTED_BY_CALLBACK;
+bool body_empty = IsNullOrEmpty(content.data);
+
+if (isSuccess && !body_empty) {
+  std::string controller = std::move(response_body);
+  logger_->log_debug("controller config %s", controller.c_str());
+  Json::Value value;
+  Json::Reader reader;
+  bool parsingSuccessful = reader.parse(controller, value);
+  if (parsingSuccessful && !value.empty()) {
+Json::Value controllerValue = value["controller"];
+if (!controllerValue.empty()) {
+  Json::Value port = controllerValue["remoteSiteListeningPort"];
+  if (!port.empty())
+this->site2site_port_ = port.asInt();
+  Json::Value secure = controllerValue["siteToSiteSecure"];
+  if (!secure.empty())
+this->site2site_secure_ = secure.asBool();
+}
+logger_->log_info("process group remote site2site port %d, is secure %d", site2site_port_, site2site_secure_);
+  }
+} else {
+  logger_->log_error("Cannot output body to content for ProcessGroup::refreshRemoteSite2SiteInfo");
+}
+  } else {
+logger_->log_error(
+"ProcessGroup::refreshRemoteSite2SiteInfo -- curl_easy_perform() failed %s\n",
+curl_easy_strerror(res));
+  }
+  curl_easy_cleanup(http_session);
+}
+
+void RemoteProcessorGroupPort::refreshPeerList() {
+  refreshRemoteSite2SiteInfo();
+  if (site2site_port_ == -1)
+return;
+
+  this->site2site_peer_status_list_.clear();
--- End diff --

Can the logic below this fail under certain conditions? If so, is it worth 
waiting to clear the current list until we are sure we have an updated list?




[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread kevdoran
Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126168614
  
--- Diff: libminifi/include/utils/HTTPUtils.h ---
@@ -88,6 +90,40 @@ struct HTTPRequestResponse {
 
 };
 
+static void parse_url(std::string &url, std::string &host, int &port, 
std::string &protocol) {
--- End diff --

This URL parsing method handles only a subset of valid URLs. Are we ok with that? For 
example, it will fail if passed a literal IPv6 URL as defined by [RFC 
3986](https://tools.ietf.org/html/rfc3986) or [RFC 
2732](https://tools.ietf.org/html/rfc2732), e.g.:

 `http://[FEDC:BA98:7654:3210:FEDC:BA98:7654:3210]:80/nifi`

If we are ok with this parser being designed to work with URLs of a certain 
format, we should document those input assumptions.

If we would like this part of the code to be a bit more robust for cases we 
haven't thought of (both present and future), we could consider adding a 
dependency to a RFC-compliant URI parser, or, if we want to avoid adding a 
dependency, just expanding the logic to cover more valid URLs.

Lastly, should the url input arg be const and checked for null?




[GitHub] nifi-minifi-cpp pull request #114: site2site port negotiation

2017-07-07 Thread kevdoran
Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/114#discussion_r126181684
  
--- Diff: libminifi/src/RemoteProcessorGroupPort.cpp ---
@@ -150,6 +194,87 @@ void RemoteProcessorGroupPort::onTrigger(core::ProcessContext *context, core::Pr
   return;
 }
 
+void RemoteProcessorGroupPort::refreshRemoteSite2SiteInfo() {
+  if (this->host_.empty() || this->port_ == -1 || this->protocol_.empty())
+  return;
+
+  std::string fullUrl = this->protocol_ + this->host_ + ":" + std::to_string(this->port_) + "/nifi-api/controller/";
--- End diff --

If the port is not available, this should default to 80 for http and 443 for 
https.

Also, I'm new to the NiFi REST API... what does /nifi-api/controller 
return? The site2site implementations I've seen pull from /site-to-site/peers for this 
info. Curious about the difference (will try it out later today).
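The defaulting rule suggested here could look like the sketch below; `defaultPortFor` is a hypothetical helper, with -1 standing for "no explicit port was parsed".

```cpp
#include <string>

// Fall back to the scheme's well-known port when none was given explicitly.
int defaultPortFor(const std::string& protocol, int parsed_port) {
  if (parsed_port != -1) return parsed_port;  // explicit port always wins
  if (protocol == "https://") return 443;
  if (protocol == "http://") return 80;
  return -1;                                  // unknown scheme: unresolved
}
```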




[jira] [Commented] (NIFI-4155) Expand EnforceOrder capability to cluster

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078197#comment-16078197
 ] 

ASF GitHub Bot commented on NIFI-4155:
--

Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/1984
  
@ijokarumawak ah I remember seeing that come up but didn't realize it was 
finished. That's awesome!

Definitely agree with your second statement but I'd add that we should warn 
the users of the danger in the description of the cluster state option here and 
potentially add it to the processor capability description too.


> Expand EnforceOrder capability to cluster
> -
>
> Key: NIFI-4155
> URL: https://issues.apache.org/jira/browse/NIFI-4155
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> Currently, EnforceOrder is able to enforce FlowFile ordering within a NiFi 
> node, and it uses local managed state to track progress.
> If it is configurable which state management to use from Local and Cluster, 
> EnforceOrder would be also able to enforce ordering across cluster with 
> cluster scope state management. It would be useful for some use-cases.





[GitHub] nifi issue #1984: NIFI-4155: Expand EnforceOrder capability to cluster

2017-07-07 Thread JPercivall
Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/1984
  
@ijokarumawak ah I remember seeing that come up but didn't realize it was 
finished. That's awesome!

Definitely agree with your second statement but I'd add that we should warn 
the users of the danger in the description of the cluster state option here and 
potentially add it to the processor capability description too.




[jira] [Created] (NIFI-4164) Realistic Time Series Processor Simulator

2017-07-07 Thread Chris Herrera (JIRA)
Chris Herrera created NIFI-4164:
---

 Summary: Realistic Time Series Processor Simulator
 Key: NIFI-4164
 URL: https://issues.apache.org/jira/browse/NIFI-4164
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Reporter: Chris Herrera
Assignee: Chris Herrera
Priority: Minor


In order to validate several flows that deal with sensor data, it would be good 
to have a built in time series simulator processor that generates data and can 
send it out via a flow file.





[jira] [Commented] (NIFI-4135) RangerNiFiAuthorizer should support storing audit info to HDFS

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078130#comment-16078130
 ] 

ASF GitHub Bot commented on NIFI-4135:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1956


> RangerNiFiAuthorizer should support storing audit info to HDFS
> --
>
> Key: NIFI-4135
> URL: https://issues.apache.org/jira/browse/NIFI-4135
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
> Fix For: 1.4.0
>
>
> When using Ranger to support authorization an option to log auditing 
> information to HDFS can be supported.  The RangerNiFiAuthorizer should be 
> prepared to communicate with a hadoop cluster in order to support this 
> feature. In its current implementation the authorizer does not have the 
> hadoop-client jars available as a dependency nor does it support the ability 
> to refer to the required *.site.xml files in order to communicate without 
> using the default configuration.  Both of these changes are needed in order 
> to send audit info to HDFS.





[jira] [Updated] (NIFI-4135) RangerNiFiAuthorizer should support storing audit info to HDFS

2017-07-07 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-4135:
--
Fix Version/s: 1.4.0

> RangerNiFiAuthorizer should support storing audit info to HDFS
> --
>
> Key: NIFI-4135
> URL: https://issues.apache.org/jira/browse/NIFI-4135
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
> Fix For: 1.4.0
>
>
> When using Ranger to support authorization an option to log auditing 
> information to HDFS can be supported.  The RangerNiFiAuthorizer should be 
> prepared to communicate with a hadoop cluster in order to support this 
> feature. In its current implementation the authorizer does not have the 
> hadoop-client jars available as a dependency nor does it support the ability 
> to refer to the required *.site.xml files in order to communicate without 
> using the default configuration.  Both of these changes are needed in order 
> to send audit info to HDFS.





[jira] [Commented] (NIFI-4135) RangerNiFiAuthorizer should support storing audit info to HDFS

2017-07-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078128#comment-16078128
 ] 

ASF subversion and git services commented on NIFI-4135:
---

Commit 6df97bbc88beedff8bed516ffef6e083d3172ad8 in nifi's branch 
refs/heads/master from [~YolandaMDavis]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=6df97bb ]

NIFI-4135 - added hadoop-client and enhanced Authorizers entity to support 
classpath for resources entry
NIFI-4135 - classpath under class

This closes #1956.

Signed-off-by: Bryan Bende 


> RangerNiFiAuthorizer should support storing audit info to HDFS
> --
>
> Key: NIFI-4135
> URL: https://issues.apache.org/jira/browse/NIFI-4135
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
> Fix For: 1.4.0
>
>
> When using Ranger to support authorization an option to log auditing 
> information to HDFS can be supported.  The RangerNiFiAuthorizer should be 
> prepared to communicate with a hadoop cluster in order to support this 
> feature. In its current implementation the authorizer does not have the 
> hadoop-client jars available as a dependency nor does it support the ability 
> to refer to the required *.site.xml files in order to communicate without 
> using the default configuration.  Both of these changes are needed in order 
> to send audit info to HDFS.





[GitHub] nifi pull request #1956: NIFI-4135 - added hadoop-client and enhanced Author...

2017-07-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1956




[jira] [Commented] (NIFI-4135) RangerNiFiAuthorizer should support storing audit info to HDFS

2017-07-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078129#comment-16078129
 ] 

ASF subversion and git services commented on NIFI-4135:
---

Commit 6df97bbc88beedff8bed516ffef6e083d3172ad8 in nifi's branch 
refs/heads/master from [~YolandaMDavis]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=6df97bb ]

NIFI-4135 - added hadoop-client and enhanced Authorizers entity to support 
classpath for resources entry
NIFI-4135 - classpath under class

This closes #1956.

Signed-off-by: Bryan Bende 


> RangerNiFiAuthorizer should support storing audit info to HDFS
> --
>
> Key: NIFI-4135
> URL: https://issues.apache.org/jira/browse/NIFI-4135
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
> Fix For: 1.4.0
>
>
> When using Ranger to support authorization an option to log auditing 
> information to HDFS can be supported.  The RangerNiFiAuthorizer should be 
> prepared to communicate with a hadoop cluster in order to support this 
> feature. In its current implementation the authorizer does not have the 
> hadoop-client jars available as a dependency nor does it support the ability 
> to refer to the required *.site.xml files in order to communicate without 
> using the default configuration.  Both of these changes are needed in order 
> to send audit info to HDFS.





[jira] [Updated] (NIFI-4135) RangerNiFiAuthorizer should support storing audit info to HDFS

2017-07-07 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-4135:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> RangerNiFiAuthorizer should support storing audit info to HDFS
> --
>
> Key: NIFI-4135
> URL: https://issues.apache.org/jira/browse/NIFI-4135
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
> Fix For: 1.4.0
>
>
> When using Ranger to support authorization an option to log auditing 
> information to HDFS can be supported.  The RangerNiFiAuthorizer should be 
> prepared to communicate with a hadoop cluster in order to support this 
> feature. In its current implementation the authorizer does not have the 
> hadoop-client jars available as a dependency nor does it support the ability 
> to refer to the required *.site.xml files in order to communicate without 
> using the default configuration.  Both of these changes are needed in order 
> to send audit info to HDFS.





[jira] [Commented] (NIFI-4135) RangerNiFiAuthorizer should support storing audit info to HDFS

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078125#comment-16078125
 ] 

ASF GitHub Bot commented on NIFI-4135:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1956
  
+1 looks good, thanks for making the update, I'll merge to master.


> RangerNiFiAuthorizer should support storing audit info to HDFS
> --
>
> Key: NIFI-4135
> URL: https://issues.apache.org/jira/browse/NIFI-4135
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
>
> When using Ranger to support authorization, an option to log auditing 
> information to HDFS can be supported. The RangerNiFiAuthorizer should be 
> prepared to communicate with a Hadoop cluster in order to support this 
> feature. In its current implementation the authorizer does not have the 
> hadoop-client jars available as a dependency, nor does it support the ability 
> to refer to the required *-site.xml files in order to communicate without 
> using the default configuration. Both of these changes are needed in order 
> to send audit info to HDFS.





[GitHub] nifi issue #1956: NIFI-4135 - added hadoop-client and enhanced Authorizers e...

2017-07-07 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1956
  
+1 looks good, thanks for making the update, I'll merge to master.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (NIFI-4163) ContentNotFoundException causes processor to administratively yield

2017-07-07 Thread Mark Payne (JIRA)
Mark Payne created NIFI-4163:


 Summary: ContentNotFoundException causes processor to 
administratively yield
 Key: NIFI-4163
 URL: https://issues.apache.org/jira/browse/NIFI-4163
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne


If a Processor attempts to read a FlowFile whose content is missing, a 
ContentNotFoundException is thrown. If the processor does not catch this, the 
framework will administratively yield the processor because it is an unexpected 
exception. However, the framework should handle this and not yield the 
processor.

Additionally, when the framework does yield administratively for this type of 
case, it uses a yield duration of "1 sec" even if nifi.properties has the 
administrative yield duration set to 30 secs, so it is not properly honoring 
the property.





[jira] [Commented] (NIFI-4127) Create a CompositeUserGroupProvider

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078097#comment-16078097
 ] 

ASF GitHub Bot commented on NIFI-4127:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/1978
  
@pvillard31 A second commit has been pushed updating the documentation and 
providing an example of the composite configurable user group provider. Thanks 
again for the review!


> Create a CompositeUserGroupProvider
> ---
>
> Key: NIFI-4127
> URL: https://issues.apache.org/jira/browse/NIFI-4127
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>
> Create a CompositeUserGroupProvider to support loading users/groups from 
> multiple sources. This composite implementation should support
> {noformat}
> 0-1 ConfigurableUserGroupProvider
> 0-n UserGroupProviders
> {noformat}
> Only a single ConfigurableUserGroupProvider can be supplied to keep these 
> sources/implementation details hidden from the end users. The 
> CompositeUserGroupProvider must be configured with at least 1 underlying 
> provider.
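The composition rule above (an ordered list of underlying providers, queried until one matches) can be sketched in plain Java. The UserGroupProvider interface and lookup semantics below are illustrative stand-ins, not NiFi's actual API:

```java
import java.util.List;
import java.util.Optional;

public class CompositeLookupDemo {
    // Hypothetical stand-in for an underlying user/group source.
    interface UserGroupProvider {
        Optional<String> getGroup(String user);
    }

    // Query each underlying provider in order; first match wins.
    static Optional<String> lookup(List<UserGroupProvider> providers, String user) {
        for (UserGroupProvider p : providers) {
            Optional<String> group = p.getGroup(user);
            if (group.isPresent()) {
                return group;
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Two illustrative sources, e.g. an LDAP-backed and a file-backed provider.
        UserGroupProvider ldap = u -> u.equals("alice") ? Optional.of("admins") : Optional.empty();
        UserGroupProvider file = u -> u.equals("bob") ? Optional.of("readers") : Optional.empty();
        System.out.println(lookup(List.of(ldap, file), "bob").get()); // readers
    }
}
```

Because the composite consults providers in a fixed order, the end user sees one merged view while the individual sources stay hidden behind the composite.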





[GitHub] nifi issue #1978: NIFI-4127: Composite User Group Providers

2017-07-07 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/1978
  
@pvillard31 A second commit has been pushed updating the documentation and 
providing an example of the composite configurable user group provider. Thanks 
again for the review!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078053#comment-16078053
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1961
  
@bbende I ran this against a large body of our test data, and it seemed to 
work just fine.


> Create EvaluateRecordPath processor
> ---
>
> Key: NIFI-4024
> URL: https://issues.apache.org/jira/browse/NIFI-4024
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Steve Champagne
>Priority: Minor
>
> With the new RecordPath DSL, it would be nice if there were a processor that 
> could pull fields into attributes of the flowfile based on a RecordPath. This 
> would be similar to the EvaluateJsonPath processor that currently exists, 
> except it could be used to pull fields from arbitrary record formats. My 
> current use case for it would be pulling fields out of Avro records while 
> skipping the steps of converting Avro to JSON, evaluating a JsonPath, and 
> then converting back to Avro.





[GitHub] nifi issue #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecord

2017-07-07 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1961
  
@bbende I ran this against a large body of our test data, and it seemed to 
work just fine.




[jira] [Commented] (NIFI-4124) Add a Record API-based PutMongo clone

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078052#comment-16078052
 ] 

ASF GitHub Bot commented on NIFI-4124:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1945
  
@markap14 @bbende @pvillard31 I made the changes. It should be ready for a 
merge.


> Add a Record API-based PutMongo clone
> -
>
> Key: NIFI-4124
> URL: https://issues.apache.org/jira/browse/NIFI-4124
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>  Labels: mongodb, putmongo, records
>
> A new processor that can use the Record API to put data into Mongo is needed.





[GitHub] nifi issue #1945: NIFI-4124 Added org.apache.nifi.mongo.PutMongoRecord.

2017-07-07 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1945
  
@markap14 @bbende @pvillard31 I made the changes. It should be ready for a 
merge.




[jira] [Resolved] (NIFI-4161) AWS Processors PROXY_HOST and PROXY_HOST_PORT properties do not support expression language

2017-07-07 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-4161.
--
   Resolution: Fixed
Fix Version/s: 1.4.0

> AWS Processors PROXY_HOST and PROXY_HOST_PORT properties do not support 
> expression language
> ---
>
> Key: NIFI-4161
> URL: https://issues.apache.org/jira/browse/NIFI-4161
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Kenneth Wydler
>Priority: Minor
> Fix For: 1.4.0
>
>
> AbstractAWSProcessor marks the PROXY_HOST and PROXY_HOST_PORT properties as 
> supporting NiFi's expression language. However, when accessing the property 
> values, evaluateAttributeExpressions() is not called.
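A minimal stand-alone illustration of the bug and the fix, using a hypothetical ${var} substitution step in place of NiFi's evaluateAttributeExpressions():

```java
import java.util.Map;

public class EvaluateBeforeReadDemo {
    // Hypothetical stand-in for expression evaluation: replace ${key}
    // placeholders with values from a variable map.
    static String evaluate(String raw, Map<String, String> variables) {
        String result = raw;
        for (Map.Entry<String, String> e : variables.entrySet()) {
            result = result.replace("${" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> vars = Map.of("proxy.host", "proxy.example.com");
        String raw = "${proxy.host}";
        // Bug: reading the raw value leaves the expression unevaluated.
        System.out.println(raw);                 // ${proxy.host}
        // Fix: evaluate expressions before using the value.
        System.out.println(evaluate(raw, vars)); // proxy.example.com
    }
}
```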





[jira] [Commented] (NIFI-4161) AWS Processors PROXY_HOST and PROXY_HOST_PORT properties do not support expression language

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077969#comment-16077969
 ] 

ASF GitHub Bot commented on NIFI-4161:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1990
  
+1, merging to master, thanks @kwydler 


> AWS Processors PROXY_HOST and PROXY_HOST_PORT properties do not support 
> expression language
> ---
>
> Key: NIFI-4161
> URL: https://issues.apache.org/jira/browse/NIFI-4161
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Kenneth Wydler
>Priority: Minor
>
> AbstractAWSProcessor marks the PROXY_HOST and PROXY_HOST_PORT properties as 
> supporting NiFi's expression language. However, when accessing the property 
> values, evaluateAttributeExpressions() is not called.





[jira] [Commented] (NIFI-4161) AWS Processors PROXY_HOST and PROXY_HOST_PORT properties do not support expression language

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077971#comment-16077971
 ] 

ASF GitHub Bot commented on NIFI-4161:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1990


> AWS Processors PROXY_HOST and PROXY_HOST_PORT properties do not support 
> expression language
> ---
>
> Key: NIFI-4161
> URL: https://issues.apache.org/jira/browse/NIFI-4161
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Kenneth Wydler
>Priority: Minor
>
> AbstractAWSProcessor marks the PROXY_HOST and PROXY_HOST_PORT properties as 
> supporting NiFi's expression language. However, when accessing the property 
> values, evaluateAttributeExpressions() is not called.





[jira] [Updated] (NIFI-4161) AWS Processors PROXY_HOST and PROXY_HOST_PORT properties do not support expression language

2017-07-07 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4161:
-
Component/s: Extensions

> AWS Processors PROXY_HOST and PROXY_HOST_PORT properties do not support 
> expression language
> ---
>
> Key: NIFI-4161
> URL: https://issues.apache.org/jira/browse/NIFI-4161
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Kenneth Wydler
>Priority: Minor
>
> AbstractAWSProcessor marks the PROXY_HOST and PROXY_HOST_PORT properties as 
> supporting NiFi's expression language. However, when accessing the property 
> values, evaluateAttributeExpressions() is not called.





[GitHub] nifi issue #1990: NIFI-4161: Adding expression evaluation to AWS PROXY_HOST ...

2017-07-07 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1990
  
+1, merging to master, thanks @kwydler 




[GitHub] nifi pull request #1990: NIFI-4161: Adding expression evaluation to AWS PROX...

2017-07-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1990




[jira] [Commented] (NIFI-4161) AWS Processors PROXY_HOST and PROXY_HOST_PORT properties do not support expression language

2017-07-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077968#comment-16077968
 ] 

ASF subversion and git services commented on NIFI-4161:
---

Commit e89512e7449e8cda4a4793368339b61b8c4283fe in nifi's branch 
refs/heads/master from [~kwydler]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=e89512e ]

NIFI-4161: Adding expression evaluation to AWS PROXY_HOST and PROXY_HOST_PORT 
property usage

Signed-off-by: Pierre Villard 

This closes #1990.


> AWS Processors PROXY_HOST and PROXY_HOST_PORT properties do not support 
> expression language
> ---
>
> Key: NIFI-4161
> URL: https://issues.apache.org/jira/browse/NIFI-4161
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Kenneth Wydler
>Priority: Minor
>
> AbstractAWSProcessor marks the PROXY_HOST and PROXY_HOST_PORT properties as 
> supporting NiFi's expression language. However, when accessing the property 
> values, evaluateAttributeExpressions() is not called.





[jira] [Updated] (NIFI-4160) SFTPTransfer should specify connection timeout

2017-07-07 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4160:
-
   Resolution: Fixed
Fix Version/s: 1.4.0
   Status: Resolved  (was: Patch Available)

> SFTPTransfer should specify connection timeout
> --
>
> Key: NIFI-4160
> URL: https://issues.apache.org/jira/browse/NIFI-4160
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
> Fix For: 1.4.0
>
>
> For FTP/SFTP, we use the com.jcraft.jsch library. When an XXXSFTP processor 
> makes use of the library, there are two occasions to set a connection timeout: 
> one is connecting a session (Socket), and the other is opening an SFTP channel.
> Currently we set a connection timeout, which can be specified via a processor 
> property, for connecting a session, but not for opening an SFTP channel.
> The library then uses its default of 10ms x 2000 retries to open the channel. 
> With slow SFTP servers it's possible that opening the channel takes longer 
> than this default allows, which can cause the following error:
> {code}
> java.io.IOException: Failed to obtain connection to remote host due to 
> com.jcraft.jsch.JSchException: channel is not opened
> {code}
> We should specify a connection timeout for opening the channel as well.
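In JSch, the channel-open timeout can be supplied explicitly via Channel.connect(int timeoutMillis). The underlying deadline idea, poll until opened or until a caller-supplied timeout elapses, can be sketched stand-alone; the Channel interface below is an illustrative stand-in, not JSch's:

```java
public class OpenWithTimeoutDemo {
    // Hypothetical stand-in for a channel whose open state we poll.
    interface Channel {
        boolean isOpened();
    }

    // Poll until the channel opens or the caller-supplied timeout elapses,
    // instead of relying on a fixed library default (e.g. 10ms x 2000 tries).
    static boolean awaitOpen(Channel ch, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (ch.isOpened()) {
                return true;
            }
            Thread.sleep(10); // poll interval
        }
        return ch.isOpened();
    }

    public static void main(String[] args) throws InterruptedException {
        long opensAt = System.currentTimeMillis() + 50; // channel opens after ~50ms
        Channel slow = () -> System.currentTimeMillis() >= opensAt;
        System.out.println(awaitOpen(slow, 1000)); // true: generous timeout
    }
}
```

Making the timeout a parameter lets a slow server be accommodated by configuration rather than by the library's hard-coded retry budget.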





[jira] [Commented] (NIFI-4160) SFTPTransfer should specify connection timeout

2017-07-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077930#comment-16077930
 ] 

ASF subversion and git services commented on NIFI-4160:
---

Commit e84f9a24164a3c939664a2259f3a0c07c20cfb97 in nifi's branch 
refs/heads/master from [~ijokarumawak]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=e84f9a2 ]

NIFI-4160: SFTPTransfer connection timeout for opening channel.

Signed-off-by: Pierre Villard 

This closes #1991.


> SFTPTransfer should specify connection timeout
> --
>
> Key: NIFI-4160
> URL: https://issues.apache.org/jira/browse/NIFI-4160
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> For FTP/SFTP, we use the com.jcraft.jsch library. When an XXXSFTP processor 
> makes use of the library, there are two occasions to set a connection timeout: 
> one is connecting a session (Socket), and the other is opening an SFTP channel.
> Currently we set a connection timeout, which can be specified via a processor 
> property, for connecting a session, but not for opening an SFTP channel.
> The library then uses its default of 10ms x 2000 retries to open the channel. 
> With slow SFTP servers it's possible that opening the channel takes longer 
> than this default allows, which can cause the following error:
> {code}
> java.io.IOException: Failed to obtain connection to remote host due to 
> com.jcraft.jsch.JSchException: channel is not opened
> {code}
> We should specify a connection timeout for opening the channel as well.





[jira] [Commented] (NIFI-4160) SFTPTransfer should specify connection timeout

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077933#comment-16077933
 ] 

ASF GitHub Bot commented on NIFI-4160:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1991


> SFTPTransfer should specify connection timeout
> --
>
> Key: NIFI-4160
> URL: https://issues.apache.org/jira/browse/NIFI-4160
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> For FTP/SFTP, we use the com.jcraft.jsch library. When an XXXSFTP processor 
> makes use of the library, there are two occasions to set a connection timeout: 
> one is connecting a session (Socket), and the other is opening an SFTP channel.
> Currently we set a connection timeout, which can be specified via a processor 
> property, for connecting a session, but not for opening an SFTP channel.
> The library then uses its default of 10ms x 2000 retries to open the channel. 
> With slow SFTP servers it's possible that opening the channel takes longer 
> than this default allows, which can cause the following error:
> {code}
> java.io.IOException: Failed to obtain connection to remote host due to 
> com.jcraft.jsch.JSchException: channel is not opened
> {code}
> We should specify a connection timeout for opening the channel as well.





[jira] [Commented] (NIFI-4160) SFTPTransfer should specify connection timeout

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077932#comment-16077932
 ] 

ASF GitHub Bot commented on NIFI-4160:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1991
  
+1, merging to master, thanks @ijokarumawak 


> SFTPTransfer should specify connection timeout
> --
>
> Key: NIFI-4160
> URL: https://issues.apache.org/jira/browse/NIFI-4160
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> For FTP/SFTP, we use the com.jcraft.jsch library. When an XXXSFTP processor 
> makes use of the library, there are two occasions to set a connection timeout: 
> one is connecting a session (Socket), and the other is opening an SFTP channel.
> Currently we set a connection timeout, which can be specified via a processor 
> property, for connecting a session, but not for opening an SFTP channel.
> The library then uses its default of 10ms x 2000 retries to open the channel. 
> With slow SFTP servers it's possible that opening the channel takes longer 
> than this default allows, which can cause the following error:
> {code}
> java.io.IOException: Failed to obtain connection to remote host due to 
> com.jcraft.jsch.JSchException: channel is not opened
> {code}
> We should specify a connection timeout for opening the channel as well.





[GitHub] nifi issue #1991: NIFI-4160: SFTPTransfer connection timeout for opening cha...

2017-07-07 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1991
  
+1, merging to master, thanks @ijokarumawak 




[GitHub] nifi pull request #1991: NIFI-4160: SFTPTransfer connection timeout for open...

2017-07-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1991




[jira] [Updated] (NIFI-4162) PutSQL batch update error message should include the cause

2017-07-07 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4162:
-
   Resolution: Fixed
Fix Version/s: 1.4.0
   Status: Resolved  (was: Patch Available)

> PutSQL batch update error message should include the cause
> --
>
> Key: NIFI-4162
> URL: https://issues.apache.org/jira/browse/NIFI-4162
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
> Fix For: 1.4.0
>
>
> When PutSQL executes SQL in batch mode and an exception is thrown, it logs the 
> following error message:
> {code}
> 2017-07-07 18:01:38,646 ERROR [Timer-Driven Process Thread-1] 
> o.apache.nifi.processors.standard.PutSQL 
> PutSQL[id=1c3d2a94-015d-1000-397a-250d18d9f4ad] Failed to update database due 
> to a failed batch update. There were a total of 1 FlowFiles that failed, 0 
> that succeeded, and 0 that were not execute and will be routed to retry;
> {code}
> It doesn't use the thrown exception, so it's difficult for the user to 
> understand what caused the error. The exception contains useful information. 
> If we logged the exception, the user would see the following log:
> {code}
> 2017-07-07 18:05:17,155 ERROR [Timer-Driven Process Thread-4] 
> o.apache.nifi.processors.standard.PutSQL 
> PutSQL[id=1c3d2a94-015d-1000-397a-250d18d9f4ad] Failed to update database due 
> to a failed batch update, java.sql.BatchUpdateException: Duplicate entry '1' 
> for key 'PRIMARY'. There were a total of 1 FlowFiles that failed, 0 that 
> succeeded, and 0 that were not execute and will be routed to retry; : 
> java.sql.BatchUpdateException: Duplicate entry '1' for key 'PRIMARY'
> {code}
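The gist of the fix (include the thrown exception in the logged message, not just the FlowFile counts) can be sketched stand-alone; the message format below only approximates PutSQL's actual wording:

```java
public class BatchErrorMessageDemo {
    // Build an error message that carries the cause, so operators can see
    // *why* the batch failed rather than only how many FlowFiles failed.
    static String describeFailure(int failed, int succeeded, int retried, Exception cause) {
        return "Failed to update database due to a failed batch update, " + cause
            + ". There were a total of " + failed + " FlowFiles that failed, "
            + succeeded + " that succeeded, and " + retried
            + " that were not executed and will be routed to retry";
    }

    public static void main(String[] args) {
        // Illustrative cause, mirroring the duplicate-key example in the ticket.
        Exception cause = new java.sql.BatchUpdateException(
            "Duplicate entry '1' for key 'PRIMARY'", new int[0]);
        System.out.println(describeFailure(1, 0, 0, cause));
    }
}
```

Exception.toString() yields the class name plus the message (e.g. "java.sql.BatchUpdateException: Duplicate entry '1' for key 'PRIMARY'"), which is exactly the detail missing from the original log line.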





[jira] [Commented] (NIFI-4162) PutSQL batch update error message should include the cause

2017-07-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077923#comment-16077923
 ] 

ASF subversion and git services commented on NIFI-4162:
---

Commit 50c364a793e8605f0b6f2e48ef5e0e08cfcf817d in nifi's branch 
refs/heads/master from [~ijokarumawak]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=50c364a ]

NIFI-4162: PutSQL batch update error message should include the cause

Signed-off-by: Pierre Villard 

This closes #1992.


> PutSQL batch update error message should include the cause
> --
>
> Key: NIFI-4162
> URL: https://issues.apache.org/jira/browse/NIFI-4162
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> When PutSQL executes SQL in batch mode and an exception is thrown, it logs the 
> following error message:
> {code}
> 2017-07-07 18:01:38,646 ERROR [Timer-Driven Process Thread-1] 
> o.apache.nifi.processors.standard.PutSQL 
> PutSQL[id=1c3d2a94-015d-1000-397a-250d18d9f4ad] Failed to update database due 
> to a failed batch update. There were a total of 1 FlowFiles that failed, 0 
> that succeeded, and 0 that were not execute and will be routed to retry;
> {code}
> It doesn't use the thrown exception, so it's difficult for the user to 
> understand what caused the error. The exception contains useful information. 
> If we logged the exception, the user would see the following log:
> {code}
> 2017-07-07 18:05:17,155 ERROR [Timer-Driven Process Thread-4] 
> o.apache.nifi.processors.standard.PutSQL 
> PutSQL[id=1c3d2a94-015d-1000-397a-250d18d9f4ad] Failed to update database due 
> to a failed batch update, java.sql.BatchUpdateException: Duplicate entry '1' 
> for key 'PRIMARY'. There were a total of 1 FlowFiles that failed, 0 that 
> succeeded, and 0 that were not execute and will be routed to retry; : 
> java.sql.BatchUpdateException: Duplicate entry '1' for key 'PRIMARY'
> {code}





[GitHub] nifi pull request #1992: NIFI-4162: PutSQL batch update error message should...

2017-07-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1992




[jira] [Commented] (NIFI-4162) PutSQL batch update error message should include the cause

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077924#comment-16077924
 ] 

ASF GitHub Bot commented on NIFI-4162:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1992
  
+1, merging to master, thanks @ijokarumawak 


> PutSQL batch update error message should include the cause
> --
>
> Key: NIFI-4162
> URL: https://issues.apache.org/jira/browse/NIFI-4162
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> When PutSQL executes SQL in batch mode and an exception is thrown, it logs the 
> following error message:
> {code}
> 2017-07-07 18:01:38,646 ERROR [Timer-Driven Process Thread-1] 
> o.apache.nifi.processors.standard.PutSQL 
> PutSQL[id=1c3d2a94-015d-1000-397a-250d18d9f4ad] Failed to update database due 
> to a failed batch update. There were a total of 1 FlowFiles that failed, 0 
> that succeeded, and 0 that were not execute and will be routed to retry;
> {code}
> It doesn't use the thrown exception, so it's difficult for the user to 
> understand what caused the error. The exception contains useful information. 
> If we logged the exception, the user would see the following log:
> {code}
> 2017-07-07 18:05:17,155 ERROR [Timer-Driven Process Thread-4] 
> o.apache.nifi.processors.standard.PutSQL 
> PutSQL[id=1c3d2a94-015d-1000-397a-250d18d9f4ad] Failed to update database due 
> to a failed batch update, java.sql.BatchUpdateException: Duplicate entry '1' 
> for key 'PRIMARY'. There were a total of 1 FlowFiles that failed, 0 that 
> succeeded, and 0 that were not execute and will be routed to retry; : 
> java.sql.BatchUpdateException: Duplicate entry '1' for key 'PRIMARY'
> {code}





[jira] [Commented] (NIFI-4162) PutSQL batch update error message should include the cause

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077926#comment-16077926
 ] 

ASF GitHub Bot commented on NIFI-4162:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1992


> PutSQL batch update error message should include the cause
> --
>
> Key: NIFI-4162
> URL: https://issues.apache.org/jira/browse/NIFI-4162
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> When PutSQL executes SQL in batch mode and an exception is thrown, it logs the 
> following error message:
> {code}
> 2017-07-07 18:01:38,646 ERROR [Timer-Driven Process Thread-1] 
> o.apache.nifi.processors.standard.PutSQL 
> PutSQL[id=1c3d2a94-015d-1000-397a-250d18d9f4ad] Failed to update database due 
> to a failed batch update. There were a total of 1 FlowFiles that failed, 0 
> that succeeded, and 0 that were not execute and will be routed to retry;
> {code}
> It doesn't use the thrown exception, so it's difficult for the user to 
> understand what caused the error. The exception contains useful information. 
> If we logged the exception, the user would see the following log:
> {code}
> 2017-07-07 18:05:17,155 ERROR [Timer-Driven Process Thread-4] 
> o.apache.nifi.processors.standard.PutSQL 
> PutSQL[id=1c3d2a94-015d-1000-397a-250d18d9f4ad] Failed to update database due 
> to a failed batch update, java.sql.BatchUpdateException: Duplicate entry '1' 
> for key 'PRIMARY'. There were a total of 1 FlowFiles that failed, 0 that 
> succeeded, and 0 that were not execute and will be routed to retry; : 
> java.sql.BatchUpdateException: Duplicate entry '1' for key 'PRIMARY'
> {code}





[GitHub] nifi issue #1992: NIFI-4162: PutSQL batch update error message should includ...

2017-07-07 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1992
  
+1, merging to master, thanks @ijokarumawak 




[jira] [Updated] (NIFI-4162) PutSQL batch update error message should include the cause

2017-07-07 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-4162:

Status: Patch Available  (was: In Progress)

> PutSQL batch update error message should include the cause
> --
>
> Key: NIFI-4162
> URL: https://issues.apache.org/jira/browse/NIFI-4162
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura





[jira] [Commented] (NIFI-4162) PutSQL batch update error message should include the cause

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077848#comment-16077848
 ] 

ASF GitHub Bot commented on NIFI-4162:
--

GitHub user ijokarumawak opened a pull request:

https://github.com/apache/nifi/pull/1992

NIFI-4162: PutSQL batch update error message should include the cause

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijokarumawak/nifi nifi-4162

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1992.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1992


commit b8229fd5cc4ec5c6285478ce5214de1d5963084a
Author: Koji Kawamura 
Date:   2017-07-07T09:11:41Z

NIFI-4162: PutSQL batch update error message should include the cause




> PutSQL batch update error message should include the cause
> --
>
> Key: NIFI-4162
> URL: https://issues.apache.org/jira/browse/NIFI-4162
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura






[jira] [Created] (NIFI-4162) PutSQL batch update error message should include the cause

2017-07-07 Thread Koji Kawamura (JIRA)
Koji Kawamura created NIFI-4162:
---

 Summary: PutSQL batch update error message should include the cause
 Key: NIFI-4162
 URL: https://issues.apache.org/jira/browse/NIFI-4162
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.0.0
Reporter: Koji Kawamura
Assignee: Koji Kawamura


When PutSQL executes SQL in batch mode and an exception is thrown, it logs 
the following error message:

{code}
2017-07-07 18:01:38,646 ERROR [Timer-Driven Process Thread-1] 
o.apache.nifi.processors.standard.PutSQL 
PutSQL[id=1c3d2a94-015d-1000-397a-250d18d9f4ad] Failed to update database due 
to a failed batch update. There were a total of 1 FlowFiles that failed, 0 that 
succeeded, and 0 that were not execute and will be routed to retry;
{code}

It doesn't use the thrown Exception, so it's difficult for users to understand 
what caused the error. The exception contains useful information. If we logged 
the exception, users can see the following log:

{code}
2017-07-07 18:05:17,155 ERROR [Timer-Driven Process Thread-4] 
o.apache.nifi.processors.standard.PutSQL 
PutSQL[id=1c3d2a94-015d-1000-397a-250d18d9f4ad] Failed to update database due 
to a failed batch update, java.sql.BatchUpdateException: Duplicate entry '1' 
for key 'PRIMARY'. There were a total of 1 FlowFiles that failed, 0 that 
succeeded, and 0 that were not execute and will be routed to retry; : 
java.sql.BatchUpdateException: Duplicate entry '1' for key 'PRIMARY'
{code}
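For illustration, the improvement the ticket asks for can be sketched as a small helper that folds the thrown exception into the log message. This is a hypothetical, simplified sketch, not the actual PutSQL source; the class and method names are made up for the example.

```java
import java.sql.BatchUpdateException;

// Minimal sketch (not NiFi's PutSQL code) of building a batch-failure error
// message that includes the thrown exception, so the root cause (e.g. a
// duplicate-key violation) is visible in the log line itself.
public class BatchErrorMessage {

    // Appending the cause's toString() surfaces both the exception class
    // and its message, matching the improved log shown in the ticket.
    static String build(int failed, int succeeded, int remaining, Exception cause) {
        return "Failed to update database due to a failed batch update, " + cause
                + ". There were a total of " + failed + " FlowFiles that failed, "
                + succeeded + " that succeeded, and " + remaining
                + " that were not executed and will be routed to retry";
    }

    public static void main(String[] args) {
        Exception e = new BatchUpdateException(
                "Duplicate entry '1' for key 'PRIMARY'", new int[0]);
        // The cause now appears inline instead of being silently dropped.
        System.out.println(build(1, 0, 0, e));
    }
}
```

Without the cause in the message, an operator only sees the FlowFile counts; with it, the offending constraint is identifiable from the log alone.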





[jira] [Commented] (NIFI-3218) MockProcessSession should prevent transferring new FlowFile to input queue

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077780#comment-16077780
 ] 

ASF GitHub Bot commented on NIFI-3218:
--

Github user m-hogue commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1988#discussion_r126092307
  
--- Diff: 
nifi-mock/src/main/java/org/apache/nifi/util/MockProcessSession.java ---
@@ -756,6 +756,13 @@ public void transfer(FlowFile flowFile) {
 throw new IllegalArgumentException("I only accept 
MockFlowFile");
 }
 
+// if the flowfile provided was created in this session (i.e. it's 
in currentVersions),
+// then throw an exception indicating that you can't transfer 
flowfiles back to self.
+// this mimics the behavior of StandardProcessSession
+if(currentVersions.get(flowFile.getId()) != null) {
--- End diff --

@jskora thanks for the comment. That makes sense. I'll update the PR 
accordingly. Much appreciated. 
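The guard discussed in the diff above can be sketched outside NiFi as follows. SketchSession and FlowFileStub are hypothetical stand-ins, not NiFi classes; the point is the currentVersions check that rejects transferring a flowfile back to the session that created it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified model of the proposed MockProcessSession change:
// a session remembers the flowfiles it created, and transfer() without a
// relationship rejects them, mimicking StandardProcessSession's behavior.
public class SketchSession {

    static class FlowFileStub {
        final long id;
        FlowFileStub(long id) { this.id = id; }
    }

    private final Map<Long, FlowFileStub> currentVersions = new HashMap<>();

    FlowFileStub create(long id) {
        FlowFileStub ff = new FlowFileStub(id);
        currentVersions.put(id, ff); // track that this session created it
        return ff;
    }

    // Transfer with no relationship: illegal for flowfiles created in this
    // session, since they cannot be routed back to the input queue.
    void transfer(FlowFileStub ff) {
        if (currentVersions.get(ff.id) != null) {
            throw new IllegalArgumentException(
                    "Cannot transfer FlowFiles created in this session back to self");
        }
        // otherwise: re-queue the incoming flowfile (omitted in this sketch)
    }

    public static void main(String[] args) {
        SketchSession session = new SketchSession();
        FlowFileStub created = session.create(1L);
        try {
            session.transfer(created); // created here: rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        session.transfer(new FlowFileStub(2L)); // not created here: accepted
    }
}
```

This mirrors why the unit-test mock should fail the same way the real session does: a test that transfers a newly created flowfile without a relationship would otherwise pass in the mock and fail at runtime.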


> MockProcessSession should prevent transferring new FlowFile to input queue
> --
>
> Key: NIFI-3218
> URL: https://issues.apache.org/jira/browse/NIFI-3218
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0, 0.8.0
>Reporter: Joe Skora
>Assignee: Michael Hogue
>
> StandardProcessSession.transfer() throws an exception if called with a newly 
> created FlowFile and no relationship. MockProcessSession should behave 
> similarly.






[jira] [Commented] (NIFI-3281) Error on passing 'ftp.listing.user' from ListFTP to FetchSFTP

2017-07-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077768#comment-16077768
 ] 

ASF GitHub Bot commented on NIFI-3281:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1974#discussion_r126089783
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListFileTransfer.java
 ---
@@ -94,7 +94,7 @@
 
 @Override
 protected String getPath(final ProcessContext context) {
-return context.getProperty(REMOTE_PATH).getValue();
+return 
context.getProperty(REMOTE_PATH).evaluateAttributeExpressions().getValue();
--- End diff --

You're right, I was thinking it was FileTransfer. Thanks!
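The one-line fix in the diff swaps the raw property value for its Expression Language evaluation. A toy substitution engine (purely illustrative; not NiFi's EL implementation, and ElSketch is a made-up name) shows why that matters for a Remote Path like `/data/${env}/incoming`:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of the difference between reading a property value
// verbatim and evaluating ${...} expressions in it first.
public class ElSketch {
    private static final Pattern EL = Pattern.compile("\\$\\{(\\w+)\\}");

    // Returns the raw value unchanged, as getValue() alone would.
    static String raw(String value) {
        return value;
    }

    // Substitutes ${name} references from a variable map, loosely analogous
    // to what evaluateAttributeExpressions() does before getValue().
    static String evaluated(String value, Map<String, String> vars) {
        Matcher m = EL.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb,
                    Matcher.quoteReplacement(vars.getOrDefault(m.group(1), "")));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String remotePath = "/data/${env}/incoming";
        Map<String, String> vars = Map.of("env", "prod");
        System.out.println(raw(remotePath));             // unresolved reference
        System.out.println(evaluated(remotePath, vars)); // usable path
    }
}
```

Without evaluation, the unresolved `${env}` literal would be handed to the file-transfer client, which is the same failure mode as the empty `ftp.listing.user` reported in the ticket.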


> Error on passing 'ftp.listing.user' from ListFTP to FetchSFTP
> -
>
> Key: NIFI-3281
> URL: https://issues.apache.org/jira/browse/NIFI-3281
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.1
>Reporter: Jakhongir Ashrapov
>Assignee: Pierre Villard
>Priority: Minor
>
> Cannot get `ftp.listing.user` as EL in FetchFTP when listing files with 
> ListFTP. The following exception is thrown:
> IOException: Could not login for user ''




