[jira] [Commented] (NIFI-2258) Resolve Asciidocs for admin guide formatting warnings

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376359#comment-15376359
 ] 

ASF GitHub Bot commented on NIFI-2258:
--

GitHub user joewitt opened a pull request:

https://github.com/apache/nifi/pull/648

NIFI-2258 resolved formatting issues causing build warnings



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joewitt/incubator-nifi NIFI-2258

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/648.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #648


commit 79ef7d6d74bd8cace12d329a62e65e6ab24f8408
Author: joewitt 
Date:   2016-07-14T05:51:57Z

NIFI-2258 resolved formatting issues causing build warnings




> Resolve Asciidocs for admin guide formatting warnings
> -
>
> Key: NIFI-2258
> URL: https://issues.apache.org/jira/browse/NIFI-2258
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Trivial
> Fix For: 1.0.0
>
>
> asciidoctor: WARNING: administration-guide.adoc: line 1139: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1175: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1190: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1201: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1216: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1233: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1246: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1271: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1279: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1288: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1320: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1327: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1347: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1364: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1379: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1403: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1421: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1435: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1456: section title out 
> of sequence: expected level 2, got level 3
> asciidoctor: WARNING: administration-guide.adoc: line 1477: section title out 
> of sequence: expected level 2, got level 3
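The warnings quoted above come from Asciidoctor's section-nesting check: a level-3 title (`====`) appears where a level-2 title (`===`) is expected, i.e. a heading level was skipped. A minimal illustration (the section titles here are hypothetical, not from the actual guide):

```asciidoc
== Security Configuration

// Skipping a level under a `==` section triggers:
//   "section title out of sequence: expected level 2, got level 3"
==== TLS Properties

// Correct: the next nesting level under `==` is `===`
=== TLS Properties
```

The fix is simply to demote or promote the offending titles so each section is exactly one level deeper than its parent.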



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #648: NIFI-2258 resolved formatting issues causing build w...

2016-07-13 Thread joewitt
GitHub user joewitt opened a pull request:

https://github.com/apache/nifi/pull/648

NIFI-2258 resolved formatting issues causing build warnings



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joewitt/incubator-nifi NIFI-2258

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/648.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #648


commit 79ef7d6d74bd8cace12d329a62e65e6ab24f8408
Author: joewitt 
Date:   2016-07-14T05:51:57Z

NIFI-2258 resolved formatting issues causing build warnings




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-2208) Support Custom Properties in Expression Language - 1.x baseline

2016-07-13 Thread Yolanda M. Davis (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yolanda M. Davis updated NIFI-2208:
---
Status: Patch Available  (was: In Progress)

PR submitted for this fix. Blocking issue is included in this fix: 
https://github.com/apache/nifi/pull/571

> Support Custom Properties in Expression Language - 1.x baseline
> ---
>
> Key: NIFI-2208
> URL: https://issues.apache.org/jira/browse/NIFI-2208
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Yolanda M. Davis
>Priority: Blocker
> Fix For: 1.0.0
>
>
> NIFI-1974 addressed this for the 0.x baseline but the PR does not apply 
> cleanly to the 1.x baseline. Creating a separate JIRA for 1.x so that we can 
> close out NIFI-1974 since 0.7.0 is ready to be released.





[jira] [Updated] (NIFI-2057) If variable registry cannot be loaded because one file cannot be found, none of the files are loaded

2016-07-13 Thread Yolanda M. Davis (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yolanda M. Davis updated NIFI-2057:
---
Affects Version/s: 1.0.0
   Status: Patch Available  (was: In Progress)

Patch is available in the latest PR which merges NIFI-1974 into 1.0.0: 
https://github.com/apache/nifi/pull/571

> If variable registry cannot be loaded because one file cannot be found, none 
> of the files are loaded
> 
>
> Key: NIFI-2057
> URL: https://issues.apache.org/jira/browse/NIFI-2057
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Mark Payne
>Assignee: Yolanda M. Davis
> Fix For: 1.0.0
>
>
> If the value specified in "nifi.variable.registry.properties" contains 
> multiple filenames and one of the files cannot be found, none of the 
> properties files are loaded. This should be modified so that all files 
> that can be found are loaded, and errors are logged for those that cannot 
> be found or accessed.
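The tolerant loading behavior described above could be sketched as follows. This is a hypothetical Python helper for illustration only; NiFi's actual implementation is in Java and differs in detail.

```python
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("variable-registry")


def load_variable_registry(paths):
    """Load every properties file that exists; log an error and skip
    any file that cannot be found, instead of failing the whole load."""
    registry = {}
    for p in paths:
        path = Path(p)
        if not path.is_file():
            log.error("Variable registry file not found, skipping: %s", p)
            continue
        for line in path.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            key, _, value = line.partition("=")
            registry[key.strip()] = value.strip()
    return registry
```

A missing file then costs one logged error rather than an empty registry.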





[jira] [Commented] (NIFI-1157) Remove deprecated classes and methods

2016-07-13 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376336#comment-15376336
 ] 

Joseph Witt commented on NIFI-1157:
---

This JIRA NIFI-1157 builds upon the findings in NIFI-1307.  This PR then 
addresses both JIRAs: https://github.com/apache/nifi/pull/647

> Remove deprecated classes and methods
> -
>
> Key: NIFI-1157
> URL: https://issues.apache.org/jira/browse/NIFI-1157
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Tony Kurc
>Assignee: Joseph Witt
>Priority: Minor
> Fix For: 1.0.0
>
>






[jira] [Updated] (NIFI-1157) Remove deprecated classes and methods

2016-07-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-1157:
--
Status: Patch Available  (was: Open)

https://github.com/apache/nifi/pull/647

> Remove deprecated classes and methods
> -
>
> Key: NIFI-1157
> URL: https://issues.apache.org/jira/browse/NIFI-1157
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Tony Kurc
>Assignee: Joseph Witt
>Priority: Minor
> Fix For: 1.0.0
>
>






[jira] [Updated] (NIFI-1307) Un-deprecate FlowFile.getId()

2016-07-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-1307:
--
Status: Patch Available  (was: Reopened)

https://github.com/apache/nifi/pull/647

Will be resolved in NIFI-1157.

> Un-deprecate FlowFile.getId()
> -
>
> Key: NIFI-1307
> URL: https://issues.apache.org/jira/browse/NIFI-1307
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Joseph Witt
> Fix For: 1.0.0
>
>
> In 0.4.0, we deprecated the getId() method of FlowFile because it was no 
> longer being used in any substantive way. However, we have found a bug in 
> some of the Prioritizers (addressed in NIFI-1279) that is being addressed by 
> using this method.





[GitHub] nifi pull request #647: NIFI-1157 NIFI-1307 Resolve deprecated classes and m...

2016-07-13 Thread joewitt
GitHub user joewitt opened a pull request:

https://github.com/apache/nifi/pull/647

NIFI-1157 NIFI-1307 Resolve deprecated classes and methods

The PR is three commits that build on each other.  I kept them separate 
because each is a significant but complete step toward the goals of 
NIFI-1307 and NIFI-1157.  NIFI-1307 removes the deprecation marking for 
FlowFile.getId(), which the framework relies on deeply, including for how 
data is serialized/deserialized.  NIFI-1157 then builds on that basis, first 
resolving all deprecation markings in nifi-api and dealing with the ripple 
effects throughout the codebase.  Finally, a search was conducted across the 
entire codebase to find deprecated classes and methods in our code, remove 
them, and resolve any references to them.  Some deprecated bits remain, but 
they are left intentionally (as in the case of the legacy NiFi KDF, for 
example) or belong to third-party libraries whose deprecated classes we 
presently use and which were not part of the scope of this effort.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joewitt/incubator-nifi NIFI-1307

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/647.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #647


commit 3f4c3bd1ccc544edac9b6f4937519bfdab32ba44
Author: joewitt 
Date:   2016-07-13T22:42:10Z

NIFI-1307 removed deprecation indication for getId and provided better API 
documentation

commit a2ab663a53150169a754012479df002aa9e43a89
Author: joewitt 
Date:   2016-07-13T23:36:03Z

NIFI-1157 resolved deprecated nifi-api items and ripple effects

commit 9ad505af68c3cb3cf6a316ed6aa6ba4ba991610b
Author: joewitt 
Date:   2016-07-14T04:51:04Z

NIFI-1157 searched for and resolved all remaining references to deprecated 
items that were clearly addressable.






[jira] [Commented] (NIFI-2115) Enhanced About Box Version Information

2016-07-13 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376321#comment-15376321
 ] 

Joseph Witt commented on NIFI-2115:
---

[~jameswing] Where is the sample UI?  Was that to be a png attached someplace?  
If you'd like this to be in 1.0 and believe the PR is close, please go ahead, 
set the fix version to 1.0, and mark the patch as submitted.  The only other 
question I have at this time is that this information represents a particular 
node that was queried.  What is the plan when it is a cluster of machines that 
could have different values?

> Enhanced About Box Version Information
> --
>
> Key: NIFI-2115
> URL: https://issues.apache.org/jira/browse/NIFI-2115
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI, Tools and Build
>Affects Versions: 1.0.0
>Reporter: James Wing
>Assignee: James Wing
>Priority: Minor
>
> The UI's About dialog and underlying API provide the version of NiFi, like 
> "0.7.0-SNAPSHOT".  For many bug reports and troubleshooting requests, this is 
> not very precise, especially around rapidly changing code or 
> platform-dependent behavior.  It would help if NiFi captured and displayed 
> additional information:  
> * NiFi build commit hash
> * NiFi build branch
> * NiFi build date/time
> * Java Version
> * Java Vendor (Oracle, OpenJDK)
> * OS
> Then a simple copy/paste from the about box would provide very specific 
> information about a particular NiFi installation and which code it was 
> derived from.





[jira] [Commented] (NIFI-2115) Enhanced About Box Version Information

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376316#comment-15376316
 ] 

ASF GitHub Bot commented on NIFI-2115:
--

Github user jvwing commented on the issue:

https://github.com/apache/nifi/pull/583
  
I force-pushed a substantial update/rewrite.  Changes from the previous 
implementation include:

* Moving most of the logic to the nifi-assembly pom rather than the parent 
pom
* Including the Tag property with the build information (typically HEAD for 
snapshot builds, RC tags for release builds)
* Handling formal releases by using Tag, ignoring revision and branch.
* More gracefully handling a source build outside git by using empty 
revision and branch
* Updated the about box UI to selectively include information based on 
availability
 - Tag displayed when not HEAD
 - Revision and branch displayed when not empty
* Included a sample UI using the System Diagnostics dialog.  I'm not 
suggesting we need both About and System Diagnostics, just showing sample 
options for discussion.
* Rebased on recent master
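The display rules in the list above (Tag shown only when not HEAD; revision and branch only when not empty) could be sketched like this. The helper is hypothetical, purely to illustrate the selective-display logic, and is not the actual NiFi UI code:

```python
def about_box_lines(version, tag="HEAD", revision="", branch=""):
    """Build the about-box detail lines, including each field
    only when it carries useful information."""
    lines = [f"NiFi {version}"]
    if tag and tag != "HEAD":   # tag displayed when not HEAD
        lines.append(f"Tag: {tag}")
    if revision:                # revision displayed when not empty
        lines.append(f"Revision: {revision}")
    if branch:                  # branch displayed when not empty
        lines.append(f"Branch: {branch}")
    return lines
```

A snapshot build would show revision and branch; a formal release (built from an RC tag, with revision/branch suppressed) would show only the tag.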


> Enhanced About Box Version Information
> --
>
> Key: NIFI-2115
> URL: https://issues.apache.org/jira/browse/NIFI-2115
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI, Tools and Build
>Affects Versions: 1.0.0
>Reporter: James Wing
>Assignee: James Wing
>Priority: Minor
>
> The UI's About dialog and underlying API provide the version of NiFi, like 
> "0.7.0-SNAPSHOT".  For many bug reports and troubleshooting requests, this is 
> not very precise, especially around rapidly changing code or 
> platform-dependent behavior.  It would help if NiFi captured and displayed 
> additional information:  
> * NiFi build commit hash
> * NiFi build branch
> * NiFi build date/time
> * Java Version
> * Java Vendor (Oracle, OpenJDK)
> * OS
> Then a simple copy/paste from the about box would provide very specific 
> information about a particular NiFi installation and which code it was 
> derived from.





[GitHub] nifi issue #583: NIFI-2115 Detailed Version Info for About Box

2016-07-13 Thread jvwing
Github user jvwing commented on the issue:

https://github.com/apache/nifi/pull/583
  
I force-pushed a substantial update/rewrite.  Changes from the previous 
implementation include:

* Moving most of the logic to the nifi-assembly pom rather than the parent 
pom
* Including the Tag property with the build information (typically HEAD for 
snapshot builds, RC tags for release builds)
* Handling formal releases by using Tag, ignoring revision and branch.
* More gracefully handling a source build outside git by using empty 
revision and branch
* Updated the about box UI to selectively include information based on 
availability
 - Tag displayed when not HEAD
 - Revision and branch displayed when not empty
* Included a sample UI using the System Diagnostics dialog.  I'm not 
suggesting we need both About and System Diagnostics, just showing sample 
options for discussion.
* Rebased on recent master




[jira] [Created] (NIFI-2258) Resolve Asciidocs for admin guide formatting warnings

2016-07-13 Thread Joseph Witt (JIRA)
Joseph Witt created NIFI-2258:
-

 Summary: Resolve Asciidocs for admin guide formatting warnings
 Key: NIFI-2258
 URL: https://issues.apache.org/jira/browse/NIFI-2258
 Project: Apache NiFi
  Issue Type: Bug
  Components: Documentation & Website
Reporter: Joseph Witt
Assignee: Joseph Witt
Priority: Trivial
 Fix For: 1.0.0


asciidoctor: WARNING: administration-guide.adoc: line 1139: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1175: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1190: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1201: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1216: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1233: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1246: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1271: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1279: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1288: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1320: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1327: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1347: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1364: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1379: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1403: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1421: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1435: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1456: section title out 
of sequence: expected level 2, got level 3
asciidoctor: WARNING: administration-guide.adoc: line 1477: section title out 
of sequence: expected level 2, got level 3





[jira] [Commented] (NIFI-2026) nifi-hadoop-libraries-nar should use profiles to point to different hadoop distro artifacts

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376303#comment-15376303
 ] 

ASF GitHub Bot commented on NIFI-2026:
--

Github user tdunning commented on the issue:

https://github.com/apache/nifi/pull/475
  
I got mentioned a few responses ago, so I feel that I ought to answer.

Either approach to supporting different distributions is a really good idea 
since it makes it easier for users, but doesn't impinge on the developers in 
any important way. Whether profiles are used for this or just repositories with 
versions selected by the users is pretty much a toss-up (from the user's point 
of view).

One of the major tenets of Apache is business friendly open source, so 
facilitating consumption of Apache software in this way by vendors (who really 
do add lots of value, frankly) is very much aligned with core Apache goals.



> nifi-hadoop-libraries-nar should use profiles to point to different hadoop 
> distro artifacts
> ---
>
> Key: NIFI-2026
> URL: https://issues.apache.org/jira/browse/NIFI-2026
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre
>
> Raising a JIRA issue as discussed with [~mattyb149] as part of PR-475. 
> Users using particular Hadoop versions may struggle to use *HDFS against a 
> cluster running proprietary or particular versions of HDFS.
> Therefore, until we find a cleaner way of a BYO Hadoop mechanism (as suggested 
> in NIFI-710), we should consider introducing Maven profiles to support 
> different Hadoop libraries, enabling users to compile against them.
> This should cause no changes to default behaviour, just eliminating the need 
> to clone, modify, build and copy NAR bundles over standard NiFi artifacts.
> Unless the profile is explicitly requested, the build will still include just 
> the Apache licensed artifacts.
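A Maven profile along the lines the description suggests could look like this. This is a hypothetical sketch: the profile id, property name, version, and repository URL are illustrative, not taken from the actual PR.

```xml
<!-- Activated with: mvn clean install -Pvendor-hadoop -->
<profiles>
  <profile>
    <id>vendor-hadoop</id>
    <properties>
      <!-- Overrides the default Apache Hadoop version -->
      <hadoop.version>2.7.0.vendor-1</hadoop.version>
    </properties>
    <repositories>
      <repository>
        <id>vendor-releases</id>
        <url>https://repo.example.com/releases</url>
      </repository>
    </repositories>
  </profile>
</profiles>
```

Without `-Pvendor-hadoop`, the build behaves exactly as before and pulls only the Apache-licensed artifacts.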





[GitHub] nifi issue #475: NIFI-2026 - Add Maven profile to compile nifi-hadoop-librar...

2016-07-13 Thread tdunning
Github user tdunning commented on the issue:

https://github.com/apache/nifi/pull/475
  
I got mentioned a few responses ago, so I feel that I ought to answer.

Either approach to supporting different distributions is a really good idea 
since it makes it easier for users, but doesn't impinge on the developers in 
any important way. Whether profiles are used for this or just repositories with 
versions selected by the users is pretty much a toss-up (from the user's point 
of view).

One of the major tenets of Apache is business friendly open source, so 
facilitating consumption of Apache software in this way by vendors (who really 
do add lots of value, frankly) is very much aligned with core Apache goals.





[jira] [Updated] (NIFI-2257) Refresh UpdateAttribute Processor advanced shell

2016-07-13 Thread Scott Aslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan updated NIFI-2257:
--
Status: Patch Available  (was: In Progress)

> Refresh UpdateAttribute Processor advanced shell
> 
>
> Key: NIFI-2257
> URL: https://issues.apache.org/jira/browse/NIFI-2257
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>






[jira] [Assigned] (NIFI-2257) Refresh UpdateAttribute Processor advanced shell

2016-07-13 Thread Scott Aslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan reassigned NIFI-2257:
-

Assignee: Scott Aslan

> Refresh UpdateAttribute Processor advanced shell
> 
>
> Key: NIFI-2257
> URL: https://issues.apache.org/jira/browse/NIFI-2257
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>






[GitHub] nifi pull request #646: [NIFI-2257] refresh updateattribute processor advanc...

2016-07-13 Thread scottyaslan
GitHub user scottyaslan opened a pull request:

https://github.com/apache/nifi/pull/646

[NIFI-2257] refresh updateattribute processor advanced shell



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/scottyaslan/nifi responsiveDevBranch

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/646.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #646


commit 767cc424475460cc72d8e680e3b227c7da6ee299
Author: Scott Aslan 
Date:   2016-07-14T03:44:07Z

[NIFI-2257] refresh updateattribute processor advanced shell






[jira] [Created] (NIFI-2257) Refresh UpdateAttribute Processor advanced shell

2016-07-13 Thread Scott Aslan (JIRA)
Scott Aslan created NIFI-2257:
-

 Summary: Refresh UpdateAttribute Processor advanced shell
 Key: NIFI-2257
 URL: https://issues.apache.org/jira/browse/NIFI-2257
 Project: Apache NiFi
  Issue Type: Sub-task
Reporter: Scott Aslan








[jira] [Commented] (NIFI-2026) nifi-hadoop-libraries-nar should use profiles to point to different hadoop distro artifacts

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376228#comment-15376228
 ] 

ASF GitHub Bot commented on NIFI-2026:
--

Github user trixpan commented on the issue:

https://github.com/apache/nifi/pull/475
  
@mattyb149 agree. Will be happy to update the PR to reflect that 


> nifi-hadoop-libraries-nar should use profiles to point to different hadoop 
> distro artifacts
> ---
>
> Key: NIFI-2026
> URL: https://issues.apache.org/jira/browse/NIFI-2026
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre
>
> Raising a JIRA issue as discussed with [~mattyb149] as part of PR-475. 
> Users using particular Hadoop versions may struggle to use *HDFS against a 
> cluster running proprietary or particular versions of HDFS.
> Therefore, until we find a cleaner way of a BYO Hadoop mechanism (as suggested 
> in NIFI-710), we should consider introducing Maven profiles to support 
> different Hadoop libraries, enabling users to compile against them.
> This should cause no changes to default behaviour, just eliminating the need 
> to clone, modify, build and copy NAR bundles over standard NiFi artifacts.
> Unless the profile is explicitly requested, the build will still include just 
> the Apache licensed artifacts.





[jira] [Created] (NIFI-2256) Allow comments on processor to be edited while the processor is running

2016-07-13 Thread Stephane Maarek (JIRA)
Stephane Maarek created NIFI-2256:
-

 Summary: Allow comments on processor to be edited while the 
processor is running
 Key: NIFI-2256
 URL: https://issues.apache.org/jira/browse/NIFI-2256
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 0.6.1
Reporter: Stephane Maarek
Priority: Minor


When documenting a flow, it'd be great if the user was able to add / edit 
comments for the processors without interrupting them.





[jira] [Commented] (NIFI-1959) TailFile not ingesting data when tailed file is moved with no rolling pattern

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376188#comment-15376188
 ] 

ASF GitHub Bot commented on NIFI-1959:
--

Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/490
  
Hey @pvillard31, I'm gonna start reviewing this. I noticed it was 
initially committed back in June; sorry no one has reviewed it yet, but could 
you rebase? I know there are no conflicts, but a lot has happened on the 
master branch in the last month that could have an impact.


> TailFile not ingesting data when tailed file is moved with no rolling pattern
> -
>
> Key: NIFI-1959
> URL: https://issues.apache.org/jira/browse/NIFI-1959
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 0.6.1
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
> Fix For: 1.0.0
>
>
> In case "Rolling Filename Pattern" is not set by the user and the tailed file 
> is moved, the new file will not be tailed.
> Moreover, in that case, the processor is endlessly triggered without 
> ingesting data: it creates a lot of tasks and consumes CPU. The reason is that 
> it never enters the if statement at L448.
> A solution is to look at size() and lastUpdated() of the tailed file to 
> detect a "rollover". However, this won't allow the processor to ingest any 
> data added to the tailed file just before it was moved.
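The size()/lastUpdated() heuristic mentioned in the description could be sketched as follows. This is a hypothetical helper for illustration, not the actual TailFile code (which is Java):

```python
import os


def rolled_over(path, last_size, last_mtime):
    """Heuristic rollover check: assume the tailed file was replaced or
    truncated if it disappeared, shrank, or its mtime moved backwards."""
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return True  # file was moved away with no rolling pattern
    return st.st_size < last_size or st.st_mtime < last_mtime
```

As the description notes, this detects the swap but cannot recover bytes appended to the old file in the instant before it was moved.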





[GitHub] nifi issue #490: NIFI-1959 Added length and timestamp to detect rollover

2016-07-13 Thread JPercivall
Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/490
  
Hey @pvillard31, I'm gonna start reviewing this. I noticed it was 
initially committed back in June; sorry no one has reviewed it yet, but could 
you rebase? I know there are no conflicts, but a lot has happened on the 
master branch in the last month that could have an impact.




[jira] [Commented] (NIFI-1899) Create ListenSMTP & ExtractEmailAttachment processors

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376184#comment-15376184
 ] 

ASF GitHub Bot commented on NIFI-1899:
--

Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/483
  
@trixpan do you by chance have a template that I can use to validate the 
processors with?


> Create ListenSMTP & ExtractEmailAttachment processors
> -
>
> Key: NIFI-1899
> URL: https://issues.apache.org/jira/browse/NIFI-1899
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Andre
>






[GitHub] nifi issue #483: NIFI-1899 - Introduce ExtractEmailAttachments and ExtractEm...

2016-07-13 Thread JPercivall
Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/483
  
@trixpan do you by chance have a template that I can use to validate the 
processors with?




[jira] [Commented] (NIFI-1942) Create a processor to validate CSV against a user-supplied schema

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376183#comment-15376183
 ] 

ASF GitHub Bot commented on NIFI-1942:
--

Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/476
  
@pvillard31 I left a couple comments. Do you by chance have a template 
and/or example csv data I can use to validate the processor?


> Create a processor to validate CSV against a user-supplied schema
> -
>
> Key: NIFI-1942
> URL: https://issues.apache.org/jira/browse/NIFI-1942
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
> Fix For: 1.0.0
>
>
> In order to extend the set of "quality control" processors, it would be 
> interesting to have a processor validating CSV formatted flow files against a 
> user-specified schema.
> Flow files validated against the schema would be routed to the "valid" 
> relationship, while flow files failing validation would be routed to the 
> "invalid" relationship.
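The valid/invalid routing described in the ticket can be sketched with a toy schema check. This is hypothetical Java, not the eventual ValidateCsv processor; the ColType schema model and the bucket names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Toy CSV schema check (hypothetical; not the actual processor): each record
// is tested against a per-column type schema and routed to a "valid" or
// "invalid" bucket, mirroring the processor's two relationships.
public class CsvSchemaCheck {

    enum ColType { INT, STRING }

    static boolean matches(String[] row, ColType[] schema) {
        if (row.length != schema.length) {
            return false; // wrong column count fails validation outright
        }
        for (int i = 0; i < row.length; i++) {
            if (schema[i] == ColType.INT) {
                try {
                    Integer.parseInt(row[i].trim());
                } catch (NumberFormatException e) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        ColType[] schema = { ColType.INT, ColType.STRING };
        List<String> valid = new ArrayList<>();
        List<String> invalid = new ArrayList<>();
        for (String line : new String[] { "1,alice", "x,bob", "2,carol" }) {
            (matches(line.split(","), schema) ? valid : invalid).add(line);
        }
        System.out.println("valid=" + valid + " invalid=" + invalid);
        // prints: valid=[1,alice, 2,carol] invalid=[x,bob]
    }
}
```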





[GitHub] nifi pull request #575: NIFI-619: Make MonitorActivity more cluster friendly

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/575#discussion_r70739419
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/MonitorActivity.java
 ---
@@ -234,4 +361,10 @@ public void process(final OutputStream out) throws 
IOException {
 }
 }
 }
+
+@OnPrimaryNodeStateChange
+public void onPrimaryNodeChange(final PrimaryNodeState newState) {
+isPrimaryNode = (newState == 
PrimaryNodeState.ELECTED_PRIMARY_NODE);
--- End diff --

(adding for transparency) @ijokarumawak and I talked out of band. Current 
thinking is to add another variable to the Initialization Contexts so that, 
when the component is created, it can know whether it is clustered and whether 
it is on the primary node.
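The gap being discussed can be illustrated with mocked types (this is not NiFi's actual API; `PrimaryNodeState` here is a stand-in): a flag driven only by the state-change callback reports "not primary" until the first election event arrives, which is why the reviewers want cluster/primary information available at initialization.

```java
// Mocked illustration (not NiFi's actual API) of why a state-change callback
// alone is insufficient: before any election event fires, the flag silently
// reports "not primary" even on the node that is currently primary.
public class PrimaryFlag {

    enum PrimaryNodeState { ELECTED_PRIMARY_NODE, PRIMARY_NODE_REVOKED }

    private volatile boolean isPrimaryNode; // false until the first event

    public void onPrimaryNodeChange(PrimaryNodeState newState) {
        isPrimaryNode = (newState == PrimaryNodeState.ELECTED_PRIMARY_NODE);
    }

    public boolean isPrimary() {
        return isPrimaryNode;
    }

    public static void main(String[] args) {
        PrimaryFlag flag = new PrimaryFlag();
        System.out.println(flag.isPrimary()); // false: no event has fired yet
        flag.onPrimaryNodeChange(PrimaryNodeState.ELECTED_PRIMARY_NODE);
        System.out.println(flag.isPrimary()); // true after election
    }
}
```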




[jira] [Commented] (NIFI-619) update MonitorActivity processor to be cluster friendly

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376182#comment-15376182
 ] 

ASF GitHub Bot commented on NIFI-619:
-

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/575#discussion_r70739419
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/MonitorActivity.java
 ---
@@ -234,4 +361,10 @@ public void process(final OutputStream out) throws 
IOException {
 }
 }
 }
+
+@OnPrimaryNodeStateChange
+public void onPrimaryNodeChange(final PrimaryNodeState newState) {
+isPrimaryNode = (newState == 
PrimaryNodeState.ELECTED_PRIMARY_NODE);
--- End diff --

(adding for transparency) @ijokarumawak and I talked out of band. Current 
thinking is to add another variable to the Initialization Contexts so that, 
when the component is created, it can know whether it is clustered and whether 
it is on the primary node.


> update MonitorActivity processor to be cluster friendly
> ---
>
> Key: NIFI-619
> URL: https://issues.apache.org/jira/browse/NIFI-619
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Brandon DeVries
>Assignee: Koji Kawamura
>Priority: Minor
> Fix For: 1.0.0
>
>
> This processor should be able to be used to monitor activity across the 
> cluster.  In its current state, alerting is based on activity of a single 
> node, not the entire cluster.
> For example, in a 2 node cluster, if system A is getting data from a given 
> flow and system B is not, system B will alert for lack of activity even 
> though the flow is functioning "normally".
> The ideal behavior would be for an alert to be generated only if both 
> systems did not see data in the specified time.





[jira] [Commented] (NIFI-2142) Cache compiled XSLT in TransformXml

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376176#comment-15376176
 ] 

ASF GitHub Bot commented on NIFI-2142:
--

Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/609
  
@jfrazee I've made a couple comments. Looks good overall, mainly just 
documentation comments. Do you by chance have a template and/or some example 
xml/XSLT that I could use for testing to verify the changes?


> Cache compiled XSLT in TransformXml
> ---
>
> Key: NIFI-2142
> URL: https://issues.apache.org/jira/browse/NIFI-2142
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joey Frazee
> Fix For: 1.0.0
>
>
> TransformXml appears to be recompiling the XSLT on every onTrigger event, 
> which is slow. It should cache the compiled stylesheets.
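The caching the ticket asks for can be sketched like this (a minimal, hypothetical example, not the PR's actual code): compile each stylesheet once to a `javax.xml.transform.Templates`, which is thread-safe and reusable, and create a cheap per-use `Transformer` from the cached compilation.

```java
import java.io.StringReader;
import java.io.StringWriter;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.xml.transform.Templates;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Minimal sketch of the caching idea (not TransformXml's actual code):
// compile each stylesheet to a thread-safe Templates object once, then
// create a lightweight Transformer per invocation from the cached result.
public class XsltCache {

    private static final TransformerFactory FACTORY = TransformerFactory.newInstance();
    private final Map<String, Templates> cache = new ConcurrentHashMap<>();

    public String transform(String xsltSource, String xml) throws TransformerException {
        Templates compiled = cache.computeIfAbsent(xsltSource, src -> {
            try {
                // Expensive step: done once per distinct stylesheet.
                return FACTORY.newTemplates(new StreamSource(new StringReader(src)));
            } catch (TransformerConfigurationException e) {
                throw new IllegalStateException(e);
            }
        });
        StringWriter out = new StringWriter();
        compiled.newTransformer()
                .transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xslt = "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
                + "<xsl:output method='text'/>"
                + "<xsl:template match='/greeting'><xsl:value-of select='.'/></xsl:template>"
                + "</xsl:stylesheet>";
        XsltCache cache = new XsltCache();
        System.out.println(cache.transform(xslt, "<greeting>hi</greeting>"));    // hi
        System.out.println(cache.transform(xslt, "<greeting>again</greeting>")); // reuses compilation
    }
}
```

A production version would bound the cache by size and expiry, which is what the PR's Cache size and Cache duration properties discussed below control.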





[GitHub] nifi issue #609: NIFI-2142 Cache compiled XSLT in TransformXml

2016-07-13 Thread JPercivall
Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/609
  
@jfrazee I've made a couple comments. Looks good overall, mainly just 
documentation comments. Do you by chance have a template and/or some example 
xml/XSLT that I could use for testing to verify the changes?




[GitHub] nifi pull request #609: NIFI-2142 Cache compiled XSLT in TransformXml

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/609#discussion_r70738325
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TransformXml.java
 ---
@@ -76,25 +84,59 @@
 .name("XSLT file name")
 .description("Provides the name (including full path) of the 
XSLT file to apply to the flowfile XML content.")
 .required(true)
+.expressionLanguageSupported(true)
 .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor INDENT_OUTPUT = new 
PropertyDescriptor.Builder()
+.name("indent-output")
+.displayName("Indent")
+.description("Whether or not to indent the output.")
+.required(true)
+.defaultValue("true")
+.allowableValues("true", "false")
+.addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CACHE_SIZE = new 
PropertyDescriptor.Builder()
+.name("cache-size")
+.displayName("Cache size")
+.description("Maximum size of the stylesheet cache.")
+.required(true)
+.defaultValue("100")
+
.addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CACHE_DURATION = new 
PropertyDescriptor.Builder()
+.name("cache-duration")
+.displayName("Cache duration")
+.description("How long to keep stylesheets in the cache.")
--- End diff --

This property is misleading. I initially took it to mean how long a 
stylesheet will exist in the cache (deleted 60s after insertion). What it 
actually is is an expire-after-access time: a stylesheet is removed once 60s 
pass without it being used.
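The two expiry semantics being contrasted can be shown with a toy clock-based sketch (plain Java with a fake clock; not Guava's or NiFi's implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the distinction: "expire after access" resets the
// clock on every read, while a fixed TTL evicts relative to insertion time
// regardless of use. Timestamps are passed in explicitly as a fake clock.
public class ExpiryDemo {

    static final long DURATION_MS = 60_000;
    final Map<String, long[]> entries = new HashMap<>(); // {insertedAt, lastAccessedAt}

    void put(String key, long now) {
        entries.put(key, new long[] { now, now });
    }

    boolean get(String key, long now) {
        long[] t = entries.get(key);
        if (t == null) {
            return false;
        }
        t[1] = now; // access resets the expire-after-access clock
        return true;
    }

    boolean expiredAfterAccess(String key, long now) {
        long[] t = entries.get(key);
        return t == null || now - t[1] > DURATION_MS;
    }

    boolean expiredTtl(String key, long now) {
        long[] t = entries.get(key);
        return t == null || now - t[0] > DURATION_MS;
    }

    public static void main(String[] args) {
        ExpiryDemo d = new ExpiryDemo();
        d.put("xslt", 0);
        d.get("xslt", 50_000);                                    // used at t=50s
        System.out.println(d.expiredAfterAccess("xslt", 70_000)); // false: accessed 20s ago
        System.out.println(d.expiredTtl("xslt", 70_000));         // true: inserted 70s ago
    }
}
```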




[GitHub] nifi issue #475: NIFI-2026 - Add Maven profile to compile nifi-hadoop-librar...

2016-07-13 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/475
  
Since other Apache projects are taking the stance of including the 
repositories in the top-level POM, I think NiFi should too. These would include 
(at a minimum) the MapR, Cloudera, and Hortonworks repositories. However IMO we 
should draw the line at profiles, and instead enable variables like 
hadoop.version, hive.version, etc. and add a Wiki page or some other form of 
community documentation showing how to build a vendor-specific distribution of 
NiFi. Thoughts?




[jira] [Commented] (NIFI-2026) nifi-hadoop-libraries-nar should use profiles to point to different hadoop distro artifacts

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376159#comment-15376159
 ] 

ASF GitHub Bot commented on NIFI-2026:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/475
  
Since other Apache projects are taking the stance of including the 
repositories in the top-level POM, I think NiFi should too. These would include 
(at a minimum) the MapR, Cloudera, and Hortonworks repositories. However IMO we 
should draw the line at profiles, and instead enable variables like 
hadoop.version, hive.version, etc. and add a Wiki page or some other form of 
community documentation showing how to build a vendor-specific distribution of 
NiFi. Thoughts?


> nifi-hadoop-libraries-nar should use profiles to point to different hadoop 
> distro artifacts
> ---
>
> Key: NIFI-2026
> URL: https://issues.apache.org/jira/browse/NIFI-2026
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre
>
> Raising a JIRA issue as discussed with [~mattyb149] as part of PR-475. 
> Users using particular Hadoop versions may struggle to use *HDFS against a 
> cluster running proprietary or particular versions of HDFS.
> Therefore, until we find a cleaner BYO-Hadoop mechanism (as suggested 
> in NIFI-710), we should consider introducing Maven profiles to support 
> different Hadoop libraries, enabling users to compile against them.
> This should cause no changes to default behaviour, just eliminating the need 
> to clone, modify, build and copy NAR bundles over standard NiFi artifacts.
> Unless a profile is explicitly requested, the build will still include just 
> the Apache-licensed artifacts.





[jira] [Updated] (NIFI-2157) Add GenerateTableFetch processor

2016-07-13 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-2157:
---
Status: Patch Available  (was: In Progress)

> Add GenerateTableFetch processor
> 
>
> Key: NIFI-2157
> URL: https://issues.apache.org/jira/browse/NIFI-2157
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Matt Burgess
>Assignee: Matt Burgess
> Fix For: 1.0.0
>
>
> This processor would presumably operate like QueryDatabaseTable, except it 
> will contain a "Partition Size" property, and rather than executing the SQL 
> statement(s) to fetch rows, it would generate flow files containing SQL 
> statements that will select rows from a table. If the partition size is 
> indicated, then the SELECT statements will refer to a range of rows, such 
> that each statement will grab only a portion of the table. If max-value 
> columns are specified, then only rows whose observed values for those columns 
> exceed the current maximum will be fetched (i.e. like QueryDatabaseTable). 
> These flow files (due to NIFI-1973) can be passed to ExecuteSQL processors 
> for the actual fetching of rows, and ExecuteSQL can be distributed across 
> cluster nodes and/or multiple tasks. These features enable distributed 
> incremental fetching of rows from database table(s).
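The partitioning idea described above can be sketched as follows. This is a hypothetical illustration, not the actual GenerateTableFetch code: the real processor generates database-specific paging SQL via its DatabaseAdapter, whereas this sketch assumes simple LIMIT/OFFSET paging.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical planner: given a row count and a partition size, emit one
// SELECT per page so that downstream ExecuteSQL tasks can each fetch a
// disjoint slice of the table.
public class TableFetchPlanner {

    static List<String> plan(String table, String orderColumn, long rowCount, int partitionSize) {
        List<String> statements = new ArrayList<>();
        for (long offset = 0; offset < rowCount; offset += partitionSize) {
            statements.add(String.format(
                    "SELECT * FROM %s ORDER BY %s LIMIT %d OFFSET %d",
                    table, orderColumn, (long) partitionSize, offset));
        }
        return statements;
    }

    public static void main(String[] args) {
        // 250 rows in pages of 100 -> 3 statements (the last covers 50 rows)
        plan("users", "id", 250, 100).forEach(System.out::println);
    }
}
```

Each generated statement would become one flow file, so the actual fetches can be distributed across cluster nodes and tasks as the description explains.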





[jira] [Commented] (NIFI-2157) Add GenerateTableFetch processor

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376149#comment-15376149
 ] 

ASF GitHub Bot commented on NIFI-2157:
--

GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/645

NIFI-2157: Add GenerateTableFetch processor

Refactored common code out of QueryDatabaseTable into an abstract base 
class, which involved some refactoring of QueryDatabaseTable as well as the 
new GenerateTableFetch processor.

This includes the addition of the DatabaseAdapter interface and its 
implementations; going forward, this isolates database-specific code behind 
an interface for use by database-related processors.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-2157_new

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/645.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #645


commit 7b8fd3728f21f6a40b91e401d14738ea9a32fc9e
Author: Matt Burgess 
Date:   2016-07-14T01:29:51Z

NIFI-2157: Add GenerateTableFetch processor




> Add GenerateTableFetch processor
> 
>
> Key: NIFI-2157
> URL: https://issues.apache.org/jira/browse/NIFI-2157
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Matt Burgess
>Assignee: Matt Burgess
> Fix For: 1.0.0
>
>
> This processor would presumably operate like QueryDatabaseTable, except it 
> will contain a "Partition Size" property, and rather than executing the SQL 
> statement(s) to fetch rows, it would generate flow files containing SQL 
> statements that will select rows from a table. If the partition size is 
> indicated, then the SELECT statements will refer to a range of rows, such 
> that each statement will grab only a portion of the table. If max-value 
> columns are specified, then only rows whose observed values for those columns 
> exceed the current maximum will be fetched (i.e. like QueryDatabaseTable). 
> These flow files (due to NIFI-1973) can be passed to ExecuteSQL processors 
> for the actual fetching of rows, and ExecuteSQL can be distributed across 
> cluster nodes and/or multiple tasks. These features enable distributed 
> incremental fetching of rows from database table(s).





[GitHub] nifi pull request #645: NIFI-2157: Add GenerateTableFetch processor

2016-07-13 Thread mattyb149
GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/645

NIFI-2157: Add GenerateTableFetch processor

Refactored common code out of QueryDatabaseTable into an abstract base 
class, which involved some refactoring of QueryDatabaseTable as well as the 
new GenerateTableFetch processor.

This includes the addition of the DatabaseAdapter interface and its 
implementations; going forward, this isolates database-specific code behind 
an interface for use by database-related processors.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-2157_new

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/645.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #645


commit 7b8fd3728f21f6a40b91e401d14738ea9a32fc9e
Author: Matt Burgess 
Date:   2016-07-14T01:29:51Z

NIFI-2157: Add GenerateTableFetch processor






[jira] [Commented] (NIFI-2142) Cache compiled XSLT in TransformXml

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376145#comment-15376145
 ] 

ASF GitHub Bot commented on NIFI-2142:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/609#discussion_r70736401
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TransformXml.java
 ---
@@ -76,25 +84,59 @@
 .name("XSLT file name")
 .description("Provides the name (including full path) of the 
XSLT file to apply to the flowfile XML content.")
 .required(true)
+.expressionLanguageSupported(true)
 .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor INDENT_OUTPUT = new 
PropertyDescriptor.Builder()
+.name("indent-output")
+.displayName("Indent")
+.description("Whether or not to indent the output.")
+.required(true)
+.defaultValue("true")
+.allowableValues("true", "false")
+.addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CACHE_SIZE = new 
PropertyDescriptor.Builder()
+.name("cache-size")
+.displayName("Cache size")
+.description("Maximum size of the stylesheet cache.")
+.required(true)
+.defaultValue("100")
+
.addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
--- End diff --

Actually my new guess is that this is the maximum number of stylesheets 
stored (the description could be added to, lol)


> Cache compiled XSLT in TransformXml
> ---
>
> Key: NIFI-2142
> URL: https://issues.apache.org/jira/browse/NIFI-2142
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joey Frazee
> Fix For: 1.0.0
>
>
> TransformXml appears to be recompiling the XSLT on every onTrigger event, 
> which is slow. It should cache the compiled stylesheets.





[GitHub] nifi pull request #609: NIFI-2142 Cache compiled XSLT in TransformXml

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/609#discussion_r70736401
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TransformXml.java
 ---
@@ -76,25 +84,59 @@
 .name("XSLT file name")
 .description("Provides the name (including full path) of the 
XSLT file to apply to the flowfile XML content.")
 .required(true)
+.expressionLanguageSupported(true)
 .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor INDENT_OUTPUT = new 
PropertyDescriptor.Builder()
+.name("indent-output")
+.displayName("Indent")
+.description("Whether or not to indent the output.")
+.required(true)
+.defaultValue("true")
+.allowableValues("true", "false")
+.addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CACHE_SIZE = new 
PropertyDescriptor.Builder()
+.name("cache-size")
+.displayName("Cache size")
+.description("Maximum size of the stylesheet cache.")
+.required(true)
+.defaultValue("100")
+
.addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
--- End diff --

Actually my new guess is that this is the maximum number of stylesheets 
stored (the description could be added to, lol)




[jira] [Commented] (NIFI-2142) Cache compiled XSLT in TransformXml

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376136#comment-15376136
 ] 

ASF GitHub Bot commented on NIFI-2142:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/609#discussion_r70735844
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TransformXml.java
 ---
@@ -76,25 +84,59 @@
 .name("XSLT file name")
 .description("Provides the name (including full path) of the 
XSLT file to apply to the flowfile XML content.")
 .required(true)
+.expressionLanguageSupported(true)
 .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor INDENT_OUTPUT = new 
PropertyDescriptor.Builder()
+.name("indent-output")
+.displayName("Indent")
+.description("Whether or not to indent the output.")
+.required(true)
+.defaultValue("true")
+.allowableValues("true", "false")
+.addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CACHE_SIZE = new 
PropertyDescriptor.Builder()
+.name("cache-size")
+.displayName("Cache size")
+.description("Maximum size of the stylesheet cache.")
+.required(true)
+.defaultValue("100")
+
.addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
--- End diff --

Also, this is an integer validator but the property is interpreted as a long 
on line 171


> Cache compiled XSLT in TransformXml
> ---
>
> Key: NIFI-2142
> URL: https://issues.apache.org/jira/browse/NIFI-2142
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joey Frazee
> Fix For: 1.0.0
>
>
> TransformXml appears to be recompiling the XSLT on every onTrigger event, 
> which is slow. It should cache the compiled stylesheets.





[GitHub] nifi pull request #609: NIFI-2142 Cache compiled XSLT in TransformXml

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/609#discussion_r70735844
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TransformXml.java
 ---
@@ -76,25 +84,59 @@
 .name("XSLT file name")
 .description("Provides the name (including full path) of the 
XSLT file to apply to the flowfile XML content.")
 .required(true)
+.expressionLanguageSupported(true)
 .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor INDENT_OUTPUT = new 
PropertyDescriptor.Builder()
+.name("indent-output")
+.displayName("Indent")
+.description("Whether or not to indent the output.")
+.required(true)
+.defaultValue("true")
+.allowableValues("true", "false")
+.addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CACHE_SIZE = new 
PropertyDescriptor.Builder()
+.name("cache-size")
+.displayName("Cache size")
+.description("Maximum size of the stylesheet cache.")
+.required(true)
+.defaultValue("100")
+
.addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
--- End diff --

Also, this is an integer validator but the property is interpreted as a long 
on line 171




[GitHub] nifi pull request #609: NIFI-2142 Cache compiled XSLT in TransformXml

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/609#discussion_r70735673
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TransformXml.java
 ---
@@ -76,25 +84,59 @@
 .name("XSLT file name")
 .description("Provides the name (including full path) of the 
XSLT file to apply to the flowfile XML content.")
 .required(true)
+.expressionLanguageSupported(true)
 .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor INDENT_OUTPUT = new 
PropertyDescriptor.Builder()
+.name("indent-output")
+.displayName("Indent")
+.description("Whether or not to indent the output.")
+.required(true)
+.defaultValue("true")
+.allowableValues("true", "false")
+.addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CACHE_SIZE = new 
PropertyDescriptor.Builder()
+.name("cache-size")
+.displayName("Cache size")
+.description("Maximum size of the stylesheet cache.")
+.required(true)
+.defaultValue("100")
+
.addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
--- End diff --

Is this property a max character length or data size limit?




[jira] [Commented] (NIFI-2163) Nifi Service does not follow LSB Service Spec

2016-07-13 Thread Puspendu Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376121#comment-15376121
 ] 

Puspendu Banerjee commented on NIFI-2163:
-

Thanks.
I shall look into the following statement from LSB "start, stop, restart,
force-reload, and status actions shall be supported by all init scripts;
the reload and the try-restart actions are optional."

Will that satisfy your need?

Thanks & Regards,
Puspendu Banerjee
On Jul 13, 2016 7:30 PM, "Edgardo Vega (JIRA)"  wrote:


[
https://issues.apache.org/jira/browse/NIFI-2163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376064#comment-15376064
]

Edgardo Vega commented on NIFI-2163:


[~puspendu.baner...@gmail.com] Ansible and puppet for starters.

Here is the link to the service module for ansible:
https://github.com/ansible/ansible-modules-core/blob/devel/system/service.py#L582
Here the link to the service type for puppet:
https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type/service.rb#L135

they do not follow the spec for services, which causes some configuration
tools not to work as they use the return codes to determine if things are
running, dead, or stopped.
http://refspecs.linuxbase.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html





> Nifi Service does not follow LSB Service Spec
> 
>
> Key: NIFI-2163
> URL: https://issues.apache.org/jira/browse/NIFI-2163
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 0.7.0
> Environment: Centos
>Reporter: Edgardo Vega
>Priority: Critical
>
> Trying to use the latest off master with nifi.sh and nifi-env.sh: they do 
> not follow the spec for services, which causes some configuration tools not 
> to work, as they use the return codes to determine if things are running, 
> dead, or stopped.
> http://refspecs.linuxbase.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html





[jira] [Commented] (NIFI-2163) Nifi Service does not follow LSB Service Spec

2016-07-13 Thread Edgardo Vega (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376064#comment-15376064
 ] 

Edgardo Vega commented on NIFI-2163:


[~puspendu.baner...@gmail.com] Ansible and puppet for starters.

Here is the link to the service module for ansible: 
https://github.com/ansible/ansible-modules-core/blob/devel/system/service.py#L582
Here the link to the service type for puppet: 
https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type/service.rb#L135

> Nifi Service does not follow LSB Service Spec
> 
>
> Key: NIFI-2163
> URL: https://issues.apache.org/jira/browse/NIFI-2163
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 0.7.0
> Environment: Centos
>Reporter: Edgardo Vega
>Priority: Critical
>
> Trying to use the latest off master with nifi.sh and nifi-env.sh: they do 
> not follow the spec for services, which causes some configuration tools not 
> to work, as they use the return codes to determine if things are running, 
> dead, or stopped.
> http://refspecs.linuxbase.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html





[jira] [Commented] (NIFI-1638) Unable to TAR a file that is larger than 8GB

2016-07-13 Thread Puspendu Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376062#comment-15376062
 ] 

Puspendu Banerjee commented on NIFI-1638:
-

[~john5634] Please share more details, e.g. exact error log etc.

> Unable to TAR a file that is larger than 8GB
> 
>
> Key: NIFI-1638
> URL: https://issues.apache.org/jira/browse/NIFI-1638
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 0.4.0, 0.5.1
> Environment: RHEL 7, Dell 730xd server
>Reporter: John Teabo
>
> I get an error when trying to create a flow file tar v1 of a file that is 
> larger than 8GB. Is there a way to allow the tarring of files larger than 
> 8GB? Due to the nature of our project we need the attributes of split files 
> and their parent files to be kept. I have been tarring all files that come 
> into NiFi, splitting the packages, then tarring the splits for 
> reconstitution after being written to a directory on the server and 
> processed by another application. The next instance of NiFi then takes over, 
> unpacks the splits, and reconstitutes the original package file with its 
> attributes, since I packaged it in a flow file tar v1.
> Any help will be greatly appreciated!
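Background on the 8GB ceiling: the classic ustar tar header stores an entry's size in an 11-digit octal field, which caps a single entry at 8 GiB minus one byte; larger entries require a format extension such as POSIX.1-2001 pax headers. A quick check of the arithmetic:

```java
// The ustar "size" header field holds 11 octal digits, so the largest
// representable entry size is 0o77777777777 = 2^33 - 1 bytes (8 GiB - 1).
public class UstarLimit {

    public static void main(String[] args) {
        long maxOctal = Long.parseLong("77777777777", 8); // 11 octal sevens
        System.out.println(maxOctal);                     // 8589934591
        System.out.println(maxOctal == 8L * 1024 * 1024 * 1024 - 1); // true
    }
}
```

Whether NiFi's flow file tar v1 packaging can adopt such an extension is exactly what this ticket would need to determine.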





[jira] [Commented] (NIFI-1977) Property descriptors without explicit validators are not recognized

2016-07-13 Thread Puspendu Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376058#comment-15376058
 ] 

Puspendu Banerjee commented on NIFI-1977:
-

[~aperepel] Maybe you have already done it, but I am trying to re-confirm that 
you have included it in the getter for PropertyDescriptors.

> Property descriptors without explicit validators are not recognized
> ---
>
> Key: NIFI-1977
> URL: https://issues.apache.org/jira/browse/NIFI-1977
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.6.1
>Reporter: Andrew Grande
>
> Looks like a bug in a testing framework.
> I have a simple test case like this one:
> {code:java}
> @Test
> public void customHeader() {
>     final TestRunner runner = TestRunners.newTestRunner(ParseCSVRecord.class);
>     runner.setProperty(PROP_CUSTOM_HEADER, "Column 1,Column 2");
>     runner.enqueue("row1col1,row1col2\nrow2col1, row2col2");
>     runner.run();
> }
> {code}
> When a property is declared without an explicit validator, the test runner 
> fails complaining that Custom Header is not a supported property (yes, I've 
> added it to the descriptors list).
> {code:java}
> public static final PropertyDescriptor PROP_CUSTOM_HEADER = new PropertyDescriptor.Builder()
>     .name("Custom Header")
>     .description("Use this header (delimited according to set rules) instead of auto-discovering it from schema")
>     .required(false)
>     .expressionLanguageSupported(true)
>     .build();
> {code}
> The intent here is to not declare a validator, but rather to implement the 
> more complex logic in the customValidate() callback; the test, however, never 
> gets to that point.
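A stdlib-only sketch of the behavior the comment above hints at (all names here are illustrative, not NiFi's actual implementation): a test runner typically accepts only property names returned by the component's descriptor getter, so a descriptor that never makes it into that list is rejected regardless of validators:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class DescriptorLookup {
    // Stand-in for getSupportedPropertyDescriptors(): only names returned
    // here are accepted by setProperty(...) below.
    static Set<String> supportedProperties() {
        return Set.of("Custom Header");
    }

    static void setProperty(Map<String, String> props, String name, String value) {
        if (!supportedProperties().contains(name)) {
            throw new IllegalArgumentException(name + " is not a supported property");
        }
        props.put(name, value);
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        setProperty(props, "Custom Header", "Column 1,Column 2"); // accepted
        System.out.println(props);
        try {
            setProperty(props, "Unknown Property", "x"); // rejected, like the failing test
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```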





[jira] [Commented] (NIFI-2034) NiFi is always loading a specific version of httpcore to classpath

2016-07-13 Thread Puspendu Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376048#comment-15376048
 ] 

Puspendu Banerjee commented on NIFI-2034:
-

This looks pretty interesting. 
[~asanka] Can you please share your code base if possible? It looks like, 
somehow, it's not the correct classloader being used.
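One way to confirm which jar a class was actually loaded from (a generic JVM diagnostic, not a NiFi-specific API) is to inspect the class's ProtectionDomain code source:

```java
public class WhichJar {
    static String locationOf(Class<?> clazz) {
        java.security.CodeSource src = clazz.getProtectionDomain().getCodeSource();
        // Bootstrap-loaded classes have no code source; everything else
        // reports the jar or directory it was loaded from.
        return src == null ? "bootstrap classpath" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        System.out.println("java.lang.String loaded from: " + locationOf(String.class));
        System.out.println("WhichJar loaded from: " + locationOf(WhichJar.class));
    }
}
```

Logging `org.apache.http.impl.io.DefaultHttpRequestWriterFactory.class` through a helper like this from inside the processor would show whether a 4.4.1 jar on NiFi's own classpath is shadowing the NAR's bundled 4.4.4 jar.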

> NiFi is always loading a specific version of httpcore to classpath
> --
>
> Key: NIFI-2034
> URL: https://issues.apache.org/jira/browse/NIFI-2034
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 0.5.1, 0.6.1
>Reporter: asanka sanjaya
>
> We have written a custom NiFi processor to connect to a Microsoft Exchange 
> server and get emails. It runs perfectly as a standalone application, but 
> NiFi always loads httpcore-4.4.1.jar and httpclient-4.4.1.jar onto the 
> classpath even though the NAR file contains httpcore-4.4.4.jar and 
> httpclient-4.5.2.jar.
> Because of that, it throws this error.
> 2016-06-15 18:57:17,086 WARN [Timer-Driven Process Thread-4] o.a.n.c.t.ContinuallyRunProcessorTask Administratively Yielding NiFiJournalJob[id=52616329-d64c-4e14-bcb1-4c799891682a] due to uncaught Exception: java.lang.NoSuchFieldError: INSTANCE
> 2016-06-15 18:57:17,091 WARN [Timer-Driven Process Thread-4] o.a.n.c.t.ContinuallyRunProcessorTask
> java.lang.NoSuchFieldError: INSTANCE
>   at org.apache.http.impl.io.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:52) ~[httpcore-4.4.1.jar:4.4.1]
>   at org.apache.http.impl.io.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:56) ~[httpcore-4.4.1.jar:4.4.1]
>   at org.apache.http.impl.io.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:46) ~[httpcore-4.4.1.jar:4.4.1]
>   at org.apache.http.impl.conn.ManagedHttpClientConnectionFactory.<init>(ManagedHttpClientConnectionFactory.java:82) ~[httpclient-4.4.1.jar:4.4.1]
>   at org.apache.http.impl.conn.ManagedHttpClientConnectionFactory.<init>(ManagedHttpClientConnectionFactory.java:95) ~[httpclient-4.4.1.jar:4.4.1]
>   at org.apache.http.impl.conn.ManagedHttpClientConnectionFactory.<init>(ManagedHttpClientConnectionFactory.java:104) ~[httpclient-4.4.1.jar:4.4.1]
>   at org.apache.http.impl.conn.ManagedHttpClientConnectionFactory.<init>(ManagedHttpClientConnectionFactory.java:62) ~[httpclient-4.4.1.jar:4.4.1]
>   at org.apache.http.impl.conn.BasicHttpClientConnectionManager.<init>(BasicHttpClientConnectionManager.java:142) ~[httpclient-4.4.1.jar:4.4.1]
>   at org.apache.http.impl.conn.BasicHttpClientConnectionManager.<init>(BasicHttpClientConnectionManager.java:128) ~[httpclient-4.4.1.jar:4.4.1]
>   at org.apache.http.impl.conn.BasicHttpClientConnectionManager.<init>(BasicHttpClientConnectionManager.java:157) ~[httpclient-4.4.1.jar:4.4.1]
>   at microsoft.exchange.webservices.data.core.ExchangeServiceBase.initializeHttpClient(ExchangeServiceBase.java:199) ~[na:na]
>   at microsoft.exchange.webservices.data.core.ExchangeServiceBase.<init>(ExchangeServiceBase.java:174) ~[na:na]
>   at microsoft.exchange.webservices.data.core.ExchangeServiceBase.<init>(ExchangeServiceBase.java:179) ~[na:na]
>   at microsoft.exchange.webservices.data.core.ExchangeService.<init>(ExchangeService.java:3729) ~[na:na]





[jira] [Commented] (NIFI-2079) Cannot insert ISO dates correctly

2016-07-13 Thread Puspendu Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376040#comment-15376040
 ] 

Puspendu Banerjee commented on NIFI-2079:
-

[~asanka] Can you please check this with the latest version or mainline 1.0? 
If you can share a test case, that would be great.

> Cannot insert ISO dates correctly
> -
>
> Key: NIFI-2079
> URL: https://issues.apache.org/jira/browse/NIFI-2079
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 0.6.1
>Reporter: asanka sanjaya
>
> When trying to insert an ISO date into a Mongo collection, it is always 
> saved as a String instead of an ISODate object.





[jira] [Commented] (NIFI-2163) NiFi Service does not follow LSB Service Spec

2016-07-13 Thread Puspendu Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376037#comment-15376037
 ] 

Puspendu Banerjee commented on NIFI-2163:
-

[~evega] Please provide more detail on the tools that cannot detect the status 
of the service.

> NiFi Service does not follow LSB Service Spec
> 
>
> Key: NIFI-2163
> URL: https://issues.apache.org/jira/browse/NIFI-2163
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 0.7.0
> Environment: Centos
>Reporter: Edgardo Vega
>Priority: Critical
>
> Trying to use the latest off master with nifi.sh and nifi-env.sh, and they do 
> not follow the spec for services, which causes some configuration tools not 
> to work, as they use the return codes to determine if things are running, 
> dead, or stopped.
> http://refspecs.linuxbase.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
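For reference, the LSB spec linked above defines a small fixed set of exit codes for an init script's `status` action; a summary table (not the full spec, which also covers start/stop codes):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LsbStatusCodes {
    // Exit codes an LSB-conforming init script's "status" action must return
    static Map<Integer, String> statusCodes() {
        Map<Integer, String> codes = new LinkedHashMap<>();
        codes.put(0, "program is running or service is OK");
        codes.put(1, "program is dead and /var/run pid file exists");
        codes.put(2, "program is dead and /var/lock lock file exists");
        codes.put(3, "program is not running");
        codes.put(4, "program or service status is unknown");
        return codes;
    }

    public static void main(String[] args) {
        statusCodes().forEach((code, meaning) -> System.out.println(code + ": " + meaning));
    }
}
```

Configuration-management tools generally key off exactly these codes, so a `status` action that always exits 0 (or with some other convention) makes a service look permanently healthy.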





[jira] [Commented] (NIFI-619) update MonitorActivity processor to be cluster friendly

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376022#comment-15376022
 ] 

ASF GitHub Bot commented on NIFI-619:
-

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/575#discussion_r70728563
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/MonitorActivity.java
 ---
@@ -234,4 +361,10 @@ public void process(final OutputStream out) throws IOException {
             }
         }
     }
+
+    @OnPrimaryNodeStateChange
+    public void onPrimaryNodeChange(final PrimaryNodeState newState) {
+        isPrimaryNode = (newState == PrimaryNodeState.ELECTED_PRIMARY_NODE);
--- End diff --

This is a great catch, thank you!

@JPercivall, in a different PR I'm working on, #563 also needs similar 
functionality to know whether it's running on a standalone or a clustered NiFi, 
and if the latter, whether it's the primary node.

I'm going to try adding a new `@OnAddedToCluster` lifecycle annotation 
taking one argument, `isPrimaryNode`. Using it together with 
`@OnPrimaryNodeStateChange`, Processors, Reporting Tasks, and Controller 
Services will be able to control their behavior well in a clustered environment.

What do you think?
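The pattern in the diff above — cache the cluster role in a volatile flag updated by the lifecycle callback, then read it cheaply on every trigger — can be sketched without the NiFi API (the enum values mirror the diff; the surrounding class and harness are hypothetical):

```java
public class PrimaryNodeFlag {
    enum PrimaryNodeState { ELECTED_PRIMARY_NODE, PRIMARY_NODE_REVOKED }

    // volatile so onTrigger threads see updates made by the lifecycle callback thread
    private volatile boolean isPrimaryNode;

    // mirrors the @OnPrimaryNodeStateChange callback in the diff
    public void onPrimaryNodeChange(PrimaryNodeState newState) {
        isPrimaryNode = (newState == PrimaryNodeState.ELECTED_PRIMARY_NODE);
    }

    public boolean isPrimaryNode() {
        return isPrimaryNode;
    }

    public static void main(String[] args) {
        PrimaryNodeFlag flag = new PrimaryNodeFlag();
        flag.onPrimaryNodeChange(PrimaryNodeState.ELECTED_PRIMARY_NODE);
        System.out.println("after election: " + flag.isPrimaryNode());
        flag.onPrimaryNodeChange(PrimaryNodeState.PRIMARY_NODE_REVOKED);
        System.out.println("after revocation: " + flag.isPrimaryNode());
    }
}
```

The design choice being reviewed is exactly this: the flag is only as fresh as the last callback, so code that reports based on it should re-check the condition just before acting, as the review suggests.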


> update MonitorActivity processor to be cluster friendly
> ---
>
> Key: NIFI-619
> URL: https://issues.apache.org/jira/browse/NIFI-619
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Brandon DeVries
>Assignee: Koji Kawamura
>Priority: Minor
> Fix For: 1.0.0
>
>
> This processor should be able to be used to monitor activity across the 
> cluster.  In its current state, alerting is based on activity of a single 
> node, not the entire cluster.
> For example, in a 2 node cluster, if system A is getting data from a given 
> flow and system B is not, system B will alert for lack of activity even 
> though the flow is functioning "normally".
> The ideal behavior would be for an alert to be generated only if both 
> systems did not see data in the specified time.






[jira] [Commented] (NIFI-2228) FlowFileHandlingException should extend RuntimeException, not ProcessException

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376008#comment-15376008
 ] 

ASF GitHub Bot commented on NIFI-2228:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/630


> FlowFileHandlingException should extend RuntimeException, not ProcessException
> --
>
> Key: NIFI-2228
> URL: https://issues.apache.org/jira/browse/NIFI-2228
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.0.0
>
>
> Currently, FlowFileHandlingException extends ProcessException, but it is 
> clearly documented to indicate that this Exception will be thrown only when 
> there is a bug in the processor code. As a result, it should not extend 
> ProcessException because ProcessException is recommended to be handled by 
> processor code, and a processor should not handle an exception indicating 
> that the processor itself is buggy.





[jira] [Commented] (NIFI-2228) FlowFileHandlingException should extend RuntimeException, not ProcessException

2016-07-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376007#comment-15376007
 ] 

ASF subversion and git services commented on NIFI-2228:
---

Commit d403254b49ed492a24a55e72a8f6f0499312d3d5 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=d403254 ]

NIFI-2228: Change FlowFileHandlingException to extend from RuntimeException 
instead of ProcessException

This closes #630

Signed-off-by: jpercivall 


> FlowFileHandlingException should extend RuntimeException, not ProcessException
> --
>
> Key: NIFI-2228
> URL: https://issues.apache.org/jira/browse/NIFI-2228
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.0.0
>
>
> Currently, FlowFileHandlingException extends ProcessException, but it is 
> clearly documented to indicate that this Exception will be thrown only when 
> there is a bug in the processor code. As a result, it should not extend 
> ProcessException because ProcessException is recommended to be handled by 
> processor code, and a processor should not handle an exception indicating 
> that the processor itself is buggy.







[GitHub] nifi pull request #575: NIFI-619: Make MonitorActivity more cluster friendly

2016-07-13 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/575#discussion_r70727312
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/MonitorActivity.java
 ---
@@ -168,17 +216,49 @@ public void onTrigger(final ProcessContext context, final ProcessSession session
         final long now = System.currentTimeMillis();
 
         final ComponentLog logger = getLogger();
+        final String monitoringScope = context.getProperty(MONITORING_SCOPE).getValue();
+        final boolean copyAttributes = context.getProperty(COPY_ATTRIBUTES).asBoolean();
+        final boolean isClusterScope = SCOPE_CLUSTER.equals(monitoringScope);
         final List<FlowFile> flowFiles = session.get(50);
+
+        boolean isInactive = false;
+        long updatedLatestSuccessTransfer = -1;
+        StateMap clusterState = null;
+        final boolean shouldThisNodeReport = !isClusterScope
--- End diff --

Good call. I'll move this condition to a function and call it just before 
reporting attempts.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1152) StandardProcessSession and MockProcessSession should handle transfer to unregistered relationship correctly

2016-07-13 Thread Puspendu Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375982#comment-15375982
 ] 

Puspendu Banerjee commented on NIFI-1152:
-

[~markap14] I think you may close the ticket.

> StandardProcessSession and MockProcessSession should handle transfer to 
> unregistered relationship correctly
> ---
>
> Key: NIFI-1152
> URL: https://issues.apache.org/jira/browse/NIFI-1152
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1
>Reporter: Mark Payne
>Assignee: Joseph Witt
>  Labels: beginner, newbie
> Fix For: 1.0.0
>
> Attachments: 
> 0001-Fix-for-NIFI-1838-NIFI-1152-Code-modification-for-ty.patch
>
>
> If a processor calls ProcessSession.transfer(flowFile, 
> NON_EXISTENT_RELATIONSHIP) the NiFi framework will throw a 
> FlowFileHandlingException. However, the Mock Framework simply allows it and 
> does not throw any sort of Exception. This needs to be addressed so that the 
> Mock framework functions the same way as the normal NiFi framework.
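The framework behavior described above (which the mock should mirror) amounts to validating the relationship against the processor's declared set at transfer time; a stdlib-only sketch, with all names illustrative rather than NiFi's actual classes:

```java
import java.util.Set;

public class TransferCheck {
    // StandardProcessSession-style behavior: reject relationships the
    // processor never declared instead of silently accepting them
    static void transfer(Set<String> registeredRelationships, String relationship) {
        if (!registeredRelationships.contains(relationship)) {
            throw new IllegalArgumentException(
                "Relationship '" + relationship + "' is not known to this session");
        }
        // ... would actually route the FlowFile here
    }

    public static void main(String[] args) {
        Set<String> rels = Set.of("success", "failure");
        transfer(rels, "success"); // accepted
        try {
            transfer(rels, "non-existent"); // rejected, as the real framework does
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The fix described in the ticket is simply to give the mock session the same check, so a test fails the same way the real framework would.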





[jira] [Commented] (NIFI-1152) StandardProcessSession and MockProcessSession should handle transfer to unregistered relationship correctly

2016-07-13 Thread Puspendu Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375978#comment-15375978
 ] 

Puspendu Banerjee commented on NIFI-1152:
-

Sorry for the delay in replying. It's all good.

> StandardProcessSession and MockProcessSession should handle transfer to 
> unregistered relationship correctly
> ---
>
> Key: NIFI-1152
> URL: https://issues.apache.org/jira/browse/NIFI-1152
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1
>Reporter: Mark Payne
>Assignee: Joseph Witt
>  Labels: beginner, newbie
> Fix For: 1.0.0
>
> Attachments: 
> 0001-Fix-for-NIFI-1838-NIFI-1152-Code-modification-for-ty.patch
>
>
> If a processor calls ProcessSession.transfer(flowFile, 
> NON_EXISTENT_RELATIONSHIP) the NiFi framework will throw a 
> FlowFileHandlingException. However, the Mock Framework simply allows it and 
> does not throw any sort of Exception. This needs to be addressed so that the 
> Mock framework functions the same way as the normal NiFi framework.





[jira] [Commented] (NIFI-2228) FlowFileHandlingException should extend RuntimeException, not ProcessException

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375968#comment-15375968
 ] 

ASF GitHub Bot commented on NIFI-2228:
--

Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/630
  
+1

The change is in line with the documentation. It passed a contrib-check build. 
I checked that everything starts up properly by running a secure cluster, and 
double-checked that nothing was using FlowFileHandlingException for something 
other than its intended purpose.

Will merge it in.


> FlowFileHandlingException should extend RuntimeException, not ProcessException
> --
>
> Key: NIFI-2228
> URL: https://issues.apache.org/jira/browse/NIFI-2228
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.0.0
>
>
> Currently, FlowFileHandlingException extends ProcessException, but it is 
> clearly documented to indicate that this Exception will be thrown only when 
> there is a bug in the processor code. As a result, it should not extend 
> ProcessException because ProcessException is recommended to be handled by 
> processor code, and a processor should not handle an exception indicating 
> that the processor itself is buggy.





[jira] [Commented] (NIFI-1899) Create ListenSMTP & ExtractEmailAttachment processors

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375957#comment-15375957
 ] 

ASF GitHub Bot commented on NIFI-1899:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/483#discussion_r70723234
  
--- Diff: 
nifi-nar-bundles/nifi-email-bundle/nifi-email-processors/src/main/java/org/apache/nifi/processors/email/smtp/handler/SMTPMessageHandlerFactory.java
 ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.email.smtp.handler;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.security.cert.X509Certificate;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.stream.io.ByteArrayOutputStream;
+import org.apache.nifi.util.StopWatch;
+import org.subethamail.smtp.DropConnectionException;
+import org.subethamail.smtp.MessageContext;
+import org.subethamail.smtp.MessageHandler;
+import org.subethamail.smtp.MessageHandlerFactory;
+import org.subethamail.smtp.RejectException;
+import org.subethamail.smtp.TooMuchDataException;
+import org.subethamail.smtp.server.SMTPServer;
+
+import org.apache.nifi.processors.email.smtp.event.SmtpEvent;
+
+
+public class SMTPMessageHandlerFactory implements MessageHandlerFactory {
+    final LinkedBlockingQueue<SmtpEvent> incomingMessages;
+    final ComponentLog logger;
+
+    public SMTPMessageHandlerFactory(LinkedBlockingQueue<SmtpEvent> incomingMessages, ComponentLog logger) {
+        this.incomingMessages = incomingMessages;
+        this.logger = logger;
+    }
+
+    @Override
+    public MessageHandler create(MessageContext messageContext) {
+        return new Handler(messageContext, incomingMessages, logger);
+    }
+
+    class Handler implements MessageHandler {
+        final MessageContext messageContext;
+        String from;
+        String recipient;
+        byte[] messageBody;
+
+        public Handler(MessageContext messageContext, LinkedBlockingQueue<SmtpEvent> incomingMessages, ComponentLog logger) {
+            this.messageContext = messageContext;
+        }
+
+        @Override
+        public void from(String from) throws RejectException {
+            // TODO: possibly whitelist senders?
+            this.from = from;
+        }
+
+        @Override
+        public void recipient(String recipient) throws RejectException {
+            // TODO: possibly whitelist receivers?
+            this.recipient = recipient;
+        }
+
+        @Override
+        public void data(InputStream inputStream) throws RejectException, TooMuchDataException, IOException {
+            // Start counting the timer...
+            StopWatch watch = new StopWatch(false);
+
+            SMTPServer server = messageContext.getSMTPServer();
+            ByteArrayOutputStream baos = new ByteArrayOutputStream();
+            byte[] buffer = new byte[1024];
+            int rd;
+
+            while ((rd = inputStream.read(buffer, 0, buffer.length)) != -1) {
+                baos.write(buffer, 0, rd);
+            }
+            if (baos.getBufferLength() > server.getMaxMessageSize()) {
+                throw new TooMuchDataException("Data exceeds the amount allowed.");
+            }
+
+            baos.flush();
+            this.messageBody = baos.toByteArray();
+
+            X509Certificate[] certificates = new X509Certificate[]{};
+
+            String remoteIP = messageContext.getRemoteAddress().toString();
+            String helo = messageContext.getHelo();
+            if (messageContext.getTlsPeerCertificates() != null) {
+certificates = 

[GitHub] nifi pull request #483: NIFI-1899 - Introduce ExtractEmailAttachments and Ex...

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/483#discussion_r70723234
  
--- Diff: 
nifi-nar-bundles/nifi-email-bundle/nifi-email-processors/src/main/java/org/apache/nifi/processors/email/smtp/handler/SMTPMessageHandlerFactory.java
 ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.email.smtp.handler;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.security.cert.X509Certificate;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.stream.io.ByteArrayOutputStream;
+import org.apache.nifi.util.StopWatch;
+import org.subethamail.smtp.DropConnectionException;
+import org.subethamail.smtp.MessageContext;
+import org.subethamail.smtp.MessageHandler;
+import org.subethamail.smtp.MessageHandlerFactory;
+import org.subethamail.smtp.RejectException;
+import org.subethamail.smtp.TooMuchDataException;
+import org.subethamail.smtp.server.SMTPServer;
+
+import org.apache.nifi.processors.email.smtp.event.SmtpEvent;
+
+
+public class SMTPMessageHandlerFactory implements MessageHandlerFactory {
+final LinkedBlockingQueue incomingMessages;
+final ComponentLog logger;
+
+public SMTPMessageHandlerFactory(LinkedBlockingQueue 
incomingMessages, ComponentLog logger) {
+this.incomingMessages = incomingMessages;
+this.logger = logger;
+}
+
+@Override
+public MessageHandler create(MessageContext messageContext) {
+return new Handler(messageContext, incomingMessages, logger);
+}
+
+class Handler implements MessageHandler {
+final MessageContext messageContext;
+String from;
+String recipient;
+byte [] messageBody;
+
+
+public Handler(MessageContext messageContext, 
LinkedBlockingQueue incomingMessages, ComponentLog logger){
+this.messageContext = messageContext;
+}
+
+@Override
+public void from(String from) throws RejectException {
+// TODO: possibly whitelist senders?
+this.from = from;
+}
+
+@Override
+public void recipient(String recipient) throws RejectException {
+// TODO: possibly whitelist receivers?
+this.recipient = recipient;
+}
+
+@Override
+public void data(InputStream inputStream) throws RejectException, 
TooMuchDataException, IOException {
+// Start counting the timer...
+
+StopWatch watch = new StopWatch(false);
+
+SMTPServer server = messageContext.getSMTPServer();
+
+ByteArrayOutputStream baos = new ByteArrayOutputStream();
+
+byte [] buffer = new byte[1024];
+int rd;
+
+while ((rd = inputStream.read(buffer, 0, buffer.length)) != 
-1) {
+baos.write(buffer, 0, rd);
+}
+if (baos.getBufferLength() > server.getMaxMessageSize()) {
+throw new TooMuchDataException("Data exceeds the amount 
allowed.");
+}
+
+baos.flush();
+this.messageBody = baos.toByteArray();
+
+
+X509Certificate[] certificates = new X509Certificate[]{};
+
+String remoteIP = messageContext.getRemoteAddress().toString();
+String helo = messageContext.getHelo();
+if (messageContext.getTlsPeerCertificates() != null ){
+certificates = (X509Certificate[]) 
messageContext.getTlsPeerCertificates().clone();
+}
+
+SmtpEvent message = new SmtpEvent(remoteIP, helo, from, 
recipient, certificates, messageBody);
+try {
+

[jira] [Commented] (NIFI-1899) Create ListenSMTP & ExtractEmailAttachment processors

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375925#comment-15375925
 ] 

ASF GitHub Bot commented on NIFI-1899:
--

Github user trixpan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/483#discussion_r70721418
  
--- Diff: 
nifi-nar-bundles/nifi-email-bundle/nifi-email-processors/src/main/java/org/apache/nifi/processors/email/smtp/handler/SMTPMessageHandlerFactory.java
 ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.email.smtp.handler;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.security.cert.X509Certificate;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.stream.io.ByteArrayOutputStream;
+import org.apache.nifi.util.StopWatch;
+import org.subethamail.smtp.DropConnectionException;
+import org.subethamail.smtp.MessageContext;
+import org.subethamail.smtp.MessageHandler;
+import org.subethamail.smtp.MessageHandlerFactory;
+import org.subethamail.smtp.RejectException;
+import org.subethamail.smtp.TooMuchDataException;
+import org.subethamail.smtp.server.SMTPServer;
+
+import org.apache.nifi.processors.email.smtp.event.SmtpEvent;
+
+
+public class SMTPMessageHandlerFactory implements MessageHandlerFactory {
+final LinkedBlockingQueue incomingMessages;
+final ComponentLog logger;
+
+public SMTPMessageHandlerFactory(LinkedBlockingQueue 
incomingMessages, ComponentLog logger) {
+this.incomingMessages = incomingMessages;
+this.logger = logger;
+}
+
+@Override
+public MessageHandler create(MessageContext messageContext) {
+return new Handler(messageContext, incomingMessages, logger);
+}
+
+class Handler implements MessageHandler {
+final MessageContext messageContext;
+String from;
+String recipient;
+byte [] messageBody;
+
+
+public Handler(MessageContext messageContext, 
LinkedBlockingQueue incomingMessages, ComponentLog logger){
+this.messageContext = messageContext;
+}
+
+@Override
+public void from(String from) throws RejectException {
+// TODO: possibly whitelist senders?
+this.from = from;
+}
+
+@Override
+public void recipient(String recipient) throws RejectException {
+// TODO: possibly whitelist receivers?
+this.recipient = recipient;
+}
+
+@Override
+public void data(InputStream inputStream) throws RejectException, 
TooMuchDataException, IOException {
+// Start counting the timer...
+
+StopWatch watch = new StopWatch(false);
+
+SMTPServer server = messageContext.getSMTPServer();
+
+ByteArrayOutputStream baos = new ByteArrayOutputStream();
+
+byte [] buffer = new byte[1024];
+int rd;
+
+while ((rd = inputStream.read(buffer, 0, buffer.length)) != 
-1) {
+baos.write(buffer, 0, rd);
+}
+if (baos.getBufferLength() > server.getMaxMessageSize()) {
+throw new TooMuchDataException("Data exceeds the amount 
allowed.");
+}
+
+baos.flush();
+this.messageBody = baos.toByteArray();
+
+
+X509Certificate[] certificates = new X509Certificate[]{};
+
+String remoteIP = messageContext.getRemoteAddress().toString();
+String helo = messageContext.getHelo();
+if (messageContext.getTlsPeerCertificates() != null ){
+certificates = 

[GitHub] nifi pull request #483: NIFI-1899 - Introduce ExtractEmailAttachments and Ex...

2016-07-13 Thread trixpan
Github user trixpan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/483#discussion_r70721418
  
--- Diff: 
nifi-nar-bundles/nifi-email-bundle/nifi-email-processors/src/main/java/org/apache/nifi/processors/email/smtp/handler/SMTPMessageHandlerFactory.java
 ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.email.smtp.handler;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.security.cert.X509Certificate;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.stream.io.ByteArrayOutputStream;
+import org.apache.nifi.util.StopWatch;
+import org.subethamail.smtp.DropConnectionException;
+import org.subethamail.smtp.MessageContext;
+import org.subethamail.smtp.MessageHandler;
+import org.subethamail.smtp.MessageHandlerFactory;
+import org.subethamail.smtp.RejectException;
+import org.subethamail.smtp.TooMuchDataException;
+import org.subethamail.smtp.server.SMTPServer;
+
+import org.apache.nifi.processors.email.smtp.event.SmtpEvent;
+
+
+public class SMTPMessageHandlerFactory implements MessageHandlerFactory {
+final LinkedBlockingQueue<SmtpEvent> incomingMessages;
+final ComponentLog logger;
+
+public SMTPMessageHandlerFactory(LinkedBlockingQueue<SmtpEvent> incomingMessages, ComponentLog logger) {
+this.incomingMessages = incomingMessages;
+this.logger = logger;
+}
+
+@Override
+public MessageHandler create(MessageContext messageContext) {
+return new Handler(messageContext, incomingMessages, logger);
+}
+
+class Handler implements MessageHandler {
+final MessageContext messageContext;
+String from;
+String recipient;
+byte [] messageBody;
+
+
+public Handler(MessageContext messageContext, LinkedBlockingQueue<SmtpEvent> incomingMessages, ComponentLog logger){
+this.messageContext = messageContext;
+}
+
+@Override
+public void from(String from) throws RejectException {
+// TODO: possibly whitelist senders?
+this.from = from;
+}
+
+@Override
+public void recipient(String recipient) throws RejectException {
+// TODO: possibly whitelist receivers?
+this.recipient = recipient;
+}
+
+@Override
+public void data(InputStream inputStream) throws RejectException, TooMuchDataException, IOException {
+// Start counting the timer...
+
+StopWatch watch = new StopWatch(false);
+
+SMTPServer server = messageContext.getSMTPServer();
+
+ByteArrayOutputStream baos = new ByteArrayOutputStream();
+
+byte [] buffer = new byte[1024];
+int rd;
+
+while ((rd = inputStream.read(buffer, 0, buffer.length)) != -1) {
+baos.write(buffer, 0, rd);
+}
+if (baos.getBufferLength() > server.getMaxMessageSize()) {
+throw new TooMuchDataException("Data exceeds the amount allowed.");
+}
+
+baos.flush();
+this.messageBody = baos.toByteArray();
+
+
+X509Certificate[] certificates = new X509Certificate[]{};
+
+String remoteIP = messageContext.getRemoteAddress().toString();
+String helo = messageContext.getHelo();
+if (messageContext.getTlsPeerCertificates() != null ){
+certificates = (X509Certificate[]) messageContext.getTlsPeerCertificates().clone();
+}
+
+SmtpEvent message = new SmtpEvent(remoteIP, helo, from, recipient, certificates, messageBody);
+try {
+   
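
The data() handler above buffers the entire message into memory and only afterwards checks it against the server's maximum message size. A hedged sketch (class and method names here are hypothetical, not NiFi or SubEtha SMTP API) of copying with the limit enforced up front, so oversized messages fail fast instead of being fully buffered first:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical helper: copy at most maxBytes from the stream, failing as soon
// as the limit is exceeded rather than after buffering the whole message.
final class BoundedCopy {
    static byte[] readAtMost(InputStream in, int maxBytes) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int rd;
        while ((rd = in.read(buffer, 0, buffer.length)) != -1) {
            if (baos.size() + rd > maxBytes) {
                // analogous to throwing TooMuchDataException in the handler
                throw new IOException("Data exceeds the amount allowed.");
            }
            baos.write(buffer, 0, rd);
        }
        return baos.toByteArray();
    }
}
```

In the handler, the limit would come from messageContext.getSMTPServer().getMaxMessageSize(); the early check avoids holding an arbitrarily large message in memory before rejecting it.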

[jira] [Updated] (NIFI-1307) Un-deprecate FlowFile.getId()

2016-07-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-1307:
--
Fix Version/s: 1.0.0

> Un-deprecate FlowFile.getId()
> -
>
> Key: NIFI-1307
> URL: https://issues.apache.org/jira/browse/NIFI-1307
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Joseph Witt
> Fix For: 1.0.0
>
>
> In 0.4.0, we deprecated the getId() method of FlowFile because it was no 
> longer being used in any substantive way. However, we have found a bug in 
> some of the Prioritizers (addressed in NIFI-1279) that is being addressed by 
> using this method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1307) Un-deprecate FlowFile.getId()

2016-07-13 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375908#comment-15375908
 ] 

Joseph Witt commented on NIFI-1307:
---

Removing it does not appear to be worth the risk, and the method does still appear to have some value.  So I plan to go ahead and make getId() no longer deprecated rather than just letting it linger on into a major release.

> Un-deprecate FlowFile.getId()
> -
>
> Key: NIFI-1307
> URL: https://issues.apache.org/jira/browse/NIFI-1307
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Joseph Witt
>
> In 0.4.0, we deprecated the getId() method of FlowFile because it was no 
> longer being used in any substantive way. However, we have found a bug in 
> some of the Prioritizers (addressed in NIFI-1279) that is being addressed by 
> using this method.





[jira] [Commented] (NIFI-1307) Un-deprecate FlowFile.getId()

2016-07-13 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375898#comment-15375898
 ] 

Joseph Witt commented on NIFI-1307:
---

[~markap14] in implementing NIFI-1157 I started removing the FlowFile.getId() method, which had an amazing amount of ripple effects, but that was to be expected.  Along the way it became clear that having a unique identifier for a given 'instance of a flowfile' is important, at the very least, for the serialization and deserialization of WALI updates.

While it is true that we no longer need that identifier for the prioritizer comparisons as we did in the past, it is still important to have a unique identifier for properly aligning edits and updates for flow files in WALI, and possibly also as a last-ditch resort for comparing flow files in a default prioritizer.  There is an alternative to the sequence-counter identifier, which you pointed out earlier: first compare the lineageDate and, if that matches, then compare the lineageIndex.  So we could still avoid flowfile.getId for that case, leaving just the concern of serde in WALI and alignment to a flow file record/instance.
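
The fallback ordering described here can be sketched with a comparator chain (the Ff record below is a hypothetical stand-in for a FlowFile, not the NiFi API):

```java
import java.util.Comparator;

// Hypothetical stand-in for a FlowFile, to sketch the ordering described
// above: compare lineageDate first and, only on a tie, lineageIndex.
final class Ff {
    final long lineageDate;   // lineage start timestamp
    final long lineageIndex;  // position within the lineage
    Ff(long d, long i) { lineageDate = d; lineageIndex = i; }

    static final Comparator<Ff> ORDER =
        Comparator.comparingLong((Ff f) -> f.lineageDate)
                  .thenComparingLong(f -> f.lineageIndex);
}
```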

We could also switch to just using a UUID instead of the long, especially since we already have a UUID in the attribute map of a flow file.  This approach means a net reduction in what we have to serialize, because today we serialize both the flowfile identifier AND the uuid.  The UUID storage in the best case is 128 bits/16 bytes, whereas the long is 8 bytes.  I'll explore this one a bit longer, but what do you think?
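
The arithmetic behind that trade-off is easy to check directly: a long serializes to 8 bytes and a UUID to 16 (two longs), so writing only the UUID is still smaller than writing both. A minimal sketch (hypothetical helper class, not NiFi's serializer):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.UUID;

// Hypothetical size check for the serialization trade-off discussed above.
final class IdSize {
    static int longBytes(long id) throws IOException {
        ByteArrayOutputStream b = new ByteArrayOutputStream();
        try (DataOutputStream d = new DataOutputStream(b)) {
            d.writeLong(id); // a long id serializes to 8 bytes
        }
        return b.size();
    }

    static int uuidBytes(UUID u) throws IOException {
        ByteArrayOutputStream b = new ByteArrayOutputStream();
        try (DataOutputStream d = new DataOutputStream(b)) {
            d.writeLong(u.getMostSignificantBits());  // 8 bytes
            d.writeLong(u.getLeastSignificantBits()); // 8 bytes
        }
        return b.size();
    }
}
```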

> Un-deprecate FlowFile.getId()
> -
>
> Key: NIFI-1307
> URL: https://issues.apache.org/jira/browse/NIFI-1307
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Joseph Witt
>
> In 0.4.0, we deprecated the getId() method of FlowFile because it was no 
> longer being used in any substantive way. However, we have found a bug in 
> some of the Prioritizers (addressed in NIFI-1279) that is being addressed by 
> using this method.





[jira] [Commented] (NIFI-2142) Cache compiled XSLT in TransformXml

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375889#comment-15375889
 ] 

ASF GitHub Bot commented on NIFI-2142:
--

Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/609
  
@jfrazee the image bundle is no longer in master (replaced by the media 
bundle). Do a "git clean -i -d" to remove any old bundles.


> Cache compiled XSLT in TransformXml
> ---
>
> Key: NIFI-2142
> URL: https://issues.apache.org/jira/browse/NIFI-2142
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joey Frazee
> Fix For: 1.0.0
>
>
> TransformXml appears to be recompiling the XSLT on every onTrigger event, 
> which is slow. It should cache the compiled stylesheets.
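
With JAXP, the natural unit to cache is a compiled javax.xml.transform.Templates, which is thread-safe (a Transformer is not, so one is created per use). A minimal sketch, assuming a hypothetical StylesheetCache keyed by stylesheet content — not the actual TransformXml implementation:

```java
import java.io.StringReader;
import java.util.concurrent.ConcurrentHashMap;
import javax.xml.transform.Templates;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;

// Hypothetical sketch: compile each stylesheet once into a thread-safe
// Templates object and reuse it across onTrigger invocations.
final class StylesheetCache {
    private final ConcurrentHashMap<String, Templates> cache = new ConcurrentHashMap<>();

    Templates get(String stylesheet) {
        return cache.computeIfAbsent(stylesheet, s -> {
            try {
                return TransformerFactory.newInstance()
                        .newTemplates(new StreamSource(new StringReader(s)));
            } catch (TransformerConfigurationException e) {
                throw new IllegalStateException("invalid XSLT", e);
            }
        });
    }
}
```

Callers then obtain a fresh, non-shared Transformer per flow file with templates.newTransformer(), paying the compilation cost only once per distinct stylesheet.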





[jira] [Commented] (NIFI-1413) NiFi nodes should not fail to start just because its templates are not in-sync with the Cluster.

2016-07-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375890#comment-15375890
 ] 

ASF subversion and git services commented on NIFI-1413:
---

Commit 68242d404685d250c166ee0213942037fe8a260d in nifi's branch 
refs/heads/master from [~mattyb149]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=68242d4 ]

Revert "NIFI-1413: Ensure that if a node's templates don't match the clusters 
that we take the following actions: -Local templates remain but aren't shown in 
the cluster's templates. -Any templates from the cluster that don't exist on 
the node are added to the node. -Any conflicting template definitions are 
replaced by those in the cluster"

This reverts commit 6f6e1b32d98af87c335772fc00089a63b23e7bdf.


> NiFi nodes should not fail to start just because its templates are not 
> in-sync with the Cluster.
> 
>
> Key: NIFI-1413
> URL: https://issues.apache.org/jira/browse/NIFI-1413
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Priority: Blocker
> Fix For: 1.0.0
>
>
> If a node tries to join a cluster, the node will fail to join and a 
> notification will indicate that the flow on the node is different than the 
> cluster's flow, just because the templates are different.
> Rather, we should rename the old template to ".standalone" or 
> something of that nature and then write the template to a new file if it 
> conflicts. If the template is simply missing, it should just be downloaded.
> We should make sure that NCM lists only those templates that are available to 
> it.
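
The reconciliation rules described above (cluster wins on conflict, the node's conflicting copy is preserved under a ".standalone" name, cluster-only templates are added) can be sketched as a map merge. This is a hypothetical illustration of the described policy, not NiFi's template-management code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the template reconciliation described above,
// with templates modeled as name -> definition strings.
final class TemplateSync {
    static Map<String, String> reconcile(Map<String, String> node, Map<String, String> cluster) {
        Map<String, String> result = new HashMap<>(node);
        for (Map.Entry<String, String> e : cluster.entrySet()) {
            String name = e.getKey();
            if (result.containsKey(name) && !result.get(name).equals(e.getValue())) {
                // keep the node's conflicting copy under a renamed key
                result.put(name + ".standalone", result.get(name));
            }
            result.put(name, e.getValue()); // cluster definition wins / is added
        }
        return result;
    }
}
```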





[GitHub] nifi issue #609: NIFI-2142 Cache compiled XSLT in TransformXml

2016-07-13 Thread JPercivall
Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/609
  
@jfrazee the image bundle is no longer in master (replaced by the media 
bundle). Do a "git clean -i -d" to remove any old bundles.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1942) Create a processor to validate CSV against a user-supplied schema

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375884#comment-15375884
 ] 

ASF GitHub Bot commented on NIFI-1942:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70719085
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java
 ---
@@ -0,0 +1,408 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.supercsv.cellprocessor.Optional;
+import org.supercsv.cellprocessor.ParseBigDecimal;
+import org.supercsv.cellprocessor.ParseBool;
+import org.supercsv.cellprocessor.ParseChar;
+import org.supercsv.cellprocessor.ParseDate;
+import org.supercsv.cellprocessor.ParseDouble;
+import org.supercsv.cellprocessor.ParseInt;
+import org.supercsv.cellprocessor.ParseLong;
+import org.supercsv.cellprocessor.constraint.DMinMax;
+import org.supercsv.cellprocessor.constraint.Equals;
+import org.supercsv.cellprocessor.constraint.ForbidSubStr;
+import org.supercsv.cellprocessor.constraint.LMinMax;
+import org.supercsv.cellprocessor.constraint.NotNull;
+import org.supercsv.cellprocessor.constraint.RequireHashCode;
+import org.supercsv.cellprocessor.constraint.RequireSubStr;
+import org.supercsv.cellprocessor.constraint.StrMinMax;
+import org.supercsv.cellprocessor.constraint.StrNotNullOrEmpty;
+import org.supercsv.cellprocessor.constraint.StrRegEx;
+import org.supercsv.cellprocessor.constraint.Strlen;
+import org.supercsv.cellprocessor.constraint.Unique;
+import org.supercsv.cellprocessor.constraint.UniqueHashCode;
+import org.supercsv.cellprocessor.ift.CellProcessor;
+import org.supercsv.exception.SuperCsvCellProcessorException;
+import org.supercsv.io.CsvListReader;
+import org.supercsv.prefs.CsvPreference;
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"csv", "schema", "validation"})
+@CapabilityDescription("Validates the contents of FlowFiles against a user-specified CSV schema")
+public class ValidateCsv extends AbstractProcessor {
+
+private final static List<String> allowedOperators = Arrays.asList("ParseBigDecimal", "ParseBool", "ParseChar", "ParseDate",
+"ParseDouble", 

[GitHub] nifi pull request #476: NIFI-1942 Processor to validate CSV against user-sup...

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70719085
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java
 ---
@@ -0,0 +1,408 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.supercsv.cellprocessor.Optional;
+import org.supercsv.cellprocessor.ParseBigDecimal;
+import org.supercsv.cellprocessor.ParseBool;
+import org.supercsv.cellprocessor.ParseChar;
+import org.supercsv.cellprocessor.ParseDate;
+import org.supercsv.cellprocessor.ParseDouble;
+import org.supercsv.cellprocessor.ParseInt;
+import org.supercsv.cellprocessor.ParseLong;
+import org.supercsv.cellprocessor.constraint.DMinMax;
+import org.supercsv.cellprocessor.constraint.Equals;
+import org.supercsv.cellprocessor.constraint.ForbidSubStr;
+import org.supercsv.cellprocessor.constraint.LMinMax;
+import org.supercsv.cellprocessor.constraint.NotNull;
+import org.supercsv.cellprocessor.constraint.RequireHashCode;
+import org.supercsv.cellprocessor.constraint.RequireSubStr;
+import org.supercsv.cellprocessor.constraint.StrMinMax;
+import org.supercsv.cellprocessor.constraint.StrNotNullOrEmpty;
+import org.supercsv.cellprocessor.constraint.StrRegEx;
+import org.supercsv.cellprocessor.constraint.Strlen;
+import org.supercsv.cellprocessor.constraint.Unique;
+import org.supercsv.cellprocessor.constraint.UniqueHashCode;
+import org.supercsv.cellprocessor.ift.CellProcessor;
+import org.supercsv.exception.SuperCsvCellProcessorException;
+import org.supercsv.io.CsvListReader;
+import org.supercsv.prefs.CsvPreference;
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"csv", "schema", "validation"})
+@CapabilityDescription("Validates the contents of FlowFiles against a user-specified CSV schema")
+public class ValidateCsv extends AbstractProcessor {
+
+private final static List<String> allowedOperators = Arrays.asList("ParseBigDecimal", "ParseBool", "ParseChar", "ParseDate",
+"ParseDouble", "ParseInt", "ParseLong", "Optional", "DMinMax", "Equals", "ForbidSubStr", "LMinMax", "NotNull", "Null",
+"RequireHashCode", "RequireSubStr", "Strlen", "StrMinMax", "StrNotNullOrEmpty", "StrRegEx", "Unique",
+
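
The processor builds a chain of SuperCSV cell processors from the user-supplied operator names, one per column, and applies them while reading each row. Independent of the SuperCSV API, the per-cell validation shape can be sketched with plain functions (RowValidator and its method are hypothetical names for illustration):

```java
import java.util.List;
import java.util.function.Function;

// Hypothetical stand-in for a SuperCSV-style cell-processor chain: each
// validator either returns the parsed cell value or throws on invalid input.
final class RowValidator {
    static Object[] validate(String[] cells, List<Function<String, Object>> processors) {
        if (cells.length != processors.size()) {
            throw new IllegalArgumentException("column count mismatch");
        }
        Object[] parsed = new Object[cells.length];
        for (int i = 0; i < cells.length; i++) {
            parsed[i] = processors.get(i).apply(cells[i]); // throws if invalid
        }
        return parsed;
    }
}
```

In the real processor the equivalent failure is a SuperCsvCellProcessorException from CsvListReader.read(CellProcessor...), which routes the flow file to the invalid relationship.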

[jira] [Commented] (NIFI-1942) Create a processor to validate CSV against a user-supplied schema

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375880#comment-15375880
 ] 

ASF GitHub Bot commented on NIFI-1942:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70718795
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java
 ---
@@ -0,0 +1,408 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.supercsv.cellprocessor.Optional;
+import org.supercsv.cellprocessor.ParseBigDecimal;
+import org.supercsv.cellprocessor.ParseBool;
+import org.supercsv.cellprocessor.ParseChar;
+import org.supercsv.cellprocessor.ParseDate;
+import org.supercsv.cellprocessor.ParseDouble;
+import org.supercsv.cellprocessor.ParseInt;
+import org.supercsv.cellprocessor.ParseLong;
+import org.supercsv.cellprocessor.constraint.DMinMax;
+import org.supercsv.cellprocessor.constraint.Equals;
+import org.supercsv.cellprocessor.constraint.ForbidSubStr;
+import org.supercsv.cellprocessor.constraint.LMinMax;
+import org.supercsv.cellprocessor.constraint.NotNull;
+import org.supercsv.cellprocessor.constraint.RequireHashCode;
+import org.supercsv.cellprocessor.constraint.RequireSubStr;
+import org.supercsv.cellprocessor.constraint.StrMinMax;
+import org.supercsv.cellprocessor.constraint.StrNotNullOrEmpty;
+import org.supercsv.cellprocessor.constraint.StrRegEx;
+import org.supercsv.cellprocessor.constraint.Strlen;
+import org.supercsv.cellprocessor.constraint.Unique;
+import org.supercsv.cellprocessor.constraint.UniqueHashCode;
+import org.supercsv.cellprocessor.ift.CellProcessor;
+import org.supercsv.exception.SuperCsvCellProcessorException;
+import org.supercsv.io.CsvListReader;
+import org.supercsv.prefs.CsvPreference;
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"csv", "schema", "validation"})
+@CapabilityDescription("Validates the contents of FlowFiles against a user-specified CSV schema")
+public class ValidateCsv extends AbstractProcessor {
+
+private final static List<String> allowedOperators = Arrays.asList("ParseBigDecimal", "ParseBool", "ParseChar", "ParseDate",
+"ParseDouble", 

[jira] [Reopened] (NIFI-1307) Un-deprecate FlowFile.getId()

2016-07-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt reopened NIFI-1307:
---
  Assignee: Joseph Witt

> Un-deprecate FlowFile.getId()
> -
>
> Key: NIFI-1307
> URL: https://issues.apache.org/jira/browse/NIFI-1307
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Joseph Witt
>
> In 0.4.0, we deprecated the getId() method of FlowFile because it was no 
> longer being used in any substantive way. However, we have found a bug in 
> some of the Prioritizers (addressed in NIFI-1279) that is being addressed by 
> using this method.





[GitHub] nifi pull request #476: NIFI-1942 Processor to validate CSV against user-sup...

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70718276
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java
 ---
@@ -0,0 +1,408 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.supercsv.cellprocessor.Optional;
+import org.supercsv.cellprocessor.ParseBigDecimal;
+import org.supercsv.cellprocessor.ParseBool;
+import org.supercsv.cellprocessor.ParseChar;
+import org.supercsv.cellprocessor.ParseDate;
+import org.supercsv.cellprocessor.ParseDouble;
+import org.supercsv.cellprocessor.ParseInt;
+import org.supercsv.cellprocessor.ParseLong;
+import org.supercsv.cellprocessor.constraint.DMinMax;
+import org.supercsv.cellprocessor.constraint.Equals;
+import org.supercsv.cellprocessor.constraint.ForbidSubStr;
+import org.supercsv.cellprocessor.constraint.LMinMax;
+import org.supercsv.cellprocessor.constraint.NotNull;
+import org.supercsv.cellprocessor.constraint.RequireHashCode;
+import org.supercsv.cellprocessor.constraint.RequireSubStr;
+import org.supercsv.cellprocessor.constraint.StrMinMax;
+import org.supercsv.cellprocessor.constraint.StrNotNullOrEmpty;
+import org.supercsv.cellprocessor.constraint.StrRegEx;
+import org.supercsv.cellprocessor.constraint.Strlen;
+import org.supercsv.cellprocessor.constraint.Unique;
+import org.supercsv.cellprocessor.constraint.UniqueHashCode;
+import org.supercsv.cellprocessor.ift.CellProcessor;
+import org.supercsv.exception.SuperCsvCellProcessorException;
+import org.supercsv.io.CsvListReader;
+import org.supercsv.prefs.CsvPreference;
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"csv", "schema", "validation"})
+@CapabilityDescription("Validates the contents of FlowFiles against a user-specified CSV schema")
+public class ValidateCsv extends AbstractProcessor {
+
+private final static List<String> allowedOperators = Arrays.asList("ParseBigDecimal", "ParseBool", "ParseChar", "ParseDate",
+"ParseDouble", "ParseInt", "ParseLong", "Optional", "DMinMax", "Equals", "ForbidSubStr", "LMinMax", "NotNull", "Null",
+"RequireHashCode", "RequireSubStr", "Strlen", "StrMinMax", "StrNotNullOrEmpty", "StrRegEx", "Unique",
+

[jira] [Commented] (NIFI-1942) Create a processor to validate CSV against a user-supplied schema

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375873#comment-15375873
 ] 

ASF GitHub Bot commented on NIFI-1942:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70718276
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java
 ---
@@ -0,0 +1,408 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.supercsv.cellprocessor.Optional;
+import org.supercsv.cellprocessor.ParseBigDecimal;
+import org.supercsv.cellprocessor.ParseBool;
+import org.supercsv.cellprocessor.ParseChar;
+import org.supercsv.cellprocessor.ParseDate;
+import org.supercsv.cellprocessor.ParseDouble;
+import org.supercsv.cellprocessor.ParseInt;
+import org.supercsv.cellprocessor.ParseLong;
+import org.supercsv.cellprocessor.constraint.DMinMax;
+import org.supercsv.cellprocessor.constraint.Equals;
+import org.supercsv.cellprocessor.constraint.ForbidSubStr;
+import org.supercsv.cellprocessor.constraint.LMinMax;
+import org.supercsv.cellprocessor.constraint.NotNull;
+import org.supercsv.cellprocessor.constraint.RequireHashCode;
+import org.supercsv.cellprocessor.constraint.RequireSubStr;
+import org.supercsv.cellprocessor.constraint.StrMinMax;
+import org.supercsv.cellprocessor.constraint.StrNotNullOrEmpty;
+import org.supercsv.cellprocessor.constraint.StrRegEx;
+import org.supercsv.cellprocessor.constraint.Strlen;
+import org.supercsv.cellprocessor.constraint.Unique;
+import org.supercsv.cellprocessor.constraint.UniqueHashCode;
+import org.supercsv.cellprocessor.ift.CellProcessor;
+import org.supercsv.exception.SuperCsvCellProcessorException;
+import org.supercsv.io.CsvListReader;
+import org.supercsv.prefs.CsvPreference;
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"csv", "schema", "validation"})
+@CapabilityDescription("Validates the contents of FlowFiles against a user-specified CSV schema")
+public class ValidateCsv extends AbstractProcessor {
+
+    private final static List<String> allowedOperators = Arrays.asList("ParseBigDecimal", "ParseBool", "ParseChar", "ParseDate",
+            "ParseDouble", 

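The SuperCSV imports quoted in the diff above hint at the validation model: cell processors are composable, each validating or converting one cell and optionally delegating to a wrapped "next" processor (e.g. `new NotNull(new ParseInt())`). A minimal dependency-free sketch of that chaining idea, using a hypothetical `CellCheck` interface in place of the real SuperCSV `CellProcessor` API:

```java
import java.util.Arrays;
import java.util.List;

public class MiniCellChain {

    // Hypothetical stand-in for SuperCSV's CellProcessor: validate/convert one cell
    interface CellCheck {
        Object apply(String cell);
    }

    // Mirrors NotNull(next): reject null/empty cells, then delegate to the wrapped check
    static CellCheck notNull(CellCheck next) {
        return cell -> {
            if (cell == null || cell.isEmpty()) {
                throw new IllegalArgumentException("cell is null or empty");
            }
            return next.apply(cell);
        };
    }

    // Mirrors ParseInt: convert the cell text to an Integer (throws on non-numeric input)
    static CellCheck parseInt() {
        return Integer::valueOf;
    }

    // Apply one check per column to a comma-separated row, returning converted values
    static List<Object> validateRow(String row, CellCheck... checks) {
        String[] cells = row.split(",");
        Object[] out = new Object[cells.length];
        for (int i = 0; i < cells.length; i++) {
            out[i] = checks[i].apply(cells[i]);
        }
        return Arrays.asList(out);
    }

    public static void main(String[] args) {
        // name passes through unchanged, age must be a non-empty integer
        System.out.println(validateRow("Alice,34", c -> c, notNull(parseInt())));
    }
}
```

In the real processor a failing cell raises `SuperCsvCellProcessorException` (imported above), which the diff suggests is how invalid rows are detected and routed.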
[GitHub] nifi pull request #476: NIFI-1942 Processor to validate CSV against user-sup...

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70718112
  

[GitHub] nifi pull request #476: NIFI-1942 Processor to validate CSV against user-sup...

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70716978
  

[jira] [Commented] (NIFI-1942) Create a processor to validate CSV against a user-supplied schema

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375857#comment-15375857
 ] 

ASF GitHub Bot commented on NIFI-1942:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70716978
  

[jira] [Commented] (NIFI-1942) Create a processor to validate CSV against a user-supplied schema

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375849#comment-15375849
 ] 

ASF GitHub Bot commented on NIFI-1942:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70716478
  

[GitHub] nifi pull request #476: NIFI-1942 Processor to validate CSV against user-sup...

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70716478
  

[jira] [Commented] (NIFI-1942) Create a processor to validate CSV against a user-supplied schema

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375832#comment-15375832
 ] 

ASF GitHub Bot commented on NIFI-1942:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70715716
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java ---
+@CapabilityDescription("Validates the contents of FlowFiles against a user-specified CSV schema")
--- End diff --

Ah duh, I should have just read the additional docs portion. A comment here 
reminding users to checkout the additional details would suffice


> Create a processor to validate CSV against a 

[GitHub] nifi pull request #476: NIFI-1942 Processor to validate CSV against user-sup...

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70715716
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java
 ---
@@ -0,0 +1,408 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.supercsv.cellprocessor.Optional;
+import org.supercsv.cellprocessor.ParseBigDecimal;
+import org.supercsv.cellprocessor.ParseBool;
+import org.supercsv.cellprocessor.ParseChar;
+import org.supercsv.cellprocessor.ParseDate;
+import org.supercsv.cellprocessor.ParseDouble;
+import org.supercsv.cellprocessor.ParseInt;
+import org.supercsv.cellprocessor.ParseLong;
+import org.supercsv.cellprocessor.constraint.DMinMax;
+import org.supercsv.cellprocessor.constraint.Equals;
+import org.supercsv.cellprocessor.constraint.ForbidSubStr;
+import org.supercsv.cellprocessor.constraint.LMinMax;
+import org.supercsv.cellprocessor.constraint.NotNull;
+import org.supercsv.cellprocessor.constraint.RequireHashCode;
+import org.supercsv.cellprocessor.constraint.RequireSubStr;
+import org.supercsv.cellprocessor.constraint.StrMinMax;
+import org.supercsv.cellprocessor.constraint.StrNotNullOrEmpty;
+import org.supercsv.cellprocessor.constraint.StrRegEx;
+import org.supercsv.cellprocessor.constraint.Strlen;
+import org.supercsv.cellprocessor.constraint.Unique;
+import org.supercsv.cellprocessor.constraint.UniqueHashCode;
+import org.supercsv.cellprocessor.ift.CellProcessor;
+import org.supercsv.exception.SuperCsvCellProcessorException;
+import org.supercsv.io.CsvListReader;
+import org.supercsv.prefs.CsvPreference;
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"csv", "schema", "validation"})
+@CapabilityDescription("Validates the contents of FlowFiles against a 
user-specified CSV schema")
--- End diff --

Ah duh, I should have just read the additional docs portion. A comment here 
reminding users to check out the additional details would suffice.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file 

[jira] [Commented] (NIFI-1942) Create a processor to validate CSV against a user-supplied schema

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375823#comment-15375823
 ] 

ASF GitHub Bot commented on NIFI-1942:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/476#discussion_r70715212
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java
 ---
@@ -0,0 +1,408 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.supercsv.cellprocessor.Optional;
+import org.supercsv.cellprocessor.ParseBigDecimal;
+import org.supercsv.cellprocessor.ParseBool;
+import org.supercsv.cellprocessor.ParseChar;
+import org.supercsv.cellprocessor.ParseDate;
+import org.supercsv.cellprocessor.ParseDouble;
+import org.supercsv.cellprocessor.ParseInt;
+import org.supercsv.cellprocessor.ParseLong;
+import org.supercsv.cellprocessor.constraint.DMinMax;
+import org.supercsv.cellprocessor.constraint.Equals;
+import org.supercsv.cellprocessor.constraint.ForbidSubStr;
+import org.supercsv.cellprocessor.constraint.LMinMax;
+import org.supercsv.cellprocessor.constraint.NotNull;
+import org.supercsv.cellprocessor.constraint.RequireHashCode;
+import org.supercsv.cellprocessor.constraint.RequireSubStr;
+import org.supercsv.cellprocessor.constraint.StrMinMax;
+import org.supercsv.cellprocessor.constraint.StrNotNullOrEmpty;
+import org.supercsv.cellprocessor.constraint.StrRegEx;
+import org.supercsv.cellprocessor.constraint.Strlen;
+import org.supercsv.cellprocessor.constraint.Unique;
+import org.supercsv.cellprocessor.constraint.UniqueHashCode;
+import org.supercsv.cellprocessor.ift.CellProcessor;
+import org.supercsv.exception.SuperCsvCellProcessorException;
+import org.supercsv.io.CsvListReader;
+import org.supercsv.prefs.CsvPreference;
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"csv", "schema", "validation"})
+@CapabilityDescription("Validates the contents of FlowFiles against a 
user-specified CSV schema")
--- End diff --

Is there some sort of example CSV schema or other documentation we can 
point to here? I know nothing about CSV validation and if I was given the task 
to add/configure this processor I would be completely lost.

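Judging from the imports in the diff above, the user-specified schema is built from SuperCSV cell processors: one processor per column, where each processor either returns the parsed cell value or rejects it. A minimal JDK-only sketch of that idea (the names `notNull`, `parseInt`, and `isValid` are illustrative stand-ins, not the processor's actual API):

```java
import java.util.List;
import java.util.function.Function;

// Stdlib-only sketch of the cell-processor concept behind SuperCSV:
// a "schema" is one processor per column; each processor either returns
// the parsed value or throws to reject the cell.
public class CsvSchemaSketch {
    // Hypothetical processors mirroring SuperCSV's NotNull and ParseInt.
    static Function<String, Object> notNull() {
        return s -> {
            if (s == null || s.isEmpty()) throw new IllegalArgumentException("empty cell");
            return s;
        };
    }

    static Function<String, Object> parseInt() {
        return s -> Integer.parseInt(s); // throws NumberFormatException on bad input
    }

    // Validate one CSV line against a schema of one processor per column.
    static boolean isValid(String line, List<Function<String, Object>> schema) {
        String[] cells = line.split(",", -1);
        if (cells.length != schema.size()) return false;
        try {
            for (int i = 0; i < cells.length; i++) {
                schema.get(i).apply(cells[i]);
            }
            return true;
        } catch (RuntimeException e) {
            return false; // any processor rejecting a cell fails the row
        }
    }

    public static void main(String[] args) {
        List<Function<String, Object>> schema = List.of(notNull(), parseInt());
        System.out.println(isValid("john,42", schema));  // prints: true
        System.out.println(isValid("john,abc", schema)); // prints: false
    }
}
```

With SuperCSV itself, the equivalent schema would be a `CellProcessor[]` such as `{ new NotNull(), new ParseInt() }` passed to `CsvListReader.read(...)`.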
[GitHub] nifi pull request #483: NIFI-1899 - Introduce ExtractEmailAttachments and Ex...

2016-07-13 Thread trixpan
Github user trixpan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/483#discussion_r70714800
  
--- Diff: 
nifi-nar-bundles/nifi-email-bundle/nifi-email-processors/src/main/java/org/apache/nifi/processors/email/smtp/handler/SMTPMessageHandlerFactory.java
 ---
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.email.smtp.handler;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.security.cert.X509Certificate;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.stream.io.ByteArrayOutputStream;
+import org.apache.nifi.util.StopWatch;
+import org.subethamail.smtp.DropConnectionException;
+import org.subethamail.smtp.MessageContext;
+import org.subethamail.smtp.MessageHandler;
+import org.subethamail.smtp.MessageHandlerFactory;
+import org.subethamail.smtp.RejectException;
+import org.subethamail.smtp.TooMuchDataException;
+import org.subethamail.smtp.server.SMTPServer;
+
+import org.apache.nifi.processors.email.smtp.event.SmtpEvent;
+
+
+public class SMTPMessageHandlerFactory implements MessageHandlerFactory {
+final LinkedBlockingQueue<SmtpEvent> incomingMessages;
+final ComponentLog logger;
+
+
+public SMTPMessageHandlerFactory(LinkedBlockingQueue<SmtpEvent> incomingMessages, ComponentLog logger) {
+this.incomingMessages = incomingMessages;
+this.logger = logger;
+
+}
+
+@Override
+public MessageHandler create(MessageContext messageContext) {
+return new Handler(messageContext, incomingMessages, logger);
+}
+
+class Handler implements MessageHandler {
+final MessageContext messageContext;
+String from;
+String recipient;
+ByteArrayOutputStream messageData;
+
+private CountDownLatch latch;
+
+public Handler(MessageContext messageContext, LinkedBlockingQueue<SmtpEvent> incomingMessages, ComponentLog logger){
+this.messageContext = messageContext;
+this.latch =  new CountDownLatch(1);
+}
+
+@Override
+public void from(String from) throws RejectException {
+// TODO: possibly whitelist senders?
+this.from = from;
+}
+
+@Override
+public void recipient(String recipient) throws RejectException {
+// TODO: possibly whitelist receivers?
+this.recipient = recipient;
+}
+
+@Override
+public void data(InputStream inputStream) throws RejectException, 
TooMuchDataException {
+// Start counting the timer...
+
+StopWatch watch = new StopWatch(true);
+
+SMTPServer server = messageContext.getSMTPServer();
+
+final long serverTimeout = 
TimeUnit.MILLISECONDS.convert(messageContext.getSMTPServer().getConnectionTimeout(),
 TimeUnit.MILLISECONDS);
+
+ByteArrayOutputStream baos = new ByteArrayOutputStream();
+
+byte [] buffer = new byte[1024];
+
+int rd;
+
+try {
+while ((rd = inputStream.read(buffer, 0, buffer.length)) 
!= -1 ) {
+baos.write(buffer, 0, rd);
+if (baos.getBufferLength() > 
server.getMaxMessageSize() ) {
+throw new TooMuchDataException("Data exceeds the 
amount allowed.");
+}
+}
+baos.flush();
+} catch (IOException e) {
+throw new DropConnectionException(450, "Unexpected error 
processing your message. ");
+}
+
+this.messageData = baos;
+
+

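The read loop in `data()` above amounts to a bounded copy: accumulate the stream into memory and abort once it passes the server's maximum message size. A self-contained JDK sketch of the same pattern (`readAtMost` and `maxBytes` are stand-ins for the quoted code, which uses NiFi's own `ByteArrayOutputStream` with `getBufferLength()` rather than the JDK's `size()`):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Bounded stream copy: read until EOF, but fail as soon as the
// accumulated data exceeds the allowed maximum (analogous to
// server.getMaxMessageSize() in the SMTP handler above).
public class BoundedCopy {
    static byte[] readAtMost(InputStream in, int maxBytes) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int read;
        while ((read = in.read(buffer, 0, buffer.length)) != -1) {
            out.write(buffer, 0, read);
            if (out.size() > maxBytes) {
                // The handler above throws TooMuchDataException here.
                throw new IOException("Data exceeds the amount allowed.");
            }
        }
        return out.toByteArray();
    }
}
```

Note that the size check runs after each chunk is buffered, so up to one buffer's worth of excess data may be written before the copy is aborted; that matches the behavior of the loop under review.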
[jira] [Commented] (NIFI-1899) Create ListenSMTP & ExtractEmailAttachment processors

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375815#comment-15375815
 ] 

ASF GitHub Bot commented on NIFI-1899:
--

Github user trixpan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/483#discussion_r70714764
  
--- Diff: 
nifi-nar-bundles/nifi-email-bundle/nifi-email-processors/src/main/java/org/apache/nifi/processors/email/smtp/handler/SMTPMessageHandlerFactory.java
 ---
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.email.smtp.handler;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.security.cert.X509Certificate;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.stream.io.ByteArrayOutputStream;
+import org.apache.nifi.util.StopWatch;
+import org.subethamail.smtp.DropConnectionException;
+import org.subethamail.smtp.MessageContext;
+import org.subethamail.smtp.MessageHandler;
+import org.subethamail.smtp.MessageHandlerFactory;
+import org.subethamail.smtp.RejectException;
+import org.subethamail.smtp.TooMuchDataException;
+import org.subethamail.smtp.server.SMTPServer;
+
+import org.apache.nifi.processors.email.smtp.event.SmtpEvent;
+
+
+public class SMTPMessageHandlerFactory implements MessageHandlerFactory {
+final LinkedBlockingQueue<SmtpEvent> incomingMessages;
+final ComponentLog logger;
+
+
+public SMTPMessageHandlerFactory(LinkedBlockingQueue<SmtpEvent> incomingMessages, ComponentLog logger) {
+this.incomingMessages = incomingMessages;
+this.logger = logger;
+
+}
+
+@Override
+public MessageHandler create(MessageContext messageContext) {
+return new Handler(messageContext, incomingMessages, logger);
+}
+
+class Handler implements MessageHandler {
+final MessageContext messageContext;
+String from;
+String recipient;
+ByteArrayOutputStream messageData;
+
+private CountDownLatch latch;
+
+public Handler(MessageContext messageContext, LinkedBlockingQueue<SmtpEvent> incomingMessages, ComponentLog logger){
+this.messageContext = messageContext;
+this.latch =  new CountDownLatch(1);
+}
+
+@Override
+public void from(String from) throws RejectException {
+// TODO: possibly whitelist senders?
+this.from = from;
+}
+
+@Override
+public void recipient(String recipient) throws RejectException {
+// TODO: possibly whitelist receivers?
+this.recipient = recipient;
+}
+
+@Override
+public void data(InputStream inputStream) throws RejectException, 
TooMuchDataException {
+// Start counting the timer...
+
+StopWatch watch = new StopWatch(true);
+
+SMTPServer server = messageContext.getSMTPServer();
+
+final long serverTimeout = 
TimeUnit.MILLISECONDS.convert(messageContext.getSMTPServer().getConnectionTimeout(),
 TimeUnit.MILLISECONDS);
+
+ByteArrayOutputStream baos = new ByteArrayOutputStream();
+
+byte [] buffer = new byte[1024];
+
+int rd;
+
+try {
+while ((rd = inputStream.read(buffer, 0, buffer.length)) 
!= -1 ) {
+baos.write(buffer, 0, rd);
+if (baos.getBufferLength() > 
server.getMaxMessageSize() ) {
+throw new TooMuchDataException("Data exceeds the 
amount allowed.");
+}
+}
+baos.flush();
+} 

[jira] [Commented] (NIFI-1413) NiFi nodes should not fail to start just because its templates are not in-sync with the Cluster.

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375808#comment-15375808
 ] 

ASF GitHub Bot commented on NIFI-1413:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/596


> NiFi nodes should not fail to start just because its templates are not 
> in-sync with the Cluster.
> 
>
> Key: NIFI-1413
> URL: https://issues.apache.org/jira/browse/NIFI-1413
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Blocker
> Fix For: 1.0.0
>
>
> If a node tries to join a cluster, the node will fail to join and a 
> notification will indicate that the flow on the node is different than the 
> cluster's flow, just because the templates are different.
> Rather, we should rename the old template to ".standalone" or 
> something of that nature and then write the template to a new file if it 
> conflicts. If the template is simply missing, it should just be downloaded.
> We should make sure that NCM lists only those templates that are available to 
> it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1413) NiFi nodes should not fail to start just because its templates are not in-sync with the Cluster.

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375805#comment-15375805 ]

ASF GitHub Bot commented on NIFI-1413:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/596
  
+1 LGTM, built and ran with a 3-node cluster. Took one node offline, changed its template information, then restarted, and it reconnected successfully. Would be nice to have an integration test for that someday :)

Merging to master, thanks!







[jira] [Commented] (NIFI-1413) NiFi nodes should not fail to start just because its templates are not in-sync with the Cluster.

2016-07-13 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375806#comment-15375806 ]

ASF subversion and git services commented on NIFI-1413:
---

Commit 6f6e1b32d98af87c335772fc00089a63b23e7bdf in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=6f6e1b3 ]

NIFI-1413: Ensure that if a node's templates don't match the cluster's, we take the following actions:
- Local templates remain but aren't shown in the cluster's templates.
- Any templates from the cluster that don't exist on the node are added to the node.
- Any conflicting template definitions are replaced by those in the cluster.

This closes #596
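The three reconciliation actions listed in the commit message can be sketched as a map merge (illustrative only; `reconcile` and its Map-based signature are hypothetical, not NiFi's actual API):

```java
import java.util.HashMap;
import java.util.Map;

public class TemplateReconciler {

    /**
     * Hypothetical sketch of the reconciliation described in the commit:
     * cluster templates win conflicts, cluster-only templates are added to
     * the node, and local-only templates are kept aside rather than shown
     * cluster-wide.
     */
    public static Map<String, String> reconcile(Map<String, String> local,
                                                Map<String, String> cluster,
                                                Map<String, String> localOnlyOut) {
        // Conflicting definitions are replaced by the cluster's; missing ones are added.
        Map<String, String> effective = new HashMap<>(cluster);
        for (Map.Entry<String, String> e : local.entrySet()) {
            if (!cluster.containsKey(e.getKey())) {
                // Local-only templates remain, but only locally.
                localOnlyOut.put(e.getKey(), e.getValue());
            }
        }
        return effective;
    }

    public static void main(String[] args) {
        Map<String, String> local = new HashMap<>();
        local.put("t1", "local-def");
        local.put("t2", "local-only");
        Map<String, String> cluster = new HashMap<>();
        cluster.put("t1", "cluster-def");
        cluster.put("t3", "cluster-only");

        Map<String, String> kept = new HashMap<>();
        Map<String, String> effective = reconcile(local, cluster, kept);
        System.out.println(effective.get("t1")); // prints cluster-def
        System.out.println(kept.get("t2"));      // prints local-only
    }
}
```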







[GitHub] nifi pull request #596: NIFI-1413: Ensure that if a node's templates don't m...

2016-07-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/596


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #596: NIFI-1413: Ensure that if a node's templates don't match th...

2016-07-13 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/596
  
+1 LGTM, built and ran with a 3-node cluster. Took one node offline, changed its template information, then restarted, and it reconnected successfully. Would be nice to have an integration test for that someday :)

Merging to master, thanks!




[GitHub] nifi pull request #575: NIFI-619: Make MonitorActivity more cluster friendly

2016-07-13 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/575#discussion_r70711686
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/MonitorActivity.java ---
@@ -234,4 +361,10 @@ public void process(final OutputStream out) throws IOException {
 }
 }
 }
+
+@OnPrimaryNodeStateChange
+public void onPrimaryNodeChange(final PrimaryNodeState newState) {
+    isPrimaryNode = (newState == PrimaryNodeState.ELECTED_PRIMARY_NODE);
--- End diff --

I am running a secure cluster, and when I have "primary node only" configured for the sending of the messages, they never get sent. I believe this is because the primary node has been the same since the processor was added and has never changed, so this method would never have been called.

I think this needs some way to know that the processor is running on the primary node when it gets scheduled or initialized. I'm not sure if that is possible at the moment, though.
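The gap described above can be modeled in a few lines of plain Java (hypothetical stand-ins for NiFi's annotation-driven callback types; this is not the processor's actual code):

```java
// Hypothetical stand-in for NiFi's PrimaryNodeState, used only to
// illustrate the initialization gap discussed in the review comment.
enum PrimaryNodeState { ELECTED_PRIMARY_NODE, PRIMARY_NODE_REVOKED }

class PrimaryNodeTracker {
    // Starts false; nothing updates it until the first state-change event.
    private volatile boolean isPrimaryNode = false;

    // Mirrors the @OnPrimaryNodeStateChange handler in the diff:
    // invoked only when the elected primary node actually changes.
    void onPrimaryNodeChange(PrimaryNodeState newState) {
        isPrimaryNode = (newState == PrimaryNodeState.ELECTED_PRIMARY_NODE);
    }

    boolean isPrimaryNode() {
        return isPrimaryNode;
    }
}

public class PrimaryNodeGapDemo {
    public static void main(String[] args) {
        PrimaryNodeTracker tracker = new PrimaryNodeTracker();
        // A processor added while this node is already primary sees a stale
        // flag, because no change event fires after it is scheduled.
        System.out.println(tracker.isPrimaryNode()); // prints false

        // Only after an election event does the flag become accurate.
        tracker.onPrimaryNodeChange(PrimaryNodeState.ELECTED_PRIMARY_NODE);
        System.out.println(tracker.isPrimaryNode()); // prints true
    }
}
```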




[jira] [Commented] (NIFI-2254) UI - Fixing URI when reloading/updating component

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375724#comment-15375724 ]

ASF GitHub Bot commented on NIFI-2254:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/644


> UI - Fixing URI when reloading/updating component
> -
>
> Key: NIFI-2254
> URL: https://issues.apache.org/jira/browse/NIFI-2254
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Blocker
> Fix For: 1.0.0
>
>
> The URI moved from the component configuration into the component entity. 
> This was overlooked when reloading/updating a component.





[GitHub] nifi issue #644: NIFI-2254: Fixing URI when reloading/updating component

2016-07-13 Thread JPercivall
Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/644
  
+1

Code passes contrib check, verified the problem and that the PR fixes the 
problem. Will merge it in




[jira] [Commented] (NIFI-2254) UI - Fixing URI when reloading/updating component

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375717#comment-15375717 ]

ASF GitHub Bot commented on NIFI-2254:
--

Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/644
  
+1

Code passes contrib check, verified the problem and that the PR fixes the 
problem. Will merge it in







[jira] [Created] (NIFI-2255) ConsumeJMS does not detect dead connections

2016-07-13 Thread Christopher McDermott (JIRA)
Christopher McDermott created NIFI-2255:
---

 Summary: ConsumeJMS does not detect dead connections
 Key: NIFI-2255
 URL: https://issues.apache.org/jira/browse/NIFI-2255
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.7.0, 1.0.0
Reporter: Christopher McDermott


Once the connection is dead, ConsumeJMS will no longer receive any messages, even after the broker becomes available again, because it will not open a new connection.

To reproduce: run 'reboot -f now' on the node running the JMS broker. ConsumeJMS will not notice a problem until the TCP timeout fires, which could take hours or days, or never happen at all.
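A sketch of the missing behavior, against a hypothetical `MessageSource` interface rather than the actual ConsumeJMS internals: treat any receive failure as a dead connection, drop it, and rebuild on the next attempt instead of waiting on the TCP timeout:

```java
import java.util.function.Supplier;

// Hypothetical stand-in for a JMS consumer connection.
interface MessageSource extends AutoCloseable {
    String receive() throws Exception;  // throws once the connection is dead
    @Override void close();
}

public class ReconnectingConsumer {
    private final Supplier<MessageSource> factory;
    private MessageSource source;

    public ReconnectingConsumer(Supplier<MessageSource> factory) {
        this.factory = factory;
    }

    /** Receives one message, rebuilding the connection after a failure. */
    public String receiveOnce() {
        if (source == null) {
            source = factory.get();
        }
        try {
            return source.receive();
        } catch (Exception e) {
            // Connection is presumed dead: drop it so the next call reconnects.
            source.close();
            source = null;
            return null;
        }
    }

    public static void main(String[] args) {
        ReconnectingConsumer consumer = new ReconnectingConsumer(() -> new MessageSource() {
            public String receive() { return "hello"; }
            public void close() {}
        });
        System.out.println(consumer.receiveOnce()); // prints hello
    }
}
```

In real JMS code the equivalent signal would come from the connection's ExceptionListener or from a receive error, but the recovery step is the same: discard and recreate rather than retry on the dead connection.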






[jira] [Updated] (NIFI-2252) Unable to add controller service to Controller Level

2016-07-13 Thread Matt Gilman (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Gilman updated NIFI-2252:
--
Assignee: Mark Payne

> Unable to add controller service to Controller Level
> 
>
> Key: NIFI-2252
> URL: https://issues.apache.org/jira/browse/NIFI-2252
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Blocker
> Fix For: 1.0.0
>
>
> When I attempted to add a controller service to the Controller level, I got 
> an error indicating that there was a timeout exception on the replication.





[jira] [Commented] (NIFI-2252) Unable to add controller service to Controller Level

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375517#comment-15375517 ]

ASF GitHub Bot commented on NIFI-2252:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/641







[jira] [Commented] (NIFI-2252) Unable to add controller service to Controller Level

2016-07-13 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375519#comment-15375519 ]

ASF GitHub Bot commented on NIFI-2252:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/641
  
Looks good @markap14! This has been merged to master.







[jira] [Commented] (NIFI-2252) Unable to add controller service to Controller Level

2016-07-13 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375516#comment-15375516 ]

ASF subversion and git services commented on NIFI-2252:
---

Commit 6b87e1ea84d1ec42d7bdb6a7a335c8cc04416779 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=6b87e1e ]

NIFI-2252: Fixed issue with POST to Controller Resource createControllerService, and ensured that the URI is set on the entity. This closes #641







[GitHub] nifi issue #641: NIFI-2252: Fixed issue where POST to Controller Resource cr...

2016-07-13 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/641
  
Looks good @markap14! This has been merged to master.



