[GitHub] nifi issue #1937: NIFI-4105 support the specified Maximum value column and C...

2017-10-17 Thread ggthename
Github user ggthename commented on the issue:

https://github.com/apache/nifi/pull/1937
  
Thank you for your feedback. There were lots of conflicts, but I finished the 
rebase operation. 


---


[jira] [Commented] (NIFI-4105) support the specified Maximum value column and CSV Stream for Cassandra

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16208887#comment-16208887
 ] 

ASF GitHub Bot commented on NIFI-4105:
--

Github user ggthename commented on the issue:

https://github.com/apache/nifi/pull/1937
  
Thank you for your feedback. There were lots of conflicts, but I finished the 
rebase operation. 


> support the specified Maximum value column and CSV Stream for Cassandra
> ---
>
> Key: NIFI-4105
> URL: https://issues.apache.org/jira/browse/NIFI-4105
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Yoonwon Ko
>
> I'm trying to find a Cassandra processor that fetches rows whose values in the 
> specified Maximum Value columns are larger than the previously-seen maximum, 
> the way QueryDatabaseTable does.
> I found only QueryCassandra, which just executes the same CQL every time 
> without keeping track of a maximum value.
> I also think we need a convertToCsvStream option.
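For readers unfamiliar with the pattern being requested, the maximum-value-column behavior of QueryDatabaseTable can be sketched as follows. This is a self-contained illustration, not NiFi code; the class and method names are hypothetical (NiFi itself persists the watermark via its state management facility).

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "Maximum Value Column" idea: remember the largest value seen
// in a tracked column and fetch only rows beyond it on the next run.
// All names here are hypothetical, not NiFi APIs.
class MaxValueColumnSketch {
    private long lastSeenMax = Long.MIN_VALUE;

    // rows.get(i)[0] is the tracked column (e.g. an id or timestamp).
    List<long[]> fetchNewRows(List<long[]> rows) {
        List<long[]> fresh = new ArrayList<>();
        for (long[] row : rows) {
            if (row[0] > lastSeenMax) {
                fresh.add(row);
            }
        }
        // Advance the watermark so subsequent runs skip these rows.
        for (long[] row : fresh) {
            lastSeenMax = Math.max(lastSeenMax, row[0]);
        }
        return fresh;
    }
}
```

QueryCassandra, as the reporter notes, has no such state: it re-runs the same CQL each time, returning all rows on every execution.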



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3950) Separate AWS ControllerService API

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16208807#comment-16208807
 ] 

ASF GitHub Bot commented on NIFI-3950:
--

Github user christophercurrie commented on the issue:

https://github.com/apache/nifi/pull/2140
  
Yes, though I'm not sure what action items are left for me at this point.


> Separate AWS ControllerService API
> --
>
> Key: NIFI-3950
> URL: https://issues.apache.org/jira/browse/NIFI-3950
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: James Wing
>Priority: Minor
>
> The nifi-aws-bundle currently contains the interface for the 
> AWSCredentialsProviderService as well as the service implementation, and 
> dependent abstract classes and processor classes.
> This results in the following warning logged as NiFi loads:
> {quote}
> org.apache.nifi.nar.ExtensionManager Component 
> org.apache.nifi.processors.aws.s3.PutS3Object is bundled with its referenced 
> Controller Service APIs 
> org.apache.nifi.processors.aws.credentials.provider.service.AWSCredentialsProviderService.
>  The service APIs should not be bundled with component implementations that 
> reference it.
> {quote}
> Some [discussion of this issue and potential solutions occurred on the dev 
> list|http://apache-nifi.1125220.n5.nabble.com/Duplicated-processors-when-using-nifi-processors-dependency-td17038.html].
> We also need a migration plan in addition to the new structure.





[GitHub] nifi issue #2140: NIFI-3950 Refactor AWS bundle

2017-10-17 Thread christophercurrie
Github user christophercurrie commented on the issue:

https://github.com/apache/nifi/pull/2140
  
Yes, though I'm not sure what action items are left for me at this point.


---


[jira] [Commented] (NIFI-3950) Separate AWS ControllerService API

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16208800#comment-16208800
 ] 

ASF GitHub Bot commented on NIFI-3950:
--

Github user jvwing commented on the issue:

https://github.com/apache/nifi/pull/2140
  
@christophercurrie I think we're pretty close on this PR; any interest in 
continuing?


> Separate AWS ControllerService API
> --
>
> Key: NIFI-3950
> URL: https://issues.apache.org/jira/browse/NIFI-3950
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: James Wing
>Priority: Minor
>
> The nifi-aws-bundle currently contains the interface for the 
> AWSCredentialsProviderService as well as the service implementation, and 
> dependent abstract classes and processor classes.
> This results in the following warning logged as NiFi loads:
> {quote}
> org.apache.nifi.nar.ExtensionManager Component 
> org.apache.nifi.processors.aws.s3.PutS3Object is bundled with its referenced 
> Controller Service APIs 
> org.apache.nifi.processors.aws.credentials.provider.service.AWSCredentialsProviderService.
>  The service APIs should not be bundled with component implementations that 
> reference it.
> {quote}
> Some [discussion of this issue and potential solutions occurred on the dev 
> list|http://apache-nifi.1125220.n5.nabble.com/Duplicated-processors-when-using-nifi-processors-dependency-td17038.html].
> We also need a migration plan in addition to the new structure.





[GitHub] nifi issue #2140: NIFI-3950 Refactor AWS bundle

2017-10-17 Thread jvwing
Github user jvwing commented on the issue:

https://github.com/apache/nifi/pull/2140
  
@christophercurrie I think we're pretty close on this PR; any interest in 
continuing?


---


[jira] [Created] (NIFI-4494) Add a FetchOracleRow processor

2017-10-17 Thread Fred Liu (JIRA)
Fred Liu created NIFI-4494:
--

 Summary: Add a FetchOracleRow processor
 Key: NIFI-4494
 URL: https://issues.apache.org/jira/browse/NIFI-4494
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
 Environment: oracle
Reporter: Fred Liu


We encounter a lot of demand for ingesting data of poor quality: no primary key, 
no timestamp, and even a lot of duplicate rows. Yet customers require high 
performance and accuracy.

Using GenerateTableFetch or QueryDatabaseTable, we cannot meet the functional 
and performance requirements. So we want to add a new processor, specific to the 
Oracle database, that can ingest very poor quality data with better performance.





[jira] [Commented] (NIFIREG-36) Add Jackson JsonInclude Annotations to NiFi Registry Data Model classes

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-36?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16208730#comment-16208730
 ] 

ASF GitHub Bot commented on NIFIREG-36:
---

GitHub user kevdoran opened a pull request:

https://github.com/apache/nifi-registry/pull/23

NIFIREG-36: Add Jackson JsonInclude filters for null fields in data m…

…odel classes

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevdoran/nifi-registry NIFIREG-36

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/23.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #23


commit 9f0b2f87b02899de319de54a9c9f0b23073d8c7f
Author: Kevin Doran 
Date:   2017-10-18T02:33:56Z

NIFIREG-36: Add Jackson JsonInclude filters for null fields in data model 
classes




> Add Jackson JsonInclude Annotations to NiFi Registry Data Model classes
> ---
>
> Key: NIFIREG-36
> URL: https://issues.apache.org/jira/browse/NIFIREG-36
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Kevin Doran
> Fix For: 0.0.1
>
>
> Currently, NiFi Registry responses include null values for optional fields in 
> the serialized Json. This ticket is to add Jackson annotations that prevent 
> explicit "null" from being serialized for optional fields, instead just 
> omitting the optional fields, as not all clients/frameworks can interpret 
> null correctly.





[GitHub] nifi-registry pull request #23: NIFIREG-36: Add Jackson JsonInclude filters ...

2017-10-17 Thread kevdoran
GitHub user kevdoran opened a pull request:

https://github.com/apache/nifi-registry/pull/23

NIFIREG-36: Add Jackson JsonInclude filters for null fields in data m…

…odel classes

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevdoran/nifi-registry NIFIREG-36

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/23.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #23


commit 9f0b2f87b02899de319de54a9c9f0b23073d8c7f
Author: Kevin Doran 
Date:   2017-10-18T02:33:56Z

NIFIREG-36: Add Jackson JsonInclude filters for null fields in data model 
classes




---


[jira] [Created] (NIFIREG-36) Add Jackson JsonInclude Annotations to NiFi Registry Data Model classes

2017-10-17 Thread Kevin Doran (JIRA)
Kevin Doran created NIFIREG-36:
--

 Summary: Add Jackson JsonInclude Annotations to NiFi Registry Data 
Model classes
 Key: NIFIREG-36
 URL: https://issues.apache.org/jira/browse/NIFIREG-36
 Project: NiFi Registry
  Issue Type: Improvement
Reporter: Kevin Doran
Assignee: Kevin Doran
 Fix For: 0.0.1


Currently, NiFi Registry responses include null values for optional fields in 
the serialized Json. This ticket is to add Jackson annotations that prevent 
explicit "null" from being serialized for optional fields, instead just 
omitting the optional fields, as not all clients/frameworks can interpret null 
correctly.
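The change the ticket describes amounts to annotating the data model classes roughly as below. This is a sketch assuming Jackson 2.x; the class and field names are illustrative, not the actual Registry data model.

```java
import com.fasterxml.jackson.annotation.JsonInclude;

// With Include.NON_NULL, optional fields that are null are simply omitted
// from the serialized JSON instead of appearing as explicit "field": null.
@JsonInclude(JsonInclude.Include.NON_NULL)
public class VersionedItem {
    private String identifier;   // always set
    private String description;  // optional; omitted from JSON when null

    public String getIdentifier() { return identifier; }
    public void setIdentifier(String identifier) { this.identifier = identifier; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
}
```

With this annotation in place, serializing an instance whose `description` is null would yield only the `identifier` field in the JSON output.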





[jira] [Commented] (MINIFICPP-72) Add tar and compression support for MergeContent

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-72?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16208625#comment-16208625
 ] 

ASF GitHub Bot commented on MINIFICPP-72:
-

Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/146
  
@apiri @phrocker rebased.



> Add tar and compression support for MergeContent
> 
>
> Key: MINIFICPP-72
> URL: https://issues.apache.org/jira/browse/MINIFICPP-72
> Project: NiFi MiNiFi C++
>  Issue Type: New Feature
>Affects Versions: 1.0.0
>Reporter: bqiu
> Fix For: 1.0.0
>
>
> Add tar and compression support for MergeContent
> will use the https://www.libarchive.org





[GitHub] nifi-minifi-cpp issue #146: MINIFICPP-72: Add Tar and Zip Support for MergeC...

2017-10-17 Thread minifirocks
Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/146
  
@apiri @phrocker rebased.



---


[jira] [Reopened] (MINIFICPP-256) ExecuteProcess script uses wrong path

2017-10-17 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo reopened MINIFICPP-256:
--

> ExecuteProcess script uses wrong path
> -
>
> Key: MINIFICPP-256
> URL: https://issues.apache.org/jira/browse/MINIFICPP-256
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Fredrick Stakem
>Assignee: marco polo
> Fix For: 0.3.0
>
> Attachments: 1.csv, 2.csv, basic_minifi_test.xml, config.yml, 
> process.py
>
>
> I am running a test using NiFi to create a flow and then importing that flow 
> into MiNiFi C++. The flow works as expected in NiFi. 
> The flow takes a file from an input directory and places it into a processing 
> directory. In the background, another ExecuteProcess processor runs a simple 
> Python script that watches the processing directory, picks up any files, parses 
> them, and exports the results to an output directory.
> As stated before, everything works as expected in NiFi, but in MiNiFi C++ the 
> files end up in the root folder of MiNiFi C++, not in the output directory.





[jira] [Moved] (MINIFICPP-260) C2NullConfiguration fails due to a segfault on travis.

2017-10-17 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri moved MINIFI-406 to MINIFICPP-260:
--

Key: MINIFICPP-260  (was: MINIFI-406)
Project: NiFi MiNiFi C++  (was: Apache NiFi MiNiFi)

> C2NullConfiguration fails due to a segfault on travis. 
> ---
>
> Key: MINIFICPP-260
> URL: https://issues.apache.org/jira/browse/MINIFICPP-260
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: marco polo
>Assignee: marco polo
>






[jira] [Commented] (MINIFICPP-256) ExecuteProcess script uses wrong path

2017-10-17 Thread Fredrick Stakem (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16208295#comment-16208295
 ] 

Fredrick Stakem commented on MINIFICPP-256:
---

Yes, I was never able to resolve this. I have been busy on other issues for a 
week and on vacation but will try to get back to it this week.

> ExecuteProcess script uses wrong path
> -
>
> Key: MINIFICPP-256
> URL: https://issues.apache.org/jira/browse/MINIFICPP-256
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Fredrick Stakem
>Assignee: marco polo
> Fix For: 0.3.0
>
> Attachments: 1.csv, 2.csv, basic_minifi_test.xml, config.yml, 
> process.py
>
>
> I am running a test using NiFi to create a flow and then importing that flow 
> into MiNiFi C++. The flow works as expected in NiFi. 
> The flow takes a file from an input directory and places it into a processing 
> directory. In the background, another ExecuteProcess processor runs a simple 
> Python script that watches the processing directory, picks up any files, parses 
> them, and exports the results to an output directory.
> As stated before, everything works as expected in NiFi, but in MiNiFi C++ the 
> files end up in the root folder of MiNiFi C++, not in the output directory.





[GitHub] nifi-registry pull request #22: Initial NiFi Registry Client

2017-10-17 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi-registry/pull/22

Initial NiFi Registry Client

Adds a new module - nifi-registry-client - which uses Jersey client to 
interact with the REST API.

Refactored a couple of classes to the data-model project to be shared by 
client and framework.

I left this as two commits for now, since the second commit adds exception 
handling; in case we want to change that approach, it would be easier to go 
back to the previous commit.

Manual test class is in src/test/java - TestJerseyNiFiRegistryClient.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi-registry client

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/22.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #22


commit d3fa8a798258345d3daedfb33ac1649c3c056c5b
Author: Bryan Bende 
Date:   2017-10-13T20:32:01Z

NIFIREG-35 Initial commit of nifi-registry-client

commit 215cc9a6ba0b1b04b2c168ca1e0875ad9a96bf23
Author: Bryan Bende 
Date:   2017-10-17T19:03:36Z

NIFIREG-35 Adding exception handling




---


[jira] [Created] (MINIFICPP-259) Readability and optimization improvement for GetTCP processor

2017-10-17 Thread Steven Imle (JIRA)
Steven Imle created MINIFICPP-259:
-

 Summary: Readability and optimization improvement for GetTCP 
processor
 Key: MINIFICPP-259
 URL: https://issues.apache.org/jira/browse/MINIFICPP-259
 Project: NiFi MiNiFi C++
  Issue Type: Improvement
Affects Versions: 0.3.0
Reporter: Steven Imle


Should use a one-line return statement in `SocketAfterExecute::isCancelled`.

The code is functionally correct but may compile down to something less efficient 
than intended.
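The suggested simplification is shown below as a sketch in Java for illustration; the actual `SocketAfterExecute::isCancelled` is C++, and the condition names here are hypothetical.

```java
// Illustration of the one-line-return refactor MINIFICPP-259 suggests.
class ReturnStyle {
    // Before: branchy form that is functionally correct but more verbose.
    static boolean isCancelledVerbose(boolean stopped, boolean errored) {
        if (stopped || errored) {
            return true;
        }
        return false;
    }

    // After: the one-line return statement, same behavior.
    static boolean isCancelled(boolean stopped, boolean errored) {
        return stopped || errored;
    }
}
```

Both forms are equivalent; the one-line version just states the boolean result directly instead of routing it through a conditional.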





[jira] [Created] (NIFIREG-35) Implement a client for the REST API

2017-10-17 Thread Bryan Bende (JIRA)
Bryan Bende created NIFIREG-35:
--

 Summary: Implement a client for the REST API
 Key: NIFIREG-35
 URL: https://issues.apache.org/jira/browse/NIFIREG-35
 Project: NiFi Registry
  Issue Type: Improvement
Reporter: Bryan Bende
Assignee: Bryan Bende
 Fix For: 0.0.1


It would be helpful to offer a basic client for interacting with the REST API 
and save everyone the work of setting up the plumbing for Jersey client or some 
other library.
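The kind of plumbing such a client would save is sketched below using the JDK's built-in `java.net.http.HttpClient` for illustration. The actual client in PR #22 is built on Jersey, and the base URL and `/buckets` path here are illustrative assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

// Sketch of the request-building boilerplate a registry client would wrap.
// Endpoint path and base URL are hypothetical, not the Registry's actual API.
class RegistryClientSketch {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    RegistryClientSketch(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Builds (but does not send) a GET request for a listing endpoint.
    HttpRequest listBucketsRequest() {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/buckets"))
                .header("Accept", "application/json")
                .GET()
                .build();
    }
}
```

A shared client hides this setup, plus error handling and (de)serialization, behind typed methods.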





[jira] [Commented] (MINIFICPP-39) Create FocusArchive processor

2017-10-17 Thread Andrew Christianson (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-39?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16208156#comment-16208156
 ] 

Andrew Christianson commented on MINIFICPP-39:
--

Thanks. Will check it out and get back to you.

> Create FocusArchive processor
> -
>
> Key: MINIFICPP-39
> URL: https://issues.apache.org/jira/browse/MINIFICPP-39
> Project: NiFi MiNiFi C++
>  Issue Type: Task
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Minor
>
> Create a FocusArchive processor that implements a lens over an archive 
> (tar, etc.). A concise, though informal, definition of a lens is as follows:
> "Essentially, they represent the act of “peering into” or “focusing in on” 
> some particular piece/path of a complex data object such that you can more 
> precisely target particular operations without losing the context or 
> structure of the overall data you’re working with." 
> https://medium.com/@dtipson/functional-lenses-d1aba9e52254#.hdgsvbraq
> Why a FocusArchive in MiNiFi? Simply put, it will enable us to "focus in on" 
> an entry in the archive, perform processing *in-context* of that entry, then 
> re-focus on the overall archive. This allows for transformation or other 
> processing of an entry in the archive without losing the overall context of 
> the archive.
> Initial format support is tar, due to its simplicity and ubiquity.
> Attributes:
> - Path (the path in the archive to focus; "/" to re-focus the overall archive)





[jira] [Updated] (NIFI-1140) Allow for attributes to be marked sensitive when being added to FlowFile

2017-10-17 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-1140:

Labels: attributes provenance security sensitive  (was: )

> Allow for attributes to be marked sensitive when being added to FlowFile
> 
>
> Key: NIFI-1140
> URL: https://issues.apache.org/jira/browse/NIFI-1140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Andy LoPresto
>  Labels: attributes, provenance, security, sensitive
>
> We should allow attributes to be marked as sensitive so that Provenance 
> Events that are viewed do not contain sensitive information. A good example 
> of this is PII data that should not be exposed when viewing a Provenance 
> Event.





[jira] [Assigned] (NIFI-1140) Allow for attributes to be marked sensitive when being added to FlowFile

2017-10-17 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto reassigned NIFI-1140:
---

Assignee: Andy LoPresto

> Allow for attributes to be marked sensitive when being added to FlowFile
> 
>
> Key: NIFI-1140
> URL: https://issues.apache.org/jira/browse/NIFI-1140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Andy LoPresto
>
> We should allow attributes to be marked as sensitive so that Provenance 
> Events that are viewed do not contain sensitive information. A good example 
> of this is PII data that should not be exposed when viewing a Provenance 
> Event.





[jira] [Commented] (MINIFICPP-39) Create FocusArchive processor

2017-10-17 Thread Caleb Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-39?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16208033#comment-16208033
 ] 

Caleb Johnson commented on MINIFICPP-39:


[~achristianson], check the MINIFI-244-rc branch for what I think is ready to 
PR. It has full tests via PutFile, and has been rebased, squashed, and linted 
(is that what you call a clean linter pass?).

I can't get rocksdb to build on cloud9 for some reason, but the Un/FocusArchive 
tests build and run. Unfortunately, cloud9 doesn't have support for Docker.

> Create FocusArchive processor
> -
>
> Key: MINIFICPP-39
> URL: https://issues.apache.org/jira/browse/MINIFICPP-39
> Project: NiFi MiNiFi C++
>  Issue Type: Task
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Minor
>
> Create a FocusArchive processor that implements a lens over an archive 
> (tar, etc.). A concise, though informal, definition of a lens is as follows:
> "Essentially, they represent the act of “peering into” or “focusing in on” 
> some particular piece/path of a complex data object such that you can more 
> precisely target particular operations without losing the context or 
> structure of the overall data you’re working with." 
> https://medium.com/@dtipson/functional-lenses-d1aba9e52254#.hdgsvbraq
> Why a FocusArchive in MiNiFi? Simply put, it will enable us to "focus in on" 
> an entry in the archive, perform processing *in-context* of that entry, then 
> re-focus on the overall archive. This allows for transformation or other 
> processing of an entry in the archive without losing the overall context of 
> the archive.
> Initial format support is tar, due to its simplicity and ubiquity.
> Attributes:
> - Path (the path in the archive to focus; "/" to re-focus the overall archive)





[jira] [Commented] (MINIFICPP-256) ExecuteProcess script uses wrong path

2017-10-17 Thread marco polo (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207989#comment-16207989
 ] 

marco polo commented on MINIFICPP-256:
--

[~fstakem] As I understand it this issue still exists for you, correct? If so 
we will re-open it. 

> ExecuteProcess script uses wrong path
> -
>
> Key: MINIFICPP-256
> URL: https://issues.apache.org/jira/browse/MINIFICPP-256
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Fredrick Stakem
>Assignee: marco polo
> Fix For: 0.3.0
>
> Attachments: 1.csv, 2.csv, basic_minifi_test.xml, config.yml, 
> process.py
>
>
> I am running a test using NiFi to create a flow and then importing that flow 
> into MiNiFi C++. The flow works as expected in NiFi. 
> The flow takes a file from an input directory and places it into a processing 
> directory. In the background, another ExecuteProcess processor runs a simple 
> Python script that watches the processing directory, picks up any files, parses 
> them, and exports the results to an output directory.
> As stated before, everything works as expected in NiFi, but in MiNiFi C++ the 
> files end up in the root folder of MiNiFi C++, not in the output directory.





[jira] [Updated] (MINIFICPP-256) ExecuteProcess script uses wrong path

2017-10-17 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated MINIFICPP-256:
--
Fix Version/s: 0.3.0

> ExecuteProcess script uses wrong path
> -
>
> Key: MINIFICPP-256
> URL: https://issues.apache.org/jira/browse/MINIFICPP-256
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Fredrick Stakem
>Assignee: marco polo
> Fix For: 0.3.0
>
> Attachments: 1.csv, 2.csv, basic_minifi_test.xml, config.yml, 
> process.py
>
>
> I am running a test using NiFi to create a flow and then importing that flow 
> into MiNiFi C++. The flow works as expected in NiFi. 
> The flow takes a file from an input directory and places it into a processing 
> directory. In the background, another ExecuteProcess processor runs a simple 
> Python script that watches the processing directory, picks up any files, parses 
> them, and exports the results to an output directory.
> As stated before, everything works as expected in NiFi, but in MiNiFi C++ the 
> files end up in the root folder of MiNiFi C++, not in the output directory.





[jira] [Created] (MINIFICPP-258) C2NullConfiguration fails due to a segfault on travis.

2017-10-17 Thread marco polo (JIRA)
marco polo created MINIFICPP-258:


 Summary: C2NullConfiguration fails due to a segfault on travis. 
 Key: MINIFICPP-258
 URL: https://issues.apache.org/jira/browse/MINIFICPP-258
 Project: NiFi MiNiFi C++
  Issue Type: Bug
Reporter: marco polo
Assignee: marco polo








[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207902#comment-16207902
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145189423
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try (DataFileStream<GenericRecord> r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

OK, let's change that to MapRecord; I'm sure it's reliable because we use 
it on our dev platform. As for the "instanceof Map" check you suggest, I'm not 
sure of the impact. 


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in its definition leads to errors 
> when loading Avro records.





[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207903#comment-16207903
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145189535
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try (DataFileStream<GenericRecord> r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

And if you also test it, it's OK.


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in its definition leads to errors 
> when loading Avro records.





[GitHub] nifi pull request #2207: NIFI-4441 patch avro maps in union types

2017-10-17 Thread frett27
Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145189535
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try (DataFileStream<GenericRecord> r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

And if you also test it, it's OK.


---


[GitHub] nifi pull request #2207: NIFI-4441 patch avro maps in union types

2017-10-17 Thread frett27
Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145189423
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try(DataFileStream r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

OK, let's change that to MapRecord; I'm sure it's reliable because we use 
it on our dev platform. As for the "instanceof Map" check you suggest, I'm not 
sure of the impact. 


---


[GitHub] nifi-minifi-cpp pull request #147: MINIFI-256: Resolve Putfile name and ensu...

2017-10-17 Thread phrocker
Github user phrocker closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/147


---


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207881#comment-16207881
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145185126
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try(DataFileStream r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

I have the following schema in my test flow: 
`{
 "type": "record",
 "name": "A","fields": [
   {"name": "a", "type": "string"},
   {"name": "c", "type": [ "null", {"type" : "map","values" : "string"} ] } 
 ]
}`

When I debug through the processor to AvroTypeUtil, I get a MapRecord as 
the type of "value", not Map. If it could be either, we could just check for 
either?


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in the definition leads to errors 
> in loading Avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207882#comment-16207882
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145185287
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try(DataFileStream r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

I can post my flow and sample Avro file if you'd like to see what I mean


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in the definition leads to errors 
> in loading Avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2207: NIFI-4441 patch avro maps in union types

2017-10-17 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145185287
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try(DataFileStream r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

I can post my flow and sample Avro file if you'd like to see what I mean


---


[GitHub] nifi pull request #2207: NIFI-4441 patch avro maps in union types

2017-10-17 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145185126
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try(DataFileStream r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

I have the following schema in my test flow: 
`{
 "type": "record",
 "name": "A","fields": [
   {"name": "a", "type": "string"},
   {"name": "c", "type": [ "null", {"type" : "map","values" : "string"} ] } 
 ]
}`

When I debug through the processor to AvroTypeUtil, I get a MapRecord as 
the type of "value", not Map. If it could be either, we could just check for 
either?


---


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207847#comment-16207847
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145175950
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -691,6 +697,10 @@ private static boolean isCompatibleDataType(final 
Object value, final DataType d
 return true;
 }
 break;
+case MAP:
+if (value instanceof Map) {
--- End diff --

In initial debugging, the given value was a MapRecord. 


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in the definition leads to errors 
> in loading Avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (MINIFICPP-113) Move from LevelDB to Rocks DB for all repositories.

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207846#comment-16207846
 ] 

ASF GitHub Bot commented on MINIFICPP-113:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/142


> Move from LevelDB to Rocks DB for all repositories. 
> 
>
> Key: MINIFICPP-113
> URL: https://issues.apache.org/jira/browse/MINIFICPP-113
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: marco polo
>Assignee: marco polo
>Priority: Minor
>
> Can also be used as a file system repo where we want to minimize the number 
> of inodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2207: NIFI-4441 patch avro maps in union types

2017-10-17 Thread frett27
Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145175950
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -691,6 +697,10 @@ private static boolean isCompatibleDataType(final 
Object value, final DataType d
 return true;
 }
 break;
+case MAP:
+if (value instanceof Map) {
--- End diff --

In initial debugging, the given value was a MapRecord. 


---


[GitHub] nifi-minifi-cpp pull request #142: MINIFI-372: Replace leveldb with RocksDB

2017-10-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/142


---


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207845#comment-16207845
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145175558
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try(DataFileStream r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

my understanding was that the map's presence causes the issue in the union type,
whether or not the map is filled (an instance of Map will be returned).


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in the definition leads to errors 
> in loading Avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2207: NIFI-4441 patch avro maps in union types

2017-10-17 Thread frett27
Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145175558
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try(DataFileStream r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

my understanding was that the map's presence causes the issue in the union type,
whether or not the map is filled (an instance of Map will be returned).


---


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207844#comment-16207844
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145175249
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -691,6 +697,10 @@ private static boolean isCompatibleDataType(final 
Object value, final DataType d
 return true;
 }
 break;
+case MAP:
+if (value instanceof Map) {
--- End diff --

MapRecord was my initial approach; you suggested opening it up to the Map type. 
The problem was when the map is present in the union ("null", "record").


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in the definition leads to errors 
> in loading Avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2207: NIFI-4441 patch avro maps in union types

2017-10-17 Thread frett27
Github user frett27 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145175249
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -691,6 +697,10 @@ private static boolean isCompatibleDataType(final 
Object value, final DataType d
 return true;
 }
 break;
+case MAP:
+if (value instanceof Map) {
--- End diff --

The MapRecord was my initial post, you suggested to open to Map , type. 
the probleme was when the map is present in the union ("null", "record").


---
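The check being debated above can be sketched without the NiFi or Avro dependencies. This is a minimal illustration, not NiFi's actual code: `MapRecord` below is a hypothetical stand-in for `org.apache.nifi.serialization.record.MapRecord`, and the MAP branch accepts either a plain `java.util.Map` or a `MapRecord`, since the reviewers observed both at runtime depending on how the value was decoded:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for NiFi's MapRecord class
// (the real one lives in org.apache.nifi.serialization.record).
class MapRecord {
}

class UnionTypeCheck {
    // Sketch of the MAP case in an isCompatibleDataType-style check:
    // accept a raw java.util.Map (raw Avro decoding) or a MapRecord
    // (record API), so both observed runtime shapes resolve the branch.
    static boolean isCompatibleMapValue(Object value) {
        return value instanceof Map || value instanceof MapRecord;
    }

    public static void main(String[] args) {
        System.out.println(isCompatibleMapValue(new HashMap<String, String>())); // true
        System.out.println(isCompatibleMapValue(new MapRecord()));               // true
        System.out.println(isCompatibleMapValue("not a map"));                   // false
    }
}
```

Checking both types is the conservative reading of the thread: it keeps the union branch resolvable regardless of which representation the reader produced.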


[jira] [Commented] (NIFI-4492) Add a AnonymizeRecord processor

2017-10-17 Thread Matt Burgess (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207840#comment-16207840
 ] 

Matt Burgess commented on NIFI-4492:


Just thinking that this capability might alternatively be implemented as an 
AnonymizeRecordSetWriter, which can be configured to use another 
RecordSetWriter for the actual output. This would allow you to just use 
ConvertRecord instead of a new processor. I'm good with whatever approach makes 
the most sense.

> Add a AnonymizeRecord processor
> ---
>
> Key: NIFI-4492
> URL: https://issues.apache.org/jira/browse/NIFI-4492
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>
> It may be desired or necessary to anonymize the data in flow files before 
> sending them to an external system. An example of such data is Personally 
> Identifiable Information (PII), and an example of such a system is a 
> HIPAA-regulated system where certain information cannot be present in a 
> certain form.
> It would be nice to have a record-aware processor that could anonymize 
> various fields in a record.  One possible implementation could leverage 
> [ARX|http://arx.deidentifier.org/], an Apache-licensed data anonymization 
> library.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp issue #147: MINIFI-256: Resolve Putfile name and ensure that...

2017-10-17 Thread minifirocks
Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/147
  
@phrocker merged to Apache; please close the PR.


---


[jira] [Resolved] (MINIFICPP-256) ExecuteProcess script uses wrong path

2017-10-17 Thread bqiu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bqiu resolved MINIFICPP-256.

Resolution: Fixed

Merged to Apache main.

> ExecuteProcess script uses wrong path
> -
>
> Key: MINIFICPP-256
> URL: https://issues.apache.org/jira/browse/MINIFICPP-256
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Fredrick Stakem
>Assignee: marco polo
> Attachments: 1.csv, 2.csv, basic_minifi_test.xml, config.yml, 
> process.py
>
>
> I am running a test using NiFi to create a flow and then import this flow 
> into MiNiFi C++. The flow seems to work as expected on NiFi. 
> The flow takes a file from an input directory and places it into the processing 
> directory. In the background another ExecuteProcess processor runs a simple 
> python script to look at the processing directory, get any files, parse the 
> files, and export to an output directory.
> As stated before, everything works as expected in NiFi, but in MiNiFi C++ the 
> files end up in the root folder of MiNiFi C++ and not the output directory.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp issue #147: MINIFI-256: Resolve Putfile name and ensure that...

2017-10-17 Thread minifirocks
Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/147
  

https://github.com/apache/nifi-minifi-cpp/commit/49ed5094552fe98c289d15168587ff3e63042309


---


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207793#comment-16207793
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145169320
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try(DataFileStream r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

Also it could check the records to make sure the field values are what you 
expect (null vs not-null, e.g.)


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in the definition leads to errors 
> in loading Avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207794#comment-16207794
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145169617
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -691,6 +697,10 @@ private static boolean isCompatibleDataType(final 
Object value, final DataType d
 return true;
 }
 break;
+case MAP:
+if (value instanceof Map) {
--- End diff --

What kind of flow (which processors, e.g.) did you use to test this with? 
When I use ConvertRecord, value is a MapRecord not a Map, which causes this not 
to work. Perhaps we should check for both here?


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in the definition leads to errors 
> in loading Avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207792#comment-16207792
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145169184
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try(DataFileStream r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

Can you explain more about what's going on here, including what is in the 
data.avro file? When I run avro-tools tojson on it, I get the following:

```
java -jar avro-tools-1.8.1.jar tojson datasets/data.avro
{"a.A":{"o":{"a.O":{"hash":{"map":{}}
```

Perhaps it would be good to have a test file that has a record with a 
non-null value for hash, as well as a record with a null value for hash?


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in the definition leads to errors 
> in loading Avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2207: NIFI-4441 patch avro maps in union types

2017-10-17 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145169320
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try(DataFileStream r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

Also it could check the records to make sure the field values are what you 
expect (null vs not-null, e.g.)


---


[GitHub] nifi pull request #2207: NIFI-4441 patch avro maps in union types

2017-10-17 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145169617
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -691,6 +697,10 @@ private static boolean isCompatibleDataType(final 
Object value, final DataType d
 return true;
 }
 break;
+case MAP:
+if (value instanceof Map) {
--- End diff --

What kind of flow (which processors, e.g.) did you use to test this with? 
When I use ConvertRecord, value is a MapRecord not a Map, which causes this not 
to work. Perhaps we should check for both here?


---


[GitHub] nifi pull request #2207: NIFI-4441 patch avro maps in union types

2017-10-17 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2207#discussion_r145169184
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/test/java/org/apache/nifi/avro/TestAvroTypeUtil.java
 ---
@@ -239,4 +243,20 @@ public void testComplicatedRecursiveSchema() {
 Assert.assertEquals(recordASchema, 
((RecordDataType)recordBParentField.get().getDataType()).getChildSchema());
 }
 
+@Test
+public void testMapWithNullSchema() throws IOException {
+
+Schema recursiveSchema = new 
Schema.Parser().parse(getClass().getResourceAsStream("schema.json"));
+
+// Make sure the following doesn't throw an exception
+RecordSchema recordASchema = 
AvroTypeUtil.createSchema(recursiveSchema.getTypes().get(0));
+
+// check the fix with the proper file
+try(DataFileStream r = new 
DataFileStream<>(getClass().getResourceAsStream("data.avro"),
--- End diff --

Can you explain more about what's going on here, including what is in the 
data.avro file? When I run avro-tools tojson on it, I get the following:

```
java -jar avro-tools-1.8.1.jar tojson datasets/data.avro
{"a.A":{"o":{"a.O":{"hash":{"map":{}}
```

Perhaps it would be good to have a test file that has a record with a 
non-null value for hash, as well as a record with a null value for hash?


---


[jira] [Updated] (NIFI-4493) PutCassandraQL Option to Disable Prepared Statements

2017-10-17 Thread Ben Thorner (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Thorner updated NIFI-4493:
--
Labels: cassandra putcassandraql  (was: )

> PutCassandraQL Option to Disable Prepared Statements
> 
>
> Key: NIFI-4493
> URL: https://issues.apache.org/jira/browse/NIFI-4493
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.4.0
>Reporter: Ben Thorner
>Priority: Minor
>  Labels: cassandra, putcassandraql
>
> Cassandra complains when using this processor to perform large numbers of 
> changing queries. In our scenario, we are using batch statements to insert 
> incoming data.
> INFO  [ScheduledTasks:1] 2017-10-17 16:13:35,213 QueryProcessor.java:134 - 
> 3849 prepared statements discarded in the last minute because cache limit 
> reached (66453504 bytes)
> In this scenario, I don't think it's feasible to use prepared statements, as 
> the number of ? parameters is impractical. Could we instead have an option to 
> disable prepared statements?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4493) PutCassandraQL Option to Disable Prepared Statements

2017-10-17 Thread Ben Thorner (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Thorner updated NIFI-4493:
--
Component/s: (was: Core Framework)
 Extensions

> PutCassandraQL Option to Disable Prepared Statements
> 
>
> Key: NIFI-4493
> URL: https://issues.apache.org/jira/browse/NIFI-4493
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.4.0
>Reporter: Ben Thorner
>Priority: Minor
>
> Cassandra complains when using this processor to perform large numbers of 
> changing queries. In our scenario, we are using batch statements to insert 
> incoming data.
> INFO  [ScheduledTasks:1] 2017-10-17 16:13:35,213 QueryProcessor.java:134 - 
> 3849 prepared statements discarded in the last minute because cache limit 
> reached (66453504 bytes)
> In this scenario, I don't think it's feasible to use prepared statements, as 
> the number of ? parameters is impractical. Could we instead have an option to 
> disable prepared statements?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4493) PutCassandraQL Option to Disable Prepared Statements

2017-10-17 Thread Ben Thorner (JIRA)
Ben Thorner created NIFI-4493:
-

 Summary: PutCassandraQL Option to Disable Prepared Statements
 Key: NIFI-4493
 URL: https://issues.apache.org/jira/browse/NIFI-4493
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.4.0
Reporter: Ben Thorner
Priority: Minor


Cassandra complains when using this processor to perform large numbers of 
changing queries. In our scenario, we are using batch statements to insert 
incoming data.

INFO  [ScheduledTasks:1] 2017-10-17 16:13:35,213 QueryProcessor.java:134 - 3849 
prepared statements discarded in the last minute because cache limit reached 
(66453504 bytes)

In this scenario, I don't think it's feasible to use prepared statements, as 
the number of ? parameters is impractical. Could we instead have an option to 
disable prepared statements?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
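The cache-thrash behavior behind the quoted "prepared statements discarded" warning can be illustrated with a bounded LRU cache standing in for the server-side prepared-statement cache. This is a sketch only, not Cassandra driver or server code; the capacity and statement counts are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative model of a bounded prepared-statement cache with LRU eviction.
// With many distinct CQL strings, most entries are evicted before any reuse,
// so preparing each statement buys nothing and just churns the cache.
class StatementCacheSketch {
    private final int capacity;
    private int evictions = 0;
    private final Map<String, Boolean> cache;

    StatementCacheSketch(int capacity) {
        this.capacity = capacity;
        // access-order LinkedHashMap gives LRU behavior
        this.cache = new LinkedHashMap<String, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                boolean evict = size() > StatementCacheSketch.this.capacity;
                if (evict) {
                    evictions++;
                }
                return evict;
            }
        };
    }

    void prepare(String cql) {
        cache.put(cql, Boolean.TRUE);
    }

    int evictions() {
        return evictions;
    }

    public static void main(String[] args) {
        StatementCacheSketch cache = new StatementCacheSketch(100);
        // 1000 distinct batch statements, as in the reporter's scenario
        for (int i = 0; i < 1000; i++) {
            cache.prepare("BEGIN BATCH INSERT ... VALUES (" + i + ") APPLY BATCH");
        }
        System.out.println(cache.evictions()); // 900: the cache thrashes
    }
}
```

This is why an option to send unprepared (simple) statements makes sense when each statement text is effectively unique: preparation only pays off when the same CQL string is executed repeatedly.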


[jira] [Updated] (NIFI-4276) Add Provenance Migration section to User Guide

2017-10-17 Thread Andrew Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lim updated NIFI-4276:
-
Fix Version/s: 1.4.0

> Add Provenance Migration section to User Guide
> --
>
> Key: NIFI-4276
> URL: https://issues.apache.org/jira/browse/NIFI-4276
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Minor
> Fix For: 1.4.0
>
>
> WriteAheadProvenanceRepository configuration was introduced in version 1.2.0 
> (NIFI-3356).  Should add a section to the documentation that covers how to 
> migrate from the default configuration of PersistentProvenanceRepository and 
> recommend any system property value changes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp issue #147: MINIFI-256: Resolve Putfile name and ensure that...

2017-10-17 Thread minifirocks
Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/147
  
@phrocker looks good.


---


[jira] [Commented] (NIFI-3689) TestWriteAheadStorePartition frequently causes travis failures

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207695#comment-16207695
 ] 

ASF GitHub Bot commented on NIFI-3689:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2214

NIFI-3689: Fixed threading bug in TestWriteAheadStorePartition - mult…

…iple threads were simultaneously attempting to update HashMap. Changed 
impl to ConcurrentHashMap.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-3689

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2214.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2214


commit c39bc0286703f9f958434a24ae19946292029f07
Author: Mark Payne 
Date:   2017-10-17T14:16:06Z

NIFI-3689: Fixed threading bug in TestWriteAheadStorePartition - multiple 
threads were simultaneously attempting to update HashMap. Changed impl to 
ConcurrentHashMap.
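
The race described in the commit (several threads simultaneously updating a plain HashMap) is the classic motivation for swapping in ConcurrentHashMap, whose per-key operations are atomic. The sketch below is a minimal standalone illustration of that difference, not the actual test code from the PR:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentUpdateDemo {

    // Run 4 threads that each bump a shared counter key 1000 times.
    static int hammer(Map<String, Integer> map) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int t = 0; t < 4; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 1000; i++) {
                    // merge() is atomic on ConcurrentHashMap; on a plain HashMap,
                    // concurrent structural updates can lose writes or corrupt buckets.
                    map.merge("count", 1, Integer::sum);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return map.getOrDefault("count", 0);
    }

    public static void main(String[] args) throws Exception {
        // With ConcurrentHashMap the result is deterministic: 4 * 1000 = 4000.
        System.out.println(hammer(new ConcurrentHashMap<>()));
    }
}
```

Running the same loop against a plain HashMap would occasionally drop updates or throw, which is exactly the kind of intermittent failure that surfaced in the Travis builds.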




> TestWriteAheadStorePartition frequently causes travis failures
> --
>
> Key: NIFI-3689
> URL: https://issues.apache.org/jira/browse/NIFI-3689
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Andre F de Miranda
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Apologies if this happens to be a duplicate (I did a search but could not 
> find anything similar)
> I notice a number of travis builds seem to fail during the execution of 
> TestWriteAheadStorePartition
> https://travis-ci.org/apache/nifi/jobs/220619894
> https://travis-ci.org/apache/nifi/jobs/220671755
> https://travis-ci.org/apache/nifi/jobs/217435194



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-3689) TestWriteAheadStorePartition frequently causes travis failures

2017-10-17 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-3689:
-
 Assignee: Mark Payne
Fix Version/s: 1.5.0
   Status: Patch Available  (was: Open)

> TestWriteAheadStorePartition frequently causes travis failures
> --
>
> Key: NIFI-3689
> URL: https://issues.apache.org/jira/browse/NIFI-3689
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Andre F de Miranda
>Assignee: Mark Payne
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Apologies if this happens to be a duplicate (I did a search but could not 
> find anything similar)
> I notice a number of travis builds seem to fail during the execution of 
> TestWriteAheadStorePartition
> https://travis-ci.org/apache/nifi/jobs/220619894
> https://travis-ci.org/apache/nifi/jobs/220671755
> https://travis-ci.org/apache/nifi/jobs/217435194



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2214: NIFI-3689: Fixed threading bug in TestWriteAheadSto...

2017-10-17 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2214

NIFI-3689: Fixed threading bug in TestWriteAheadStorePartition - mult…

…iple threads were simultaneously attempting to update HashMap. Changed 
impl to ConcurrentHashMap.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-3689

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2214.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2214


commit c39bc0286703f9f958434a24ae19946292029f07
Author: Mark Payne 
Date:   2017-10-17T14:16:06Z

NIFI-3689: Fixed threading bug in TestWriteAheadStorePartition - multiple 
threads were simultaneously attempting to update HashMap. Changed impl to 
ConcurrentHashMap.




---


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207686#comment-16207686
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2207
  
Reviewing...


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an Avro union type that contains maps in the definition leads to errors 
> when loading Avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2207: NIFI-4441 patch avro maps in union types

2017-10-17 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2207
  
Reviewing...


---


[jira] [Commented] (NIFI-4473) Add support for large result sets and normalizing Avro names to SelectHiveQL

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207682#comment-16207682
 ] 

ASF GitHub Bot commented on NIFI-4473:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2212#discussion_r145140092
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -243,95 +284,152 @@ private void onTrigger(final ProcessContext context, 
final ProcessSession sessio
 // If the query is not set, then an incoming flow file is 
required, and expected to contain a valid SQL select query.
 // If there is no incoming connection, onTrigger will not be 
called as the processor will fail when scheduled.
 final StringBuilder queryContents = new StringBuilder();
-session.read(fileToProcess, new InputStreamCallback() {
-@Override
-public void process(InputStream in) throws IOException {
-queryContents.append(IOUtils.toString(in));
-}
-});
+session.read(fileToProcess, in -> 
queryContents.append(IOUtils.toString(in, charset)));
 selectQuery = queryContents.toString();
 }
 
 
+final Integer fetchSize = 
context.getProperty(FETCH_SIZE).evaluateAttributeExpressions().asInteger();
--- End diff --

No reason, just a copy-paste from QueryDatabaseTable, which doesn't accept 
incoming connections. I will update that and any of the others. I'm not sure 
it's a valid use case either, but I can't see why we shouldn't allow it, just in case.
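
For readers outside the NiFi codebase, the distinction being discussed is whether a property is evaluated with only the environment in scope (`evaluateAttributeExpressions()`) or with the incoming flow file's attributes in scope as well (`evaluateAttributeExpressions(flowFile)`). The toy, stdlib-only sketch below illustrates that difference with a simplified `${attr}` substitution; it is not NiFi's real expression-language engine, and the `fetch_size` attribute name is purely hypothetical:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExpressionDemo {
    private static final Pattern EXPR = Pattern.compile("\\$\\{(\\w+)\\}");

    // Substitute ${name} placeholders from the given attribute map;
    // unknown names resolve to the empty string, loosely mirroring how
    // unresolvable expressions evaluate in NiFi.
    static String evaluate(String value, Map<String, String> attributes) {
        Matcher m = EXPR.matcher(value);
        StringBuilder out = new StringBuilder();
        int last = 0;
        while (m.find()) {
            out.append(value, last, m.start());
            out.append(attributes.getOrDefault(m.group(1), ""));
            last = m.end();
        }
        out.append(value.substring(last));
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> flowFileAttrs = Map.of("fetch_size", "10000");
        // Evaluated with the flow file's attributes in scope, the property resolves:
        System.out.println(evaluate("${fetch_size}", flowFileAttrs)); // 10000
        // Without the flow file, the expression has nothing to resolve against:
        System.out.println(evaluate("${fetch_size}", Map.of()));      // empty string
    }
}
```

Passing the flow file costs nothing when the property contains no expressions, which is why defaulting to flow-file-aware evaluation is a safe "just in case" choice here.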


> Add support for large result sets and normalizing Avro names to SelectHiveQL
> 
>
> Key: NIFI-4473
> URL: https://issues.apache.org/jira/browse/NIFI-4473
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> A number of enhancements were made to processors like QueryDatabaseTable to 
> allow for such things as:
> - Splitting result sets into multiple flow files (i.e. Max Rows Per Flowfile 
> property)
> - Max number of splits/rows returned (Max fragments)
> - Normalizing names to be Avro-compatible
> The RDBMS processors also now support Avro logical types, but the version of 
> Avro needed by the current version of Hive (1.2.1) is Avro 1.7.7, which does 
> not support logical types.
> These enhancements were made to JdbcCommon, but not to HiveJdbcCommon (the 
> Hive version of the JDBC utils class). Since Hive queries can return even 
> larger result sets than traditional RDBMS, these properties/enhancements are 
> at least as valuable to have for SelectHiveQL.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2212: NIFI-4473: Add support for large result sets and no...

2017-10-17 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2212#discussion_r145140092
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -243,95 +284,152 @@ private void onTrigger(final ProcessContext context, 
final ProcessSession sessio
 // If the query is not set, then an incoming flow file is 
required, and expected to contain a valid SQL select query.
 // If there is no incoming connection, onTrigger will not be 
called as the processor will fail when scheduled.
 final StringBuilder queryContents = new StringBuilder();
-session.read(fileToProcess, new InputStreamCallback() {
-@Override
-public void process(InputStream in) throws IOException {
-queryContents.append(IOUtils.toString(in));
-}
-});
+session.read(fileToProcess, in -> 
queryContents.append(IOUtils.toString(in, charset)));
 selectQuery = queryContents.toString();
 }
 
 
+final Integer fetchSize = 
context.getProperty(FETCH_SIZE).evaluateAttributeExpressions().asInteger();
--- End diff --

No reason, just a copy-paste from QueryDatabaseTable, which doesn't accept 
incoming connections. I will update that and any of the others. I'm not sure 
it's a valid use case either, but I can't see why we shouldn't allow it, just in case.


---


[jira] [Created] (NIFI-4492) Add an AnonymizeRecord processor

2017-10-17 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-4492:
--

 Summary: Add an AnonymizeRecord processor
 Key: NIFI-4492
 URL: https://issues.apache.org/jira/browse/NIFI-4492
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Reporter: Matt Burgess


It may be desired or necessary to anonymize the data in flow files before 
sending them to an external system. An example of such data is Personally 
Identifiable Information (PII), and an example of such a system is a 
HIPAA-regulated system where certain information cannot be present in a certain 
form.

It would be nice to have a record-aware processor that could anonymize various 
fields in a record.  One possible implementation could leverage 
[ARX|http://arx.deidentifier.org/], an Apache-licensed data anonymization 
library.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4175) Add Proxy Properties to SFTP Processors

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207668#comment-16207668
 ] 

ASF GitHub Bot commented on NIFI-4175:
--

Github user jugi92 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2018#discussion_r145136408
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java
 ---
@@ -92,6 +93,39 @@
 .defaultValue("true")
 .required(true)
 .build();
+public static final PropertyDescriptor PROXY_HOST = new 
PropertyDescriptor.Builder()
--- End diff --

Hi,
We would really like to see development move toward supporting SOCKS proxies 
with authentication as well.
In many larger companies this is a basic requirement, and for us it is not 
possible to provide those parameters through the -D options, as mentioned in 
this comment: 
https://community.hortonworks.com/questions/30339/how-to-configure-proxy-server-details-with-user-an.html
Since JSch offers all the necessary capabilities and a reference 
implementation, it should be possible to integrate it, broadening the 
functionality of NiFi and making it more attractive to companies.
Please feel free to discuss my opinion.
Best regards


> Add Proxy Properties to SFTP Processors
> ---
>
> Key: NIFI-4175
> URL: https://issues.apache.org/jira/browse/NIFI-4175
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Grant Langlois
>Assignee: Andre F de Miranda
>Priority: Minor
>
> Add proxy server configuration as properties to the Nifi SFTP components. 
> Specifically add properties for:
> Proxy Type: JSCH supported proxies including SOCKS4, SOCKS5 and HTTP
> Proxy Host
> Proxy Port
> Proxy Username
> Proxy Password
> This would allow these properties to be configured for each processor. These 
> properties would align with what is configurable for the JSCH session and 
> shouldn't require any additional dependencies.
> This proposal is similar to what is already implemented for the FTP processors



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2018: NIFI-4175 - Add HTTP proxy support to *SFTP process...

2017-10-17 Thread jugi92
Github user jugi92 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2018#discussion_r145136408
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java
 ---
@@ -92,6 +93,39 @@
 .defaultValue("true")
 .required(true)
 .build();
+public static final PropertyDescriptor PROXY_HOST = new 
PropertyDescriptor.Builder()
--- End diff --

Hi,
We would really like to see development move toward supporting SOCKS proxies 
with authentication as well.
In many larger companies this is a basic requirement, and for us it is not 
possible to provide those parameters through the -D options, as mentioned in 
this comment: 
https://community.hortonworks.com/questions/30339/how-to-configure-proxy-server-details-with-user-an.html
Since JSch offers all the necessary capabilities and a reference 
implementation, it should be possible to integrate it, broadening the 
functionality of NiFi and making it more attractive to companies.
Please feel free to discuss my opinion.
Best regards


---


[jira] [Commented] (NIFI-4175) Add Proxy Properties to SFTP Processors

2017-10-17 Thread Julian Gimbel (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207664#comment-16207664
 ] 

Julian Gimbel commented on NIFI-4175:
-

Hi,

We would really like to see development move toward supporting SOCKS proxies 
with authentication as well.
In many larger companies this is a basic requirement, and for us it is not 
possible to provide those parameters through the -D options, as mentioned in 
this comment: 
https://community.hortonworks.com/questions/30339/how-to-configure-proxy-server-details-with-user-an.html
Since JSch offers all the necessary capabilities and a reference 
implementation, it should be possible to integrate it, broadening the 
functionality of NiFi and making it more attractive to companies.
Please feel free to discuss my opinion.

Best regards


> Add Proxy Properties to SFTP Processors
> ---
>
> Key: NIFI-4175
> URL: https://issues.apache.org/jira/browse/NIFI-4175
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Grant Langlois
>Assignee: Andre F de Miranda
>Priority: Minor
>
> Add proxy server configuration as properties to the Nifi SFTP components. 
> Specifically add properties for:
> Proxy Type: JSCH supported proxies including SOCKS4, SOCKS5 and HTTP
> Proxy Host
> Proxy Port
> Proxy Username
> Proxy Password
> This would allow these properties to be configured for each processor. These 
> properties would align with what is configurable for the JSCH session and 
> shouldn't require any additional dependencies.
> This proposal is similar to what is already implemented for the FTP processors



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2180: Added GetMongoAggregation to support running Mongo aggrega...

2017-10-17 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2180
  
@mattyb149 @markap14 @milanchandna Do any of you have some time to do a 
quick look to see if this can get merged?


---


[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207278#comment-16207278
 ] 

ASF GitHub Bot commented on NIFI-4325:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2113
  
@mattyb149 Ok. Changes are made. I refactored the commit to be based on a 
controller service. For now, that service only handles a single function: basic 
search. However, there is now a (hopefully) extensible base via a controller 
service to move forward. The controller service has integration tests only; the 
processor has junit tests with a mock service.


> Create a new ElasticSearch processor that supports the JSON DSL
> ---
>
> Key: NIFI-4325
> URL: https://issues.apache.org/jira/browse/NIFI-4325
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> The existing ElasticSearch processors use the Lucene-style syntax for 
> querying, not the JSON DSL. A new processor is needed that can take a full 
> JSON query and execute it. It should also support aggregation queries in this 
> syntax. A user needs to be able to take a query as-is from Kibana and drop it 
> into NiFi and have it just run.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2113: NIFI-4325 Added new processor that uses the JSON DSL.

2017-10-17 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2113
  
@mattyb149 Ok. Changes are made. I refactored the commit to be based on a 
controller service. For now, that service only handles a single function: basic 
search. However, there is now a (hopefully) extensible base via a controller 
service to move forward. The controller service has integration tests only; the 
processor has junit tests with a mock service.


---


[jira] [Commented] (NIFI-4473) Add support for large result sets and normalizing Avro names to SelectHiveQL

2017-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207136#comment-16207136
 ] 

ASF GitHub Bot commented on NIFI-4473:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2212#discussion_r145050335
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -243,95 +284,152 @@ private void onTrigger(final ProcessContext context, 
final ProcessSession sessio
 // If the query is not set, then an incoming flow file is 
required, and expected to contain a valid SQL select query.
 // If there is no incoming connection, onTrigger will not be 
called as the processor will fail when scheduled.
 final StringBuilder queryContents = new StringBuilder();
-session.read(fileToProcess, new InputStreamCallback() {
-@Override
-public void process(InputStream in) throws IOException {
-queryContents.append(IOUtils.toString(in));
-}
-});
+session.read(fileToProcess, in -> 
queryContents.append(IOUtils.toString(in, charset)));
 selectQuery = queryContents.toString();
 }
 
 
+final Integer fetchSize = 
context.getProperty(FETCH_SIZE).evaluateAttributeExpressions().asInteger();
--- End diff --

Any reason not to use the incoming flow file for evaluation? (not sure it'd 
be a valid use case though)


> Add support for large result sets and normalizing Avro names to SelectHiveQL
> 
>
> Key: NIFI-4473
> URL: https://issues.apache.org/jira/browse/NIFI-4473
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> A number of enhancements were made to processors like QueryDatabaseTable to 
> allow for such things as:
> - Splitting result sets into multiple flow files (i.e. Max Rows Per Flowfile 
> property)
> - Max number of splits/rows returned (Max fragments)
> - Normalizing names to be Avro-compatible
> The RDBMS processors also now support Avro logical types, but the version of 
> Avro needed by the current version of Hive (1.2.1) is Avro 1.7.7, which does 
> not support logical types.
> These enhancements were made to JdbcCommon, but not to HiveJdbcCommon (the 
> Hive version of the JDBC utils class). Since Hive queries can return even 
> larger result sets than traditional RDBMS, these properties/enhancements are 
> at least as valuable to have for SelectHiveQL.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2212: NIFI-4473: Add support for large result sets and no...

2017-10-17 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2212#discussion_r145050335
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -243,95 +284,152 @@ private void onTrigger(final ProcessContext context, 
final ProcessSession sessio
 // If the query is not set, then an incoming flow file is 
required, and expected to contain a valid SQL select query.
 // If there is no incoming connection, onTrigger will not be 
called as the processor will fail when scheduled.
 final StringBuilder queryContents = new StringBuilder();
-session.read(fileToProcess, new InputStreamCallback() {
-@Override
-public void process(InputStream in) throws IOException {
-queryContents.append(IOUtils.toString(in));
-}
-});
+session.read(fileToProcess, in -> 
queryContents.append(IOUtils.toString(in, charset)));
 selectQuery = queryContents.toString();
 }
 
 
+final Integer fetchSize = 
context.getProperty(FETCH_SIZE).evaluateAttributeExpressions().asInteger();
--- End diff --

Any reason not to use the incoming flow file for evaluation? (not sure it'd 
be a valid use case though)


---


[jira] [Resolved] (NIFI-4465) ConvertExcelToCSV Data Formatting and Delimiters

2017-10-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-4465.
--
Resolution: Fixed

> ConvertExcelToCSV Data Formatting and Delimiters
> 
>
> Key: NIFI-4465
> URL: https://issues.apache.org/jira/browse/NIFI-4465
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.5.0
>
>
> The ConvertExcelToCSV Processor does not output cell values using the 
> formatting set in Excel.
> There are also no delimiter options available for column/record delimiting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4465) ConvertExcelToCSV Data Formatting and Delimiters

2017-10-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4465:
-
Component/s: (was: Core Framework)
 Extensions

> ConvertExcelToCSV Data Formatting and Delimiters
> 
>
> Key: NIFI-4465
> URL: https://issues.apache.org/jira/browse/NIFI-4465
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.5.0
>
>
> The ConvertExcelToCSV Processor does not output cell values using the 
> formatting set in Excel.
> There are also no delimiter options available for column/record delimiting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4488) PutMongoRecord processor has misspelling in Description

2017-10-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4488:
-
Fix Version/s: 1.5.0

> PutMongoRecord processor has misspelling in Description
> ---
>
> Key: NIFI-4488
> URL: https://issues.apache.org/jira/browse/NIFI-4488
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.4.0
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Trivial
> Fix For: 1.5.0
>
>
> MongoDB is misspelled as MonogDB



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)