[jira] [Updated] (NIFI-5292) Rename existing ElasticSearch client service impl to specify it is for 5.X

2018-06-11 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5292:
---
Affects Version/s: 1.7.0

> Rename existing ElasticSearch client service impl to specify it is for 5.X
> --
>
> Key: NIFI-5292
> URL: https://issues.apache.org/jira/browse/NIFI-5292
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.0
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>  Labels: Migration
>
> The current version of the impl is 5.X, but has a generic name that will be 
> confusing down the road.
> Add an ES 6.X client service as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5292) Rename existing ElasticSearch client service impl to specify it is for 5.X

2018-06-11 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5292:
---
Labels: Migration  (was: )

> Rename existing ElasticSearch client service impl to specify it is for 5.X
> --
>
> Key: NIFI-5292
> URL: https://issues.apache.org/jira/browse/NIFI-5292
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.0
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>  Labels: Migration
>
> The current version of the impl is 5.X, but has a generic name that will be 
> confusing down the road.
> Add an ES 6.X client service as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5287) LookupRecord should supply flowfile attributes to the lookup service

2018-06-11 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5287:
---
Description: 
-LookupRecord should supply the flowfile attributes to the lookup service. It 
should be done as follows:-
 # -Provide a regular expression to choose which attributes are used.-
 # -The chosen attributes should be the foundation of the coordinates map used for the lookup.-
 # -If a configured key collides with a flowfile attribute, it should override 
the flowfile attribute in the coordinate map.-

Mark had the right idea:

 

I would propose an alternative approach, which would be to add a new method to 
the interface that has a default implementation:

{{default Optional<T> lookup(Map<String, Object> coordinates, Map<String, String> context) throws LookupFailureException { return lookup(coordinates); } }}

Where {{context}} is used for the FlowFile attributes (I'm referring to it as 
{{context}} instead of {{attributes}} because there may well be a case where we 
want to provide some other value that is not specifically a FlowFile 
attribute). Here is why I am suggesting this:
 * It provides a clean interface that properly separates the data's coordinates 
from FlowFile attributes.
 * It prevents any collisions between FlowFile attribute names and coordinates.
 * It maintains backward compatibility, and we know that it won't change the 
behavior of existing services or processors/components using those services - 
even those that may have been implemented by others outside of the Apache realm.
 * If attributes are passed in by a Processor, those attributes will be ignored 
anyway unless the Controller Service is specifically updated to make use of 
those attributes, such as via Expression Language. In such a case, the 
Controller Service can simply be updated at that time to make use of the new 
method instead of the existing method.

  was:
LookupRecord should supply the flowfile attributes to the lookup service. It 
should be done as follows:
 # Provide a regular expression to choose which attributes are used.
 # The chosen attributes should be the foundation of the coordinates map used for the lookup.
 # If a configured key collides with a flowfile attribute, it should override 
the flowfile attribute in the coordinate map.


> LookupRecord should supply flowfile attributes to the lookup service
> 
>
> Key: NIFI-5287
> URL: https://issues.apache.org/jira/browse/NIFI-5287
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> -LookupRecord should supply the flowfile attributes to the lookup service. It 
> should be done as follows:-
>  # -Provide a regular expression to choose which attributes are used.-
>  # -The chosen attributes should be the foundation of the coordinates map used for the lookup.-
>  # -If a configured key collides with a flowfile attribute, it should 
> override the flowfile attribute in the coordinate map.-
> Mark had the right idea:
>  
> I would propose an alternative approach, which would be to add a new method 
> to the interface that has a default implementation:
> {{default Optional<T> lookup(Map<String, Object> coordinates, Map<String, String> context) throws LookupFailureException { return lookup(coordinates); } }}
> Where {{context}} is used for the FlowFile attributes (I'm referring to it as 
> {{context}} instead of {{attributes}} because there may well be a case where 
> we want to provide some other value that is not specifically a FlowFile 
> attribute). Here is why I am suggesting this:
>  * It provides a clean interface that properly separates the data's 
> coordinates from FlowFile attributes.
>  * It prevents any collisions between FlowFile attribute names and 
> coordinates.
>  * It maintains backward compatibility, and we know that it won't change the 
> behavior of existing services or processors/components using those services - 
> even those that may have been implemented by others outside of the Apache 
> realm.
>  * If attributes are passed in by a Processor, those attributes will be 
> ignored anyway unless the Controller Service is specifically updated to make 
> use of those attributes, such as via Expression Language. In such a case, the 
> Controller Service can simply be updated at that time to make use of the new 
> method instead of the existing method.
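The backward-compatibility argument above rests on Java's default methods. A minimal, self-contained sketch of the pattern (the interface and names here are simplified stand-ins, not NiFi's actual LookupService API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class LookupDefaultMethodDemo {
    // Simplified stand-in for a lookup service interface.
    interface SimpleLookupService {
        Optional<String> lookup(Map<String, Object> coordinates);

        // The proposed overload: its default implementation delegates to the
        // old method, so existing implementations compile and behave unchanged.
        default Optional<String> lookup(Map<String, Object> coordinates,
                                        Map<String, String> context) {
            return lookup(coordinates);
        }
    }

    public static void main(String[] args) {
        // A "legacy" implementation that only knows the one-argument method.
        SimpleLookupService legacy = coordinates ->
                Optional.ofNullable((String) coordinates.get("key"));

        Map<String, Object> coords = new HashMap<>();
        coords.put("key", "value1");
        Map<String, String> context = new HashMap<>();
        context.put("filename", "data.csv"); // FlowFile attribute; ignored here

        // Calling the two-argument method falls through to the legacy one.
        System.out.println(legacy.lookup(coords, context).orElse("miss"));
    }
}
```

A service that wants the attributes simply overrides the two-argument method; everything else keeps working untouched.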



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5292) Rename existing ElasticSearch client service impl to specify it is for 5.X

2018-06-11 Thread Mike Thomsen (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508615#comment-16508615
 ] 

Mike Thomsen commented on NIFI-5292:


[~pvillard] is there an established way of doing that that will be easy to pick 
up in the release notes and migration guide?

> Rename existing ElasticSearch client service impl to specify it is for 5.X
> --
>
> Key: NIFI-5292
> URL: https://issues.apache.org/jira/browse/NIFI-5292
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> The current version of the impl is 5.X, but has a generic name that will be 
> confusing down the road.
> Add an ES 6.X client service as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5292) Rename existing ElasticSearch client service impl to specify it is for 5.X

2018-06-11 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5292:
---
Description: 
The current version of the impl is 5.X, but has a generic name that will be 
confusing down the road.

Add an ES 6.X client service as well.

  was:The current version of the impl is 5.X, but has a generic name that will 
be confusing down the road.


> Rename existing ElasticSearch client service impl to specify it is for 5.X
> --
>
> Key: NIFI-5292
> URL: https://issues.apache.org/jira/browse/NIFI-5292
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> The current version of the impl is 5.X, but has a generic name that will be 
> confusing down the road.
> Add an ES 6.X client service as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5292) Rename existing ElasticSearch client service impl to specify it is for 5.X

2018-06-11 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5292:
--

 Summary: Rename existing ElasticSearch client service impl to 
specify it is for 5.X
 Key: NIFI-5292
 URL: https://issues.apache.org/jira/browse/NIFI-5292
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen
Assignee: Mike Thomsen


The current version of the impl is 5.X, but has a generic name that will be 
confusing down the road.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5289) NoClassDefFoundError for org.junit.Assert When Using nifi-mock

2018-06-11 Thread Mike Thomsen (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16507905#comment-16507905
 ] 

Mike Thomsen commented on NIFI-5289:


What were you trying to do that was impacted by this?

> NoClassDefFoundError for org.junit.Assert When Using nifi-mock
> --
>
> Key: NIFI-5289
> URL: https://issues.apache.org/jira/browse/NIFI-5289
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Martin Payne
>Priority: Minor
>
> When using the NiFi Mock framework but not using JUnit 4, tests fail with a 
> NoClassDefFoundError for org.junit.Assert. This is because nifi-mock sets the 
> scope of junit to "provided", which means it's not pulled into consuming 
> projects as a transitive dependency. It should be set to "compile" so that 
> users don't have to set an explicit JUnit dependency in their projects.
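The fix described above amounts to a one-word scope change in nifi-mock's pom.xml. A hedged sketch of the dependency declaration (the version number is illustrative):

```xml
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <!-- "provided" keeps junit off consumers' classpaths at runtime;
         "compile" (the default) makes it a transitive dependency,
         so consuming projects get org.junit.Assert automatically -->
    <scope>compile</scope>
</dependency>
```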



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5284) RunMongoAggregation uses ObjectIdSerializer & SimpleDateFormat

2018-06-10 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-5284.

   Resolution: Fixed
Fix Version/s: 1.7.0

> RunMongoAggregation uses ObjectIdSerializer & SimpleDateFormat
> --
>
> Key: NIFI-5284
> URL: https://issues.apache.org/jira/browse/NIFI-5284
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: Zambonilli
>Assignee: Sivaprasanna Sethuraman
>Priority: Minor
> Fix For: 1.7.0
>
>
> The RunMongoAggregation processor uses Jackson to serialize the document to 
> JSON. However, the default serialization for Jackson on Mongo ObjectId and 
> dates leaves a lot to be desired. The ObjectId's are serialized into the 
> decimal representation of each component of the ObjectId instead of the hex 
> string of the full byte array. Mongo dates are being serialized as unix time 
> as opposed to ISO8601 zulu string.
> It looks like the GetMongo processor has set the correct serializer flags on 
> Jackson to fix this. The fix for GetMongo is here. 
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java#L213
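The two output forms the ticket asks for can be illustrated with plain Java (the 12-byte id below is fabricated for the demo; real ObjectId handling lives in the Mongo driver and the Jackson serializer configuration referenced above):

```java
import java.time.Instant;

public class MongoSerializationForms {
    // Hex-encode a byte array: the expected JSON form of an ObjectId is the
    // hex string of its full 12-byte value, not per-component decimals.
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A fabricated 12-byte id standing in for a Mongo ObjectId.
        byte[] fakeObjectId = {0x5b, 0x1d, 0x4f, 0x00, 0x12, 0x34,
                               0x56, 0x78, 0x09, 0x0a, 0x0c, 0x0e};
        System.out.println(toHex(fakeObjectId));
        // Dates should render as ISO-8601 Zulu strings, not Unix time.
        System.out.println(Instant.ofEpochMilli(1528588800000L));
    }
}
```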



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-5104) Create new processor PutFoundationDB

2018-06-10 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen reassigned NIFI-5104:
--

Assignee: (was: Mike Thomsen)

> Create new processor PutFoundationDB
> 
>
> Key: NIFI-5104
> URL: https://issues.apache.org/jira/browse/NIFI-5104
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Priority: Major
>
> A processor capable of putting data transactionally into FoundationDB is 
> needed. It should be able to at least define key value pairs in a file 
> separated by a configurable pair separator and a configurable separator for 
> the key value pieces.
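The key-value parsing described above can be sketched in plain Java (names, defaults, and the parse strategy are illustrative assumptions, not an actual PutFoundationDB implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class KeyValueParser {
    // Split flowfile content into pairs with one configurable separator,
    // then split each pair into key and value with another.
    static Map<String, String> parse(String content, String pairSep, String kvSep) {
        Map<String, String> pairs = new LinkedHashMap<>();
        for (String pair : content.split(Pattern.quote(pairSep))) {
            String[] kv = pair.split(Pattern.quote(kvSep), 2);
            if (kv.length == 2) {
                pairs.put(kv[0].trim(), kv[1].trim());
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        // Newline-separated pairs, '=' between key and value.
        Map<String, String> result = parse("a=1\nb=2", "\n", "=");
        System.out.println(result);
    }
}
```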



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5288) PutMongoRecord cannot handle arrays

2018-06-10 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5288:
---
Summary: PutMongoRecord cannot handle arrays  (was: PutMongoDBRecord cannot 
handle arrays)

> PutMongoRecord cannot handle arrays
> ---
>
> Key: NIFI-5288
> URL: https://issues.apache.org/jira/browse/NIFI-5288
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> From the mailing list:
>  
> My JSON document is {"nom":"HAMEL","prenom":"YVES","tab":["aa","bb"]}
>  My record reader uses the schema (generated by InferAvroSchema):
>  {
>    "type" : "record",
>    "name" : "Test",
>    "fields" : [ {
>      "name" : "nom",
>      "type" : "string",
>      "doc" : "Type inferred from '\"HAMEL\"'"
>    }, {
>      "name" : "prenom",
>      "type" : "string",
>      "doc" : "Type inferred from '\"YVES\"'"
>    }, {
>      "name" : "tab",
>      "type" : {
>        "type" : "array",
>        "items" : "string"
>      },
>      "doc" : "Type inferred from '[\"aa\",\"bb\"]'"
>    } ]
>  }
>  
>  I did a little debugging and I think I get this exception because
>  PutMongoRecord maps the JSON array to a Java array, but the MongoDB Java
>  driver doesn't support Java arrays, only List.
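The array-to-List conversion the reporter's debugging points at can be sketched in plain Java (illustrative only, not the actual NiFi fix):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ArrayToListDemo {
    // Recursively replace Java arrays with Lists so the resulting map only
    // contains value types the Mongo driver's codecs can encode.
    static Object normalize(Object value) {
        if (value instanceof Object[]) {
            List<Object> list = new ArrayList<>();
            for (Object item : (Object[]) value) {
                list.add(normalize(item));
            }
            return list;
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("nom", "HAMEL");
        doc.put("tab", new String[] {"aa", "bb"});
        // The array-valued field becomes a List before document construction.
        System.out.println(normalize(doc.get("tab")));
    }
}
```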



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5288) PutMongoDBRecord cannot handle arrays

2018-06-10 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5288:
---
Summary: PutMongoDBRecord cannot handle arrays  (was: PutMongoRecordDB 
cannot handle arrays)

> PutMongoDBRecord cannot handle arrays
> -
>
> Key: NIFI-5288
> URL: https://issues.apache.org/jira/browse/NIFI-5288
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> From the mailing list:
>  
> My JSON document is {"nom":"HAMEL","prenom":"YVES","tab":["aa","bb"]}
>  My record reader uses the schema (generated by InferAvroSchema):
>  {
>    "type" : "record",
>    "name" : "Test",
>    "fields" : [ {
>      "name" : "nom",
>      "type" : "string",
>      "doc" : "Type inferred from '\"HAMEL\"'"
>    }, {
>      "name" : "prenom",
>      "type" : "string",
>      "doc" : "Type inferred from '\"YVES\"'"
>    }, {
>      "name" : "tab",
>      "type" : {
>        "type" : "array",
>        "items" : "string"
>      },
>      "doc" : "Type inferred from '[\"aa\",\"bb\"]'"
>    } ]
>  }
>  
>  I did a little debugging and I think I get this exception because
>  PutMongoRecord maps the JSON array to a Java array, but the MongoDB Java
>  driver doesn't support Java arrays, only List.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5288) PutMongoRecordDB cannot handle arrays

2018-06-10 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5288:
---
Summary: PutMongoRecordDB cannot handle arrays  (was: PutMongoDB cannot 
handle arrays)

> PutMongoRecordDB cannot handle arrays
> -
>
> Key: NIFI-5288
> URL: https://issues.apache.org/jira/browse/NIFI-5288
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> From the mailing list:
>  
> My JSON document is {"nom":"HAMEL","prenom":"YVES","tab":["aa","bb"]}
>  My record reader uses the schema (generated by InferAvroSchema):
>  {
>    "type" : "record",
>    "name" : "Test",
>    "fields" : [ {
>      "name" : "nom",
>      "type" : "string",
>      "doc" : "Type inferred from '\"HAMEL\"'"
>    }, {
>      "name" : "prenom",
>      "type" : "string",
>      "doc" : "Type inferred from '\"YVES\"'"
>    }, {
>      "name" : "tab",
>      "type" : {
>        "type" : "array",
>        "items" : "string"
>      },
>      "doc" : "Type inferred from '[\"aa\",\"bb\"]'"
>    } ]
>  }
>  
>  I did a little debugging and I think I get this exception because
>  PutMongoRecord maps the JSON array to a Java array, but the MongoDB Java
>  driver doesn't support Java arrays, only List.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5288) PutMongoDB cannot handle arrays

2018-06-09 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5288:
---
Affects Version/s: 1.6.0

> PutMongoDB cannot handle arrays
> ---
>
> Key: NIFI-5288
> URL: https://issues.apache.org/jira/browse/NIFI-5288
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> From the mailing list:
>  
> My JSON document is {"nom":"HAMEL","prenom":"YVES","tab":["aa","bb"]}
>  My record reader uses the schema (generated by InferAvroSchema):
>  {
>    "type" : "record",
>    "name" : "Test",
>    "fields" : [ {
>      "name" : "nom",
>      "type" : "string",
>      "doc" : "Type inferred from '\"HAMEL\"'"
>    }, {
>      "name" : "prenom",
>      "type" : "string",
>      "doc" : "Type inferred from '\"YVES\"'"
>    }, {
>      "name" : "tab",
>      "type" : {
>        "type" : "array",
>        "items" : "string"
>      },
>      "doc" : "Type inferred from '[\"aa\",\"bb\"]'"
>    } ]
>  }
>  
>  I did a little debugging and I think I get this exception because
>  PutMongoRecord maps the JSON array to a Java array, but the MongoDB Java
>  driver doesn't support Java arrays, only List.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5288) PutMongoDB cannot handle arrays

2018-06-09 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5288:
--

 Summary: PutMongoDB cannot handle arrays
 Key: NIFI-5288
 URL: https://issues.apache.org/jira/browse/NIFI-5288
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Mike Thomsen
Assignee: Mike Thomsen


From the mailing list:

 

My JSON document is {"nom":"HAMEL","prenom":"YVES","tab":["aa","bb"]}
 My record reader uses the schema (generated by InferAvroSchema):
 {
   "type" : "record",
   "name" : "Test",
   "fields" : [ {
     "name" : "nom",
     "type" : "string",
     "doc" : "Type inferred from '\"HAMEL\"'"
   }, {
     "name" : "prenom",
     "type" : "string",
     "doc" : "Type inferred from '\"YVES\"'"
   }, {
     "name" : "tab",
     "type" : {
       "type" : "array",
       "items" : "string"
     },
     "doc" : "Type inferred from '[\"aa\",\"bb\"]'"
   } ]
 }
 
 I did a little debugging and I think I get this exception because
 PutMongoRecord maps the JSON array to a Java array, but the MongoDB Java
 driver doesn't support Java arrays, only List.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5286) Update FasterXML Jackson version to 2.9.5

2018-06-09 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-5286.

   Resolution: Fixed
Fix Version/s: 1.7.0

> Update FasterXML Jackson version to 2.9.5
> -
>
> Key: NIFI-5286
> URL: https://issues.apache.org/jira/browse/NIFI-5286
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Affects Versions: 1.3.0, 1.4.0, 1.5.0, 1.6.0
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>  Labels: CVE, security
> Fix For: 1.7.0
>
>
> The current version of FasterXML Jackson-databind used is 2.9.4 which was 
> supposed to fix several critical vulnerabilities but wasn't completely 
> addressed. A fix to address them was introduced in 2.9.5.
> More details about the vulnerability can be found at : 
> https://nvd.nist.gov/vuln/detail/CVE-2018-7489
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5286) Update FasterXML Jackson version to 2.9.5

2018-06-09 Thread Mike Thomsen (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16507106#comment-16507106
 ] 

Mike Thomsen commented on NIFI-5286:


I checked one of the affected modules and the new dependency didn't bring in 
any new transitive dependencies outside of other jackson libs.

> Update FasterXML Jackson version to 2.9.5
> -
>
> Key: NIFI-5286
> URL: https://issues.apache.org/jira/browse/NIFI-5286
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Affects Versions: 1.3.0, 1.4.0, 1.5.0, 1.6.0
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>  Labels: CVE, security
> Fix For: 1.7.0
>
>
> The current version of FasterXML Jackson-databind used is 2.9.4 which was 
> supposed to fix several critical vulnerabilities but wasn't completely 
> addressed. A fix to address them was introduced in 2.9.5.
> More details about the vulnerability can be found at : 
> https://nvd.nist.gov/vuln/detail/CVE-2018-7489
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-5287) LookupRecord should supply flowfile attributes to the lookup service

2018-06-09 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen reassigned NIFI-5287:
--

Assignee: Mike Thomsen

> LookupRecord should supply flowfile attributes to the lookup service
> 
>
> Key: NIFI-5287
> URL: https://issues.apache.org/jira/browse/NIFI-5287
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> LookupRecord should supply the flowfile attributes to the lookup service. It 
> should be done as follows:
>  # Provide a regular expression to choose which attributes are used.
>  # The chosen attributes should be the foundation of the coordinates map used for the lookup.
>  # If a configured key collides with a flowfile attribute, it should override 
> the flowfile attribute in the coordinate map.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5287) LookupRecord should supply flowfile attributes to the lookup service

2018-06-09 Thread Mike Thomsen (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506958#comment-16506958
 ] 

Mike Thomsen commented on NIFI-5287:


[~ijokarumawak] update as you see fit.

> LookupRecord should supply flowfile attributes to the lookup service
> 
>
> Key: NIFI-5287
> URL: https://issues.apache.org/jira/browse/NIFI-5287
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Major
>
> LookupRecord should supply the flowfile attributes to the lookup service. It 
> should be done as follows:
>  # Provide a regular expression to choose which attributes are used.
>  # The chosen attributes should be the foundation of the coordinates map used for the lookup.
>  # If a configured key collides with a flowfile attribute, it should override 
> the flowfile attribute in the coordinate map.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5287) LookupRecord should supply flowfile attributes to the lookup service

2018-06-09 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5287:
--

 Summary: LookupRecord should supply flowfile attributes to the 
lookup service
 Key: NIFI-5287
 URL: https://issues.apache.org/jira/browse/NIFI-5287
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen


LookupRecord should supply the flowfile attributes to the lookup service. It 
should be done as follows:
 # Provide a regular expression to choose which attributes are used.
 # The chosen attributes should be the foundation of the coordinates map used for the lookup.
 # If a configured key collides with a flowfile attribute, it should override 
the flowfile attribute in the coordinate map.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5239) Make MongoDBControllerService able to act as a configuration source for MongoDB processors

2018-06-06 Thread Mike Thomsen (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16503566#comment-16503566
 ] 

Mike Thomsen commented on NIFI-5239:


For reference purposes, [according to the 
docs|http://mongodb.github.io/mongo-java-driver/3.5/driver/getting-started/quick-start/#make-a-connection],
 MongoClient is verified thread-safe.

> Make MongoDBControllerService able to act as a configuration source for 
> MongoDB processors
> --
>
> Key: NIFI-5239
> URL: https://issues.apache.org/jira/browse/NIFI-5239
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> The MongoDBControllerService should be able to provide the getDatabase and 
> getCollection functionality that are built into the MongoDB processors 
> through AbstractMongoDBProcessor. Using the controller service with the 
> processors should be optional in the first release it's added and then 
> mandatory going forward.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5271) Move JSON Validator to a separate package and verify license/notice

2018-06-05 Thread Mike Thomsen (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502709#comment-16502709
 ] 

Mike Thomsen commented on NIFI-5271:


[~joewitt] I have the files moved and almost ready for a PR. May need your help 
tomorrow crossing the Ts and dotting the Is on setting up the license and 
notice.

> Move JSON Validator to a separate package and verify license/notice
> ---
>
> Key: NIFI-5271
> URL: https://issues.apache.org/jira/browse/NIFI-5271
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> The JSON Validator in 5261 did not get its L&N verified and it should be 
> moved to a separate validator module to keep Gson from being bundled where 
> it's not needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5271) Move JSON Validator to a separate package and verify license/notice

2018-06-05 Thread Mike Thomsen (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502664#comment-16502664
 ] 

Mike Thomsen commented on NIFI-5271:


[~sivaprasanna] I'll take responsibility since this was my own dumb fault for 
not checking the L&N. Since you're now a committer, once you review it and 
approve I'll merge if you haven't gotten your permissions updated.

> Move JSON Validator to a separate package and verify license/notice
> ---
>
> Key: NIFI-5271
> URL: https://issues.apache.org/jira/browse/NIFI-5271
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> The JSON Validator in 5261 did not get its L&N verified and it should be 
> moved to a separate validator module to keep Gson from being bundled where 
> it's not needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5271) Move JSON Validator to a separate package and verify license/notice

2018-06-05 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5271:
--

 Summary: Move JSON Validator to a separate package and verify 
license/notice
 Key: NIFI-5271
 URL: https://issues.apache.org/jira/browse/NIFI-5271
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen
Assignee: Mike Thomsen


The JSON Validator in 5261 did not get its L&N verified and it should be moved 
to a separate validator module to keep Gson from being bundled where it's not 
needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5261) Create a JSON validator

2018-06-05 Thread Mike Thomsen (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502660#comment-16502660
 ] 

Mike Thomsen commented on NIFI-5261:


[~joewitt] Ok. I'll move it to a separate package and address the L&N issue 
under a different ticket.

> Create a JSON validator
> ---
>
> Key: NIFI-5261
> URL: https://issues.apache.org/jira/browse/NIFI-5261
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Minor
> Fix For: 1.7.0
>
>
> Create a StandardValidator that validates PropertyDescriptors that take a 
> JSON input.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5261) Create a JSON validator

2018-06-05 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-5261.

   Resolution: Fixed
Fix Version/s: 1.7.0

> Create a JSON validator
> ---
>
> Key: NIFI-5261
> URL: https://issues.apache.org/jira/browse/NIFI-5261
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Minor
> Fix For: 1.7.0
>
>
> Create a StandardValidator that validates PropertyDescriptors that take a 
> JSON input.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5248) Create new put processors that use the ElasticSearch client service

2018-06-03 Thread Mike Thomsen (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16499353#comment-16499353
 ] 

Mike Thomsen commented on NIFI-5248:


In my opinion, no, they cannot be updated. The problem with ElasticSearch is 
that its Java API has been very unstable compared to Solr's, across v2, v5, 
and the roadmap beyond v5. The processor bundles didn't really take much of 
that into account. The processor bundle that is unversioned in its name is 
primarily for v2. There's a separate, but incomplete, bundle for v5 that uses 
the transport API (which is deprecated from the client in v6 onward).

The new "restapi" processor pack focuses on client services that use the 
official, and allegedly stable, new high-level REST API. It can be used with v5 
as a substitute for the transport API, so that means in the long run we can 
just deprecate the existing bundles in favor of the new bundle once it's 
feature-complete.

> Create new put processors that use the ElasticSearch client service
> ---
>
> Key: NIFI-5248
> URL: https://issues.apache.org/jira/browse/NIFI-5248
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> Two new processors:
>  * PutElasticsearchJson - put raw JSON.
>  * PutElasticsearchRecord - put records.
> Both of them should support the general bulk load API and be able to do 
> things like insert into multiple indexes from one payload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (NIFI-5145) MockPropertyValue.evaluateExpressionLanguage(FlowFile) cannot handle null inputs

2018-05-31 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen reopened NIFI-5145:


> MockPropertyValue.evaluateExpressionLanguage(FlowFile) cannot handle null 
> inputs
> 
>
> Key: NIFI-5145
> URL: https://issues.apache.org/jira/browse/NIFI-5145
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
> Fix For: 1.7.0
>
>
> The method named in the title cannot handle null inputs, even though the 
> main NiFi execution classes can handle that scenario. This forces a hack 
> like the following to pass tests with nulls:
> String val = flowFile != null
>     ? context.getProperty(PROP).evaluateExpressionLanguage(flowFile).getValue()
>     : context.getProperty(PROP).evaluateExpressionLanguage(new HashMap<>()).getValue();
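The guard in that hack can be sketched in isolation. This is a minimal, self-contained illustration of the null-handling workaround; `PropertyValue` below is a hypothetical stand-in, not the real NiFi mock class.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the NiFi test type involved (hypothetical, for illustration only).
class PropertyValue {
    private final String raw;
    PropertyValue(String raw) { this.raw = raw; }

    // Mimics the mock's limitation: a null FlowFile is rejected outright.
    PropertyValue evaluateExpressionLanguage(Object flowFile) {
        if (flowFile == null) throw new IllegalArgumentException("null FlowFile not supported");
        return this;
    }

    // The variable-map overload works fine with an empty map.
    PropertyValue evaluateExpressionLanguage(Map<String, String> variables) { return this; }

    String getValue() { return raw; }
}

public class NullSafeEval {
    // The guard described in the ticket: fall back to an empty variable map
    // when no FlowFile exists, instead of passing null through.
    static String evaluate(PropertyValue prop, Object flowFile) {
        return flowFile != null
                ? prop.evaluateExpressionLanguage(flowFile).getValue()
                : prop.evaluateExpressionLanguage(new HashMap<>()).getValue();
    }

    public static void main(String[] args) {
        PropertyValue prop = new PropertyValue("hello");
        System.out.println(evaluate(prop, null));        // no exception despite the null FlowFile
        System.out.println(evaluate(prop, new Object()));
    }
}
```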



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5254) Upgrade to Groovy 2.5.0

2018-05-31 Thread Mike Thomsen (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16496383#comment-16496383
 ] 

Mike Thomsen commented on NIFI-5254:


Apparently the jar isn't in Maven Central yet, but the artifacts are. So this 
will be on hold until that happens.

> Upgrade to Groovy 2.5.0
> ---
>
> Key: NIFI-5254
> URL: https://issues.apache.org/jira/browse/NIFI-5254
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> Groovy 2.5 has been released and support for it should be added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-5254) Upgrade to Groovy 2.5.0

2018-05-31 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen reassigned NIFI-5254:
--

Assignee: Mike Thomsen

> Upgrade to Groovy 2.5.0
> ---
>
> Key: NIFI-5254
> URL: https://issues.apache.org/jira/browse/NIFI-5254
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> Groovy 2.5 has been released and support for it should be added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5024) Deadlock in ExecuteStreamCommand processor

2018-05-31 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-5024.

   Resolution: Fixed
Fix Version/s: 1.7.0

> Deadlock in ExecuteStreamCommand processor
> --
>
> Key: NIFI-5024
> URL: https://issues.apache.org/jira/browse/NIFI-5024
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Nicolas Sanglard
>Priority: Minor
> Fix For: 1.7.0
>
> Attachments: Screen Shot 2018-03-28 at 15.34.36.png, Screen Shot 
> 2018-03-28 at 15.36.02.png
>
>
> Whenever a process produces too much output on stderr, the current 
> implementation will run into a deadlock between the JVM and the Unix process 
> started by ExecuteStreamCommand.
> This is a known issue that is fully described here: 
> [http://java-monitor.com/forum/showthread.php?t=4067]
> In short:
>  * If the process writes more stderr output than the pipe buffer can hold 
> and ExecuteStreamCommand does not consume it, the process blocks until the 
> data is read.
>  * The current processor implementation reads from stderr only after 
> having called process.waitFor()
>  * Thus, the two processes wait for each other and fall into a deadlock
>  
>  
> The following setup will lead to a deadlock:
>  
> A jar containing the following Main application:
> {code:java}
> object Main extends App {
>   import scala.io.Source
>   val str = 
> Source.fromInputStream(this.getClass.getResourceAsStream("/1mb.txt")).mkString
>   System.err.println(str)
> }
> {code}
> The following NiFi Flow:
>  
> !Screen Shot 2018-03-28 at 15.34.36.png!
>  
> Configuration for ExecuteStreamCommand:
>  
> !Screen Shot 2018-03-28 at 15.36.02.png!
>  
> The script is simply containing a call to the jar: 
> {code:java}
> java -jar stderr.jar
> {code}
>  
> Once the processor calls the script, it appears as "processing" indefinitely 
> and can only be stopped by restarting NiFi.
>  
> I already have a running solution that I will publish as soon as possible.
>  
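The usual fix for this class of deadlock (sketched here independently of the eventual NiFi patch, which is not shown) is to drain stderr on its own thread before calling waitFor(). A minimal sketch; the demo command in main assumes a POSIX sh with seq is available.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class DrainStderr {
    // Runs a command while continuously draining its stderr, so a chatty
    // child process can never fill the OS pipe buffer and block.
    public static int run(String... command) throws Exception {
        Process process = new ProcessBuilder(command).start();

        StringBuilder err = new StringBuilder();
        Thread drainer = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(process.getErrorStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    err.append(line).append('\n');  // consume as the child writes
                }
            } catch (Exception ignored) {
                // stream closed when the process exits
            }
        });
        drainer.start();

        int exitCode = process.waitFor();  // safe: stderr is drained concurrently
        drainer.join();
        return exitCode;
    }

    public static void main(String[] args) throws Exception {
        // ~600KB of stderr, well past a typical 64KB pipe buffer (POSIX sh assumed).
        System.out.println(run("sh", "-c", "seq 1 100000 1>&2; exit 0"));
    }
}
```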



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5254) Upgrade to Groovy 2.5.0

2018-05-30 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5254:
--

 Summary: Upgrade to Groovy 2.5.0
 Key: NIFI-5254
 URL: https://issues.apache.org/jira/browse/NIFI-5254
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen


Groovy 2.5 has been released and support for it should be added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5245) SimpleCSVFileLookupService should take account of charset.

2018-05-30 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-5245.

   Resolution: Fixed
Fix Version/s: 1.7.0

> SimpleCSVFileLookupService should take the charset into account.
> 
>
> Key: NIFI-5245
> URL: https://issues.apache.org/jira/browse/NIFI-5245
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
> Environment: Windows 10
>Reporter: Seiji Sogabe
>Priority: Critical
> Fix For: 1.7.0
>
>
> If the charset of the CSV file is not the platform's default encoding, 
> SimpleCSVFileLookupService will produce garbled characters.
> CSVRecordLookupService also has the same problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5122) Add record writer to S2S Reporting Tasks

2018-05-30 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5122:
---
   Resolution: Fixed
Fix Version/s: 1.7.0
   Status: Resolved  (was: Patch Available)

> Add record writer to S2S Reporting Tasks
> 
>
> Key: NIFI-5122
> URL: https://issues.apache.org/jira/browse/NIFI-5122
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.7.0
>
>
> Just like we have the option to specify a record writer for the new Site To 
> Site Metrics Reporting Task, there should be the possibility to specify an 
> optional record writer for the other S2S reporting tasks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5248) Create new put processors that use the ElasticSearch client service

2018-05-29 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5248:
--

 Summary: Create new put processors that use the ElasticSearch 
client service
 Key: NIFI-5248
 URL: https://issues.apache.org/jira/browse/NIFI-5248
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Mike Thomsen
Assignee: Mike Thomsen


Two new processors:
 * PutElasticsearchJson - put raw JSON.
 * PutElasticsearchRecord - put records.

Both of them should support the general bulk load API and be able to do things 
like insert into multiple indexes from one payload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5242) Elasticsearch REST API client has LGPL dependency which must be removed

2018-05-27 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5242:
---
   Resolution: Fixed
Fix Version/s: 1.7.0
   Status: Resolved  (was: Patch Available)

> Elasticsearch REST API client has LGPL dependency which must be removed
> ---
>
> Key: NIFI-5242
> URL: https://issues.apache.org/jira/browse/NIFI-5242
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.7.0
>
>
> NIFI-4325 introduced a processor that made use of the Elasticsearch 5.x REST 
> API client (which replaced the older native transport client). However they 
> neglected to make the JTS dependency optional, as it is LGPL-licensed, and 
> thus it is included as a transitive dependency in NiFi, which is a violation 
> of the Apache Software Foundation guidelines.
> As it was apparently intended to be an optional dependency (see 
> [https://github.com/elastic/elasticsearch/issues/28899]), we should be able 
> to exclude it from the NiFi Maven build, but we'll need to run regression 
> tests to make sure nothing gets broken as a result.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5244) MockSchemaRegistry retrieveSchemaByName is broken

2018-05-27 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5244:
--

 Summary: MockSchemaRegistry retrieveSchemaByName is broken
 Key: NIFI-5244
 URL: https://issues.apache.org/jira/browse/NIFI-5244
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Mike Thomsen
Assignee: Mike Thomsen


retrieveSchemaByName uses an Optional but never calls get() on it, so the 
HashMap lookup always returns null.
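This bug class is easy to reproduce in isolation. The sketch below is a simplified illustration, not the actual MockSchemaRegistry code: keying a HashMap with the Optional wrapper instead of its unwrapped value compiles cleanly but never matches the stored String keys.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class OptionalKeyLookup {
    // Schemas stored under plain String names.
    static final Map<String, String> schemas = new HashMap<>();

    // BUG: Map.get accepts any Object, so passing the Optional itself compiles,
    // but Optional never equals a String key, so this always returns null.
    static String buggyLookup(Optional<String> name) {
        return schemas.get(name);
    }

    // FIX: unwrap the Optional before using it as a key.
    static String fixedLookup(Optional<String> name) {
        return name.map(schemas::get).orElse(null);
    }

    public static void main(String[] args) {
        schemas.put("person", "{\"type\":\"record\"}");
        System.out.println(buggyLookup(Optional.of("person"))); // null
        System.out.println(fixedLookup(Optional.of("person"))); // the schema
    }
}
```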



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5239) Make MongoDBControllerService able to act as a configuration source for MongoDB processors

2018-05-25 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5239:
--

 Summary: Make MongoDBControllerService able to act as a 
configuration source for MongoDB processors
 Key: NIFI-5239
 URL: https://issues.apache.org/jira/browse/NIFI-5239
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Mike Thomsen
Assignee: Mike Thomsen


The MongoDBControllerService should be able to provide the getDatabase and 
getCollection functionality that is built into the MongoDB processors through 
AbstractMongoDBProcessor. Using the controller service with the processors 
should be optional in the first release in which it's added and then mandatory 
going forward.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5233) Enable expression language in Hadoop Configuration Files property of Hbase Client Service

2018-05-25 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5233:
---
   Resolution: Fixed
Fix Version/s: 1.7.0
   Status: Resolved  (was: Patch Available)

> Enable expression language in Hadoop Configuration Files property of Hbase 
> Client Service
> -
>
> Key: NIFI-5233
> URL: https://issues.apache.org/jira/browse/NIFI-5233
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: Gergely Devai
>Assignee: Pierre Villard
>Priority: Minor
>  Labels: easyfix
> Fix For: 1.7.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> In some Hadoop related processors (e.g. GetHDFS, DeleteHDFS) the "Hadoop 
> Configuration Files" property supports expression language. This is 
> convenient, as the lengthy paths to the config files can be stored in a 
> property file loaded by Nifi at startup or in an environment variable, and 
> the name of the property/environment variable can be referenced in the 
> processors' configuration.
> The controller service HBase_1_1_2_ClientService also has the "Hadoop 
> Configuration Files" property, but it does not support expression language. 
> For the convenience reasons described above and for uniformity, it is 
> desirable to allow expression language in that property as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5229) Implement a DBCPConnectionPool that dynamically selects a connection pool

2018-05-25 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5229:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Implement a DBCPConnectionPool that dynamically selects a connection pool
> -
>
> Key: NIFI-5229
> URL: https://issues.apache.org/jira/browse/NIFI-5229
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Priority: Major
> Fix For: 1.7.0
>
>
> In NIFI-5121 we modified the DBCPConnectionPool interface to allow passing a 
> map of attributes (https://issues.apache.org/jira/browse/NIFI-5121).
> We should implement a DBCPConnectionPool that lets you register multiple 
> other connection pools via dynamic properties, and then selects one based on 
> looking for an incoming attribute.
> For example, let's say you create two regular connection pools A and B, then 
> in the new connection pool you would register:
> a = pool1
> b = pool2
> and then the new connection pool will look in the attribute map for an 
> attribute like "database.id" and select the pool with the given id.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5229) Implement a DBCPConnectionPool that dynamically selects a connection pool

2018-05-25 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5229:
---
Fix Version/s: 1.7.0

> Implement a DBCPConnectionPool that dynamically selects a connection pool
> -
>
> Key: NIFI-5229
> URL: https://issues.apache.org/jira/browse/NIFI-5229
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Priority: Major
> Fix For: 1.7.0
>
>
> In NIFI-5121 we modified the DBCPConnectionPool interface to allow passing a 
> map of attributes (https://issues.apache.org/jira/browse/NIFI-5121).
> We should implement a DBCPConnectionPool that lets you register multiple 
> other connection pools via dynamic properties, and then selects one based on 
> looking for an incoming attribute.
> For example, let's say you create two regular connection pools A and B, then 
> in the new connection pool you would register:
> a = pool1
> b = pool2
> and then the new connection pool will look in the attribute map for an 
> attribute like "database.id" and select the pool with the given id.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5224) Add SolrClientService

2018-05-23 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487407#comment-16487407
 ] 

Mike Thomsen commented on NIFI-5224:


That would work for Solr. It's not like ES, where there are now 3+ ways people 
try to do CRUD with it.

> Add SolrClientService
> -
>
> Key: NIFI-5224
> URL: https://issues.apache.org/jira/browse/NIFI-5224
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Johannes Peter
>Assignee: Johannes Peter
>Priority: Major
>
> The Solr CRUD functions that are currently included in SolrUtils should be 
> moved to a controller service. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5232) HttpConnectionService controller service

2018-05-23 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487336#comment-16487336
 ] 

Mike Thomsen commented on NIFI-5232:


I think it might be worthwhile to take this over to the developer list.

> HttpConnectionService controller service
> 
>
> Key: NIFI-5232
> URL: https://issues.apache.org/jira/browse/NIFI-5232
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Priority: Major
>
> The functionality of InvokeHttp and related processors should be copied over 
> to a controller service that can do much the same thing. This controller 
> service would be able to handle all of the common scenarios with HTTP 
> connections from processors going forward.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5230) InvokeScriptedProcessor can issue a NullPointerException in customValidate

2018-05-23 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5230:
---
Fix Version/s: 1.7.0

> InvokeScriptedProcessor can issue a NullPointerException in customValidate
> --
>
> Key: NIFI-5230
> URL: https://issues.apache.org/jira/browse/NIFI-5230
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.7.0
>
>
> NIFI-4968 improved the behavior of InvokeScriptedProcessor when the script 
> has an error during parsing/interpretation/compilation, and an improvement 
> was made to not output the same validation errors over and over again until 
> manual action was taken. As part of that improvement though, a bug was 
> introduced where a NullPointerException can occur.
> Proposed fix is not to set the validation results to null on "clear", but 
> rather to an empty Set (which the code is expecting)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5230) InvokeScriptedProcessor can issue a NullPointerException in customValidate

2018-05-23 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5230:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> InvokeScriptedProcessor can issue a NullPointerException in customValidate
> --
>
> Key: NIFI-5230
> URL: https://issues.apache.org/jira/browse/NIFI-5230
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.7.0
>
>
> NIFI-4968 improved the behavior of InvokeScriptedProcessor when the script 
> has an error during parsing/interpretation/compilation, and an improvement 
> was made to not output the same validation errors over and over again until 
> manual action was taken. As part of that improvement though, a bug was 
> introduced where a NullPointerException can occur.
> Proposed fix is not to set the validation results to null on "clear", but 
> rather to an empty Set (which the code is expecting)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5232) HttpConnectionService controller service

2018-05-23 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487285#comment-16487285
 ] 

Mike Thomsen commented on NIFI-5232:


[~ottobackwards] feel free to edit this and add your thoughts.

> HttpConnectionService controller service
> 
>
> Key: NIFI-5232
> URL: https://issues.apache.org/jira/browse/NIFI-5232
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Priority: Major
>
> The functionality of InvokeHttp and related processors should be copied over 
> to a controller service that can do much the same thing. This controller 
> service would be able to handle all of the common scenarios with HTTP 
> connections from processors going forward.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5232) HttpConnectionService controller service

2018-05-23 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5232:
--

 Summary: HttpConnectionService controller service
 Key: NIFI-5232
 URL: https://issues.apache.org/jira/browse/NIFI-5232
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Mike Thomsen


The functionality of InvokeHttp and related processors should be copied over to 
a controller service that can do much the same thing. This controller service 
would be able to handle all of the common scenarios with HTTP connections from 
processors going forward.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5224) Add SolrClientService

2018-05-23 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487284#comment-16487284
 ] 

Mike Thomsen commented on NIFI-5224:


[~joewitt] [~bende] [~ijokarumawak]

Should we do this and plan to refactor the solr processors to use it?

> Add SolrClientService
> -
>
> Key: NIFI-5224
> URL: https://issues.apache.org/jira/browse/NIFI-5224
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Johannes Peter
>Assignee: Johannes Peter
>Priority: Major
>
> The Solr CRUD functions that are currently included in SolrUtils should be 
> moved to a controller service. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5231) Record stats processor

2018-05-23 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5231:
--

 Summary: Record stats processor
 Key: NIFI-5231
 URL: https://issues.apache.org/jira/browse/NIFI-5231
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Mike Thomsen
Assignee: Mike Thomsen


Should do the following:

 
 # Take a record reader.
 # Count the # of records and add a record_count attribute to the flowfile.
 # Allow user-defined properties that do the following:
 ## Map attribute name -> record path.
 ## Provide aggregate value counts for each record path statement.
 ## Provide total count for record path operation.
 ## Put those values on the flowfile as attributes.
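The proposed stats can be sketched with plain maps standing in for NiFi records and a field name standing in for a record path. All names here are illustrative, not the eventual processor's API.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RecordStats {
    // Returns attribute-style stats: the total record count plus, for each
    // requested field, a total match count and a per-value count.
    static Map<String, String> stats(List<Map<String, String>> records, List<String> fields) {
        Map<String, String> attrs = new HashMap<>();
        attrs.put("record_count", String.valueOf(records.size()));

        for (String field : fields) {
            Map<String, Integer> counts = new HashMap<>();
            for (Map<String, String> record : records) {
                String value = record.get(field);
                if (value != null) {
                    counts.merge(value, 1, Integer::sum);  // aggregate per value
                }
            }
            int total = counts.values().stream().mapToInt(Integer::intValue).sum();
            attrs.put(field, String.valueOf(total));  // total matches for the field
            counts.forEach((value, c) -> attrs.put(field + "." + value, String.valueOf(c)));
        }
        return attrs;
    }

    public static void main(String[] args) {
        List<Map<String, String>> records = List.of(
                Map.of("sport", "soccer"), Map.of("sport", "soccer"), Map.of("sport", "golf"));
        System.out.println(stats(records, List.of("sport")));
    }
}
```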



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-05-23 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-4637.

Resolution: Fixed

Merged fix from [~ijokarumawak]

> Add support for HBase visibility labels to HBase processors and controller 
> services
> ---
>
> Key: NIFI-4637
> URL: https://issues.apache.org/jira/browse/NIFI-4637
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
> Fix For: 1.7.0
>
>
> HBase supports visibility labels, but you can't use them from NiFi because 
> there is no way to set them. The existing processors and services should be 
> upgraded to handle this capability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5223) Allow the usage of expression language for properties of RecordSetWriters

2018-05-21 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483228#comment-16483228
 ] 

Mike Thomsen commented on NIFI-5223:


[~jope] yeah, the dev list would be a good place to get some attention. I was 
saying that the particular feature in question hadn't really established 
itself as the standard way of doing things, so it shouldn't be a blocker to 
the merge.

> Allow the usage of expression language for properties of RecordSetWriters
> -
>
> Key: NIFI-5223
> URL: https://issues.apache.org/jira/browse/NIFI-5223
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Johannes Peter
>Assignee: Johannes Peter
>Priority: Major
>
> To allow the usage of expression language for properties of RecordSetWriters, 
> the method createWriter of the interface RecordSetWriterFactory has to be 
> enhanced by a parameter to provide a map containing variables of a FlowFile. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4175) Add Proxy Properties to SFTP Processors

2018-05-20 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-4175:
---
Fix Version/s: 1.7.0

> Add Proxy Properties to SFTP Processors
> ---
>
> Key: NIFI-4175
> URL: https://issues.apache.org/jira/browse/NIFI-4175
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Grant Langlois
>Assignee: Andre F de Miranda
>Priority: Minor
> Fix For: 1.7.0
>
>
> Add proxy server configuration as properties to the NiFi SFTP components. 
> Specifically add properties for:
> Proxy Type: JSCH supported proxies including SOCKS4, SOCKS5 and HTTP
> Proxy Host
> Proxy Port
> Proxy Username
> Proxy Password
> This would allow these properties to be configured for each processor. These 
> properties would align with what is configurable for the JSCH session and 
> shouldn't require any additional dependencies.
> This proposal is similar to what is already implemented for the FTP processors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4196) *S3 processors do not expose Proxy Authentication settings

2018-05-20 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-4196:
---
Fix Version/s: 1.7.0

> *S3 processors do not expose Proxy Authentication settings
> --
>
> Key: NIFI-4196
> URL: https://issues.apache.org/jira/browse/NIFI-4196
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre F de Miranda
>Assignee: Andre F de Miranda
>Priority: Major
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4199) NiFi processors should be able to share proxy settings

2018-05-20 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-4199:
---
   Resolution: Fixed
Fix Version/s: 1.7.0
   Status: Resolved  (was: Patch Available)

> NiFi processors should be able to share proxy settings
> --
>
> Key: NIFI-4199
> URL: https://issues.apache.org/jira/browse/NIFI-4199
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre F de Miranda
>Assignee: Koji Kawamura
>Priority: Major
> Fix For: 1.7.0
>
>
> Currently, configuring proxy settings for NiFi processors may be a tedious 
> process that requires the DFM to set proxy settings on individual processors. 
> This leads to:
> * Duplication of work
> * Management overhead (as password must be changed on multiple locations)
> * Lower security (as proxy credentials must be known by "n" DFMs)
> Ideally, NiFi should offer a way to minimise duplication of work by offering 
> something similar to the Standard SSL Context services. This way, the DFM 
> could set the proxy settings once and all authorised users could tap into 
> those settings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-4196) *S3 processors do not expose Proxy Authentication settings

2018-05-20 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-4196.

Resolution: Fixed

Resolved as part of 4199.

> *S3 processors do not expose Proxy Authentication settings
> --
>
> Key: NIFI-4196
> URL: https://issues.apache.org/jira/browse/NIFI-4196
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre F de Miranda
>Assignee: Andre F de Miranda
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4175) Add Proxy Properties to SFTP Processors

2018-05-20 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482057#comment-16482057
 ] 

Mike Thomsen commented on NIFI-4175:


[~trixpan] I approved [~ijokarumawak]'s PR that merged this and 4196. Hope you 
don't mind.

> Add Proxy Properties to SFTP Processors
> ---
>
> Key: NIFI-4175
> URL: https://issues.apache.org/jira/browse/NIFI-4175
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Grant Langlois
>Assignee: Andre F de Miranda
>Priority: Minor
>
> Add proxy server configuration as properties to the NiFi SFTP components. 
> Specifically add properties for:
> Proxy Type: JSCH supported proxies including SOCKS4, SOCKS5 and HTTP
> Proxy Host
> Proxy Port
> Proxy Username
> Proxy Password
> This would allow these properties to be configured for each processor. These 
> properties would align with what is configurable for the JSCH session and 
> shouldn't require any additional dependencies.
> This proposal is similar to what is already implemented for the FTP processors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-4175) Add Proxy Properties to SFTP Processors

2018-05-20 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-4175.

Resolution: Fixed

> Add Proxy Properties to SFTP Processors
> ---
>
> Key: NIFI-4175
> URL: https://issues.apache.org/jira/browse/NIFI-4175
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Grant Langlois
>Assignee: Andre F de Miranda
>Priority: Minor
>
> Add proxy server configuration as properties to the NiFi SFTP components. 
> Specifically add properties for:
> Proxy Type: JSCH supported proxies including SOCKS4, SOCKS5 and HTTP
> Proxy Host
> Proxy Port
> Proxy Username
> Proxy Password
> This would allow these properties to be configured for each processor. These 
> properties would align with what is configurable for the JSCH session and 
> shouldn't require any additional dependencies.
> This proposal is similar to what is already implemented for the FTP processors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5214) Add a REST lookup service

2018-05-18 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5214:
--

 Summary: Add a REST lookup service
 Key: NIFI-5214
 URL: https://issues.apache.org/jira/browse/NIFI-5214
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Mike Thomsen
Assignee: Mike Thomsen


* Should have reader API support
 * Should be able to drill down through complex XML and JSON responses to a 
nested record.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5202) TestListDatabaseTables timing issue

2018-05-17 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5202:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> TestListDatabaseTables timing issue
> ---
>
> Key: NIFI-5202
> URL: https://issues.apache.org/jira/browse/NIFI-5202
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.7.0
>
>
> I sometimes see TestListDatabaseTables fail in Travis-CI:
> {code:java}
> [INFO] Results:
> [INFO] 
> [ERROR] Failures: 
> [ERROR]   TestListDatabaseTables.testListTablesMultipleRefresh:218 
> expected:<1> but was:<2>
> [INFO] {code}
> It appears to be a timing issue. When I run it on my laptop, it always 
> succeeds, as-is. However, I can easily reproduce the error above if I update 
> the unit test by adding a Thread.sleep at line 214:
> {code:java}
>         assertEquals("2", 
> results.get(0).getAttribute(ListDatabaseTables.DB_TABLE_COUNT));
>         runner.clearTransferState();
>         Thread.sleep(500);
>         // Add another table immediately, the first table should not be 
> listed again but the second should
>         stmt.execute("create table TEST_TABLE2 (id integer not null, val1 
> integer, val2 integer, constraint my_pk2 primary key (id))");
>         stmt.close();{code}
> With that Thread.sleep(500) added in, it always fails on my laptop. This 
> essentially is mimicking a slower environment, such as travis-ci.





[jira] [Updated] (NIFI-5202) TestListDatabaseTables timing issue

2018-05-17 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5202:
---
Fix Version/s: 1.7.0

> TestListDatabaseTables timing issue
> ---
>
> Key: NIFI-5202
> URL: https://issues.apache.org/jira/browse/NIFI-5202
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.7.0
>
>
> I sometimes see TestListDatabaseTables fail in Travis-CI:
> {code:java}
> [INFO] Results:
> [INFO] 
> [ERROR] Failures: 
> [ERROR]   TestListDatabaseTables.testListTablesMultipleRefresh:218 
> expected:<1> but was:<2>
> [INFO] {code}
> It appears to be a timing issue. When I run it on my laptop, it always 
> succeeds, as-is. However, I can easily reproduce the error above if I update 
> the unit test by adding a Thread.sleep at line 214:
> {code:java}
>         assertEquals("2", 
> results.get(0).getAttribute(ListDatabaseTables.DB_TABLE_COUNT));
>         runner.clearTransferState();
>         Thread.sleep(500);
>         // Add another table immediately, the first table should not be 
> listed again but the second should
>         stmt.execute("create table TEST_TABLE2 (id integer not null, val1 
> integer, val2 integer, constraint my_pk2 primary key (id))");
>         stmt.close();{code}
> With that Thread.sleep(500) added in, it always fails on my laptop. This 
> essentially is mimicking a slower environment, such as travis-ci.





[jira] [Updated] (NIFI-5175) NiFi built with Java 1.8 needs to run on Java 9

2018-05-17 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5175:
---
Fix Version/s: 1.7.0

> NiFi built with Java 1.8 needs to run on Java 9
> ---
>
> Key: NIFI-5175
> URL: https://issues.apache.org/jira/browse/NIFI-5175
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
> Fix For: 1.7.0
>
>
> The following issues have been encountered while attempting to run a Java 
> 1.8-built NiFi on Java 9:
> ||Issue||Solution||Status||
> |JAXB classes cannot be found on the classpath|Add 
> "--add-modules=java.xml.bind" to the command that starts NiFi|Done|
> |NiFi bootstrap not able to determine PID, restarts NiFi after nifi.sh 
> stop|Detect if NiFi is running on Java 9, and reflectively invoke 
> Process.pid(), which was newly added to the Process API in Java 9|Done|
>  
> 
>  
> ||Unaddressed issues/warnings with NiFi compiled on Java 1.8 running on Java 
> 9+||Description||Solution||
> |WARNING: An illegal reflective access operation has occurred
>  ..._specific class usage snipped_...
>  WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
>  WARNING: All illegal access operations will be denied in a future 
> release|Reflective invocations are common in the code used in NiFi and its 
> dependencies in Java 1.8|Full compliant migration to Java 9 and use 
> dependencies that are Java 9 compliant|
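The bootstrap fix described in the table above (reflectively invoking Process.pid() so a NiFi compiled on Java 1.8 can still resolve PIDs when it happens to run on Java 9) can be sketched in plain Java. The helper below is illustrative, not NiFi's actual bootstrap code; the `sleep` command in main is assumed to exist on the PATH:

```java
import java.lang.reflect.Method;

public class PidSketch {
    // Reflectively invoke Process.pid(), which only exists on Java 9+.
    // Returns -1 on Java 8, where the caller must fall back to another
    // mechanism (hypothetical fallback value, chosen for illustration).
    static long pidOf(Process process) {
        try {
            Method pidMethod = Process.class.getMethod("pid");
            return (long) pidMethod.invoke(process);
        } catch (ReflectiveOperationException e) {
            return -1L; // Java 8: Process.pid() does not exist
        }
    }

    public static void main(String[] args) throws Exception {
        // Spawn a short-lived child process (assumes a POSIX 'sleep' binary).
        Process child = new ProcessBuilder("sleep", "1").start();
        System.out.println("child pid: " + pidOf(child));
        child.destroy();
    }
}
```

Because the call site is compiled against reflection rather than the Java 9 API, the same class file loads cleanly on a Java 1.8 runtime.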





[jira] [Resolved] (NIFI-5175) NiFi built with Java 1.8 needs to run on Java 9

2018-05-17 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-5175.

Resolution: Fixed

> NiFi built with Java 1.8 needs to run on Java 9
> ---
>
> Key: NIFI-5175
> URL: https://issues.apache.org/jira/browse/NIFI-5175
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
> Fix For: 1.7.0
>
>
> The following issues have been encountered while attempting to run a Java 
> 1.8-built NiFi on Java 9:
> ||Issue||Solution||Status||
> |JAXB classes cannot be found on the classpath|Add 
> "--add-modules=java.xml.bind" to the command that starts NiFi|Done|
> |NiFi bootstrap not able to determine PID, restarts NiFi after nifi.sh 
> stop|Detect if NiFi is running on Java 9, and reflectively invoke 
> Process.pid(), which was newly added to the Process API in Java 9|Done|
>  
> 
>  
> ||Unaddressed issues/warnings with NiFi compiled on Java 1.8 running on Java 
> 9+||Description||Solution||
> |WARNING: An illegal reflective access operation has occurred
>  ..._specific class usage snipped_...
>  WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
>  WARNING: All illegal access operations will be denied in a future 
> release|Reflective invocations are common in the code used in NiFi and its 
> dependencies in Java 1.8|Full compliant migration to Java 9 and use 
> dependencies that are Java 9 compliant|





[jira] [Commented] (NIFI-5172) PutElasticSearchRecord does not fail individual records

2018-05-16 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477693#comment-16477693
 ] 

Mike Thomsen commented on NIFI-5172:


[~mattyb149] AFAIK ES won't tell you which one failed, so if we only fail 
individual records it would involve hammering ES with 10k - $bad_num individual 
puts. ES and Solr *really* don't like that unless nothing else is going on with 
them, particularly WRT ingestion.

> PutElasticSearchRecord does not fail individual records
> -
>
> Key: NIFI-5172
> URL: https://issues.apache.org/jira/browse/NIFI-5172
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: Juan C. Sequeiros
>Priority: Minor
>
> This is my observation, and I'm not sure whether it is working as expected: 
> when I send my output from MergeRecord (set to 1 max number of records) and 
> one of those records has an invalid timestamp value "bogusdata", ES rejects 
> it, rightly so, since on ES we have a more granular schema template that 
> expects the timestamp as type "date".
>  
> From USER forums Matt Burgess:
> Yes the current behavior is to move the entire input flowfile to
> failure if any errors occur. Some other record-aware processors create
> separate flow files for failed and successful records, but
> PutElasticsearchHttpRecord does not (yet) do that. Please feel free to
> write a Jira for this improvement.
>  
>  
>  





[jira] [Updated] (NIFI-5200) Nested ProcessSession.read resulting in outer stream being closed.

2018-05-16 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5200:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Nested ProcessSession.read resulting in outer stream being closed.
> --
>
> Key: NIFI-5200
> URL: https://issues.apache.org/jira/browse/NIFI-5200
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Peter Radden
>Assignee: Mark Payne
>Priority: Minor
> Fix For: 1.7.0
>
>
> Consider this example processor:
> {code:java}
> FlowFile ff1 = session.write(session.create(),
> (out) -> { out.write(new byte[]{ 'A', 'B' }); });
> FlowFile ff2 = session.write(session.create(),
> (out) -> { out.write('C'); });
> session.read(ff1,
> (in1) -> {
> int a = in1.read();
> session.read(ff2, (in2) -> { int c = in2.read(); });
> int b = in1.read();
> });
> session.transfer(ff1, REL_SUCCESS);
> session.transfer(ff2, REL_SUCCESS);{code}
> The expectation is that a='A', b='B' and c='C'.
> The actual result is that the final call to in1.read() throws due to the 
> underlying stream being closed by the previous session.read on ff2.
> A workaround seems to be to pass the optional parameter to session.read of 
> allowSessionStreamManagement=true.
> Is this expected that nested reads used in this way will not work?
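The expectation in the report (a='A', b='B', c='C') matches how ordinary, independent Java streams behave. The plain-java.io analogue below, with no NiFi session involved, is a sketch of the semantics the reporter expected; it is not the NiFi code path itself:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class NestedReadSketch {
    public static void main(String[] args) throws IOException {
        // Two independent streams, standing in for the two flow files' content.
        try (InputStream in1 = new ByteArrayInputStream(new byte[]{'A', 'B'});
             InputStream in2 = new ByteArrayInputStream(new byte[]{'C'})) {
            int a = in1.read();  // 'A'
            int c = in2.read();  // nested read of the second stream: 'C'
            int b = in1.read();  // outer stream still open here: 'B'
            System.out.printf("a=%c b=%c c=%c%n", a, b, c); // prints "a=A b=B c=C"
        }
        // Inside a NiFi session, the nested session.read() closed the outer
        // stream; the workaround mentioned in the report was the session.read
        // overload taking allowSessionStreamManagement=true.
    }
}
```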





[jira] [Created] (NIFI-5197) Fix invalid expression language scope declarations

2018-05-15 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5197:
--

 Summary: Fix invalid expression language scope declarations
 Key: NIFI-5197
 URL: https://issues.apache.org/jira/browse/NIFI-5197
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Mike Thomsen
Assignee: Mike Thomsen


Some of the expression language scope declarations are wrong, such as declaring 
VARIABLE_REGISTRY and then passing in a flowfile as the argument in the 
processor. These need to be hunted down and cleaned up in one single commit. 
Each set of unit and integration tests should be run to verify where problems 
reside.





[jira] [Assigned] (NIFI-5169) Upgrade to JsonPath 2.4.0

2018-05-15 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen reassigned NIFI-5169:
--

Assignee: Mike Thomsen

> Upgrade to JsonPath 2.4.0
> -
>
> Key: NIFI-5169
> URL: https://issues.apache.org/jira/browse/NIFI-5169
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 1.6.0
>Reporter: Dennis Dahlmann
>Assignee: Mike Thomsen
>Priority: Major
>  Labels: JSON
> Fix For: 1.7.0
>
>
> A newer version (2.4.0) of JsonPath is available at 
> [GitHub|https://github.com/json-path/JsonPath].
> This version fixes a currently existing bug: take this JSON
> {"Epoch timestamp [s]":"1486373924","temperature [C]":"20"}
> and try to get the value of "Epoch timestamp [s]" with $.['Epoch timestamp 
> [s]']. This yields an empty result with version 2.0.0, which NiFi currently 
> uses, but with version 2.4.0 you get the right value.





[jira] [Updated] (NIFI-4971) ReportLineageToAtlas 'complete path' strategy can miss one-time lineages

2018-05-15 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-4971:
---
   Resolution: Fixed
Fix Version/s: 1.7.0
   Status: Resolved  (was: Patch Available)

> ReportLineageToAtlas 'complete path' strategy can miss one-time lineages
> 
>
> Key: NIFI-4971
> URL: https://issues.apache.org/jira/browse/NIFI-4971
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>Priority: Major
> Fix For: 1.7.0
>
>
> For the simplest example, with GetFlowFile (GFF) -> PutFlowFile (PFF), where 
> GFF gets files and PFF saves those files into a different directory, then 
> following provenance events will be generated:
>  # GFF RECEIVE file1
>  # PFF SEND file2
> From the above provenance events, the following entities and lineages should 
> be created in Atlas (labels in brackets are Atlas type names):
> {code}
> file1 (fs_path) -> GFF, PFF (nifi_flow_path) -> file2 (fs_path)
> {code}
> The entities shown in the above graph are created. However, the 
> 'nifi_flow_path' entity does not have inputs/outputs referencing 'fs_path', 
> so the lineage cannot be seen in the Atlas UI.
> This issue was discovered by [~nayakmahesh616]





[jira] [Resolved] (NIFI-5189) If a schema is accessed using 'Use 'Schema Text' Property', the name of the schema is not available

2018-05-14 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-5189.

   Resolution: Fixed
Fix Version/s: 1.7.0

Based on the description in the linked issue and what I saw in the patch, it 
looks correct. Merged.

> If a schema is accessed using 'Use 'Schema Text' Property', the name of the 
> schema is not available
> ---
>
> Key: NIFI-5189
> URL: https://issues.apache.org/jira/browse/NIFI-5189
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Johannes Peter
>Assignee: Johannes Peter
>Priority: Major
> Fix For: 1.7.0
>
>
> If a schema is accessed using 'Use 'Schema Text' Property', the Avro schema 
> object will be transformed to a RecordSchema using the method 
> AvroTypeUtil.create(Schema avroSchema). This method returns a RecordSchema 
> with an empty SchemaIdentifier. Therefore, the name of the schema cannot be 
> accessed. The method should at least return a RecordSchema with a 
> SchemaIdentifier containing the name of the schema. 





[jira] [Updated] (NIFI-5114) Support Basic Authentication at WebSocket components

2018-05-14 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5114:
---
Fix Version/s: 1.7.0

> Support Basic Authentication at WebSocket components
> 
>
> Key: NIFI-5114
> URL: https://issues.apache.org/jira/browse/NIFI-5114
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>Priority: Major
> Fix For: 1.7.0
>
>
> Some WebSocket servers use Basic Authentication for WebSocket upgrade 
> requests. In order to connect to such endpoints, NiFi should allow users to 
> configure a user name and password on the WebSocket client component, 
> specifically JettyWebSocketClientService.
> In addition to that, if JettyWebSocketServerService can be configured to use 
> Basic Authentication, it makes the NiFi WebSocket server more secure and also 
> useful for testing Basic Auth between NiFi WebSocket components. 





[jira] [Updated] (NIFI-5114) Support Basic Authentication at WebSocket components

2018-05-14 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5114:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Checked it over, looked good and all tests ran.

> Support Basic Authentication at WebSocket components
> 
>
> Key: NIFI-5114
> URL: https://issues.apache.org/jira/browse/NIFI-5114
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>Priority: Major
>
> Some WebSocket servers use Basic Authentication for WebSocket upgrade 
> requests. In order to connect to such endpoints, NiFi should allow users to 
> configure a user name and password on the WebSocket client component, 
> specifically JettyWebSocketClientService.
> In addition to that, if JettyWebSocketServerService can be configured to use 
> Basic Authentication, it makes the NiFi WebSocket server more secure and also 
> useful for testing Basic Auth between NiFi WebSocket components. 





[jira] [Updated] (NIFI-5170) Update Grok to 0.1.9

2018-05-14 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5170:
---
Fix Version/s: 1.7.0

> Update Grok to 0.1.9
> 
>
> Key: NIFI-5170
> URL: https://issues.apache.org/jira/browse/NIFI-5170
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Otto Fowler
>Assignee: Otto Fowler
>Priority: Major
> Fix For: 1.7.0
>
>
> Grok 0.1.9 has been released, including work for empty capture support.
>  
> https://github.com/thekrakken/java-grok#maven-repository





[jira] [Resolved] (NIFI-5170) Update Grok to 0.1.9

2018-05-13 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-5170.

Resolution: Fixed

> Update Grok to 0.1.9
> 
>
> Key: NIFI-5170
> URL: https://issues.apache.org/jira/browse/NIFI-5170
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Otto Fowler
>Assignee: Otto Fowler
>Priority: Major
>
> Grok 0.1.9 has been released, including work for empty capture support.
>  
> https://github.com/thekrakken/java-grok#maven-repository





[jira] [Commented] (NIFI-4494) Add a FetchOracleRow processor

2018-05-12 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473295#comment-16473295
 ] 

Mike Thomsen commented on NIFI-4494:


[~fred_liu_2017] can you elaborate on the use case? From the sound of it, this 
is a general problem not limited to Oracle. Also, AFAIK we can't build an 
Oracle-specific processor because the client driver is proprietary and thus 
prohibited by the ASF.

> Add a FetchOracleRow processor
> --
>
> Key: NIFI-4494
> URL: https://issues.apache.org/jira/browse/NIFI-4494
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
> Environment: oracle
>Reporter: Fred Liu
>Priority: Major
>
> We encounter a lot of demanding cases: poor data quality, no primary key, no 
> timestamp, and even a lot of duplicate data. But the customer requires high 
> performance and accuracy.
> Using GenerateTableFetch or QueryDatabaseTable, we cannot meet the 
> functional and performance requirements. So we want to add a new processor 
> specifically for the Oracle database that is able to ingest very poor 
> quality data with better performance.





[jira] [Resolved] (NIFI-4736) New Processor for Fetch MongoDB

2018-05-12 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-4736.

   Resolution: Fixed
Fix Version/s: 1.6.0

> New Processor for Fetch MongoDB
> ---
>
> Key: NIFI-4736
> URL: https://issues.apache.org/jira/browse/NIFI-4736
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.4.0
>Reporter: Preetham Uchil
>Priority: Major
> Fix For: 1.6.0
>
>
> Raising a JIRA for the issue highlighted in the link below. I have the same 
> requirement as Pablo stated:
> "I've just managed to get the PutMongo processor work successfully. 
> However, I just realized that you can't use the GetMongo processor to 
> retrieve data based on the input from another flowfile or attribute. It has 
> no input. That leaves the Mongo database that I sent data to a bit orphaned 
> if you can't retrieve data based on another source. 
> Does anybody have an alternative on how to get a specific record from MongoDB 
> based on an input?"
> http://apache-nifi-users-list.2361937.n4.nabble.com/GetMongo-Processor-Alternative-td702.html





[jira] [Commented] (NIFI-4736) New Processor for Fetch MongoDB

2018-05-12 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473294#comment-16473294
 ] 

Mike Thomsen commented on NIFI-4736:


[~upreetham] closing this out because your concern here is now addressed in 
1.6.0:
{quote}However, I just realized that you can't use the GetMongo processor to 
retrieve data based on the input from another flowfile or attribute. It has no 
input.
{quote}

> New Processor for Fetch MongoDB
> ---
>
> Key: NIFI-4736
> URL: https://issues.apache.org/jira/browse/NIFI-4736
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.4.0
>Reporter: Preetham Uchil
>Priority: Major
>
> Raising a JIRA for the issue highlighted in the link below. I have the same 
> requirement as Pablo stated:
> "I've just managed to get the PutMongo processor work successfully. 
> However, I just realized that you can't use the GetMongo processor to 
> retrieve data based on the input from another flowfile or attribute. It has 
> no input. That leaves the Mongo database that I sent data to a bit orphaned 
> if you can't retrieve data based on another source. 
> Does anybody have an alternative on how to get a specific record from MongoDB 
> based on an input?"
> http://apache-nifi-users-list.2361937.n4.nabble.com/GetMongo-Processor-Alternative-td702.html





[jira] [Commented] (NIFI-4845) Add JanusGraph put processor

2018-05-12 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473293#comment-16473293
 ] 

Mike Thomsen commented on NIFI-4845:


[~liufucai-inspur] a discussion related to this just popped up on the developer 
mailing list. If you're interested, join the list and the discussion about 
graph DBs because there might be some real interest and momentum starting to 
build up.

> Add JanusGraph put processor
> 
>
> Key: NIFI-4845
> URL: https://issues.apache.org/jira/browse/NIFI-4845
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Fred Liu
>Assignee: Fred Liu
>Priority: Major
>
> Create a processor for reading records from an incoming FlowFile using the 
> provided Record Reader and writing those records to JanusGraph. Using a 
> JanusGraphControllerService would be good.





[jira] [Commented] (NIFI-4904) PutElasticSearch5 should support higher than elasticsearch 5.0.0

2018-05-12 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473292#comment-16473292
 ] 

Mike Thomsen commented on NIFI-4904:


The transport protocol is being deprecated in favor of HTTP/REST by Elastic, so 
new ES functionality is being steadily developed around the official client 
APIs for REST. 
[Source|https://www.elastic.co/guide/en/elasticsearch/client/java-api/master/transport-client.html]

> PutElasticSearch5 should support higher than elasticsearch 5.0.0
> 
>
> Key: NIFI-4904
> URL: https://issues.apache.org/jira/browse/NIFI-4904
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: Ubuntu
>Reporter: Dye357
>Priority: Trivial
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Currently the PutElasticSearch5 component is using the following transport 
> artifact:
> {code:xml}
> <dependency>
>   <groupId>org.elasticsearch.client</groupId>
>   <artifactId>transport</artifactId>
>   <version>${es.version}</version>
> </dependency>
> {code}
> where es.version is 5.0.1. Upgrading to the highest 5.x dependency would 
> enable this component to be compatible with later 5.x versions of 
> Elasticsearch as well as early versions of Elasticsearch 6.x.
> Here is Nifi 1.5.0 connecting to ES 6.2.1 on port 9300:
> [2018-02-23T01:41:04,162][WARN ][o.e.t.n.Netty4Transport ] [uQSW8O8] 
> exception caught on transport layer 
> [NettyTcpChannel\{localAddress=/127.0.0.1:9300, 
> remoteAddress=/127.0.0.1:57457}], closing connection
> java.lang.IllegalStateException: Received message from unsupported version: 
> [5.0.0] minimal compatible version is: [5.6.0]
>  at 
> org.elasticsearch.transport.TcpTransport.ensureVersionCompatibility(TcpTransport.java:1430)
>  ~[elasticsearch-6.2.1.jar:6.2.1]
>  at 
> org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1377)
>  ~[elasticsearch-6.2.1.jar:6.2.1]
>  at 
> org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:64)
>  ~[transport-netty4-6.2.1.jar:6.2.1]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
>  [netty-codec-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297)
>  [netty-codec-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413)
>  [netty-codec-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
>  [netty-codec-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) 
> [netty-handler-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  [netty-transport-4.1.16.Final.jar:4.1.16.Final]
>  at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
>  

[jira] [Commented] (NIFI-4964) Add bulk lookup feature in LookupRecord

2018-05-12 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473287#comment-16473287
 ] 

Mike Thomsen commented on NIFI-4964:


The HBase service can batch a lot of Gets at once to do that, but for the other 
services I can't think of a good way to make one bulk request to the external 
system that won't confuse things badly.

> Add bulk lookup feature in LookupRecord
> ---
>
> Key: NIFI-4964
> URL: https://issues.apache.org/jira/browse/NIFI-4964
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Priority: Major
>
> When a flow file has a large number of records, it would be much more 
> efficient to parse the whole flow file once to list all the coordinates to 
> look for, then call a new method (lookupAll?) in the lookup service to get 
> all the results, and then parse the file one more time to update the records.
> It should be noted in the CS description/annotations that this approach could 
> hold a large number of objects in memory but could result in better 
> performance for lookup services accessing external systems (Mongo, HBase, 
> etc).
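The three-pass approach described above (collect coordinates, one bulk lookup, then update records) can be sketched as follows. `lookupAll` is the hypothetical method name floated in the description, and the in-memory map stands in for an external store such as HBase or Mongo:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BulkLookupSketch {
    // Hypothetical lookupAll: one bulk call instead of one lookup per record.
    // The externalStore map stands in for the external system.
    static Map<String, String> lookupAll(List<String> coordinates,
                                         Map<String, String> externalStore) {
        Map<String, String> results = new HashMap<>();
        for (String key : coordinates) {
            String value = externalStore.get(key);
            if (value != null) {
                results.put(key, value);
            }
        }
        return results;
    }

    public static void main(String[] args) {
        Map<String, String> store = new HashMap<>();
        store.put("id-1", "alpha");
        store.put("id-2", "beta");

        // Pass 1: parse the flow file once, collecting every coordinate.
        List<String> coordinates = List.of("id-1", "id-2", "id-3");
        // Pass 2: a single bulk lookup against the external system.
        Map<String, String> results = lookupAll(coordinates, store);
        // Pass 3: re-parse the flow file, enriching each record from 'results'.
        System.out.println(results.size()); // prints "2": id-3 had no match
    }
}
```

The trade-off flagged in the description shows up in pass 2: `results` holds every matched value in memory at once, in exchange for a single round trip to the external system.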





[jira] [Resolved] (NIFI-5130) ExecuteInfluxDBQuery processor chunking support

2018-05-12 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-5130.

Resolution: Fixed

Merged.

> ExecuteInfluxDBQuery processor chunking support
> ---
>
> Key: NIFI-5130
> URL: https://issues.apache.org/jira/browse/NIFI-5130
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Michał Misiewicz
>Priority: Minor
>
> Many production InfluxDB installation has limited number of rows returned in 
> a single query (by default 10k). In case of huge collections, 10k rows can 
> correspond to less than 1 minute of events, which make usage of 
> ExecuteInfluxDBQuery processor inconvenient. I suggest adding support for 
> chunking queries. Chunking can be used to return results in a stream of 
> smaller batches (each has a partial results up to a chunk size) rather than 
> as a single response. Chunking query can return an unlimited number of rows.





[jira] [Resolved] (NIFI-5049) Fix handling of Phoenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-05-11 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-5049.

Resolution: Fixed

> Fix handling of Phoenix datetime columns in QueryDatabaseTable and 
> GenerateTableFetch
> --
>
> Key: NIFI-5049
> URL: https://issues.apache.org/jira/browse/NIFI-5049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Gardella Juan Pablo
>Assignee: Matt Burgess
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> QueryDatabaseAdapter does not work against Phoenix DB when it needs to 
> convert TIMESTAMP. The error is described below:
> [https://stackoverflow.com/questions/45989678/convert-varchar-to-timestamp-in-hbase]
> Basically, it's required to use TO_TIMESTAMP(MAX_COLUMN) to make it work. 
> See 
> [https://lists.apache.org/thread.html/%3cca+kifscje8ay+uxt_d_vst4qgzf4jxwovboynjgztt4dsbs...@mail.gmail.com%3E]





[jira] [Updated] (NIFI-5121) DBCPService should support passing in an attribute map when obtaining a connection

2018-05-11 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5121:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> DBCPService should support passing in an attribute map when obtaining a 
> connection
> --
>
> Key: NIFI-5121
> URL: https://issues.apache.org/jira/browse/NIFI-5121
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Matt Burgess
>Priority: Minor
>
> Many users have asked for a way to obtain dynamic database connections. 
> Essentially being able to use the existing SQL processors like PutSQL, etc, 
> and be able to pass in flow file attributes to the DBCPService to obtain a 
> connection based on the attributes.
> The current DBCPService interface has a single method:
> {code:java}
> Connection getConnection(){code}
> Since there is no way for a processor to pass in any information, we can add 
> an additional method to this interface and make the interface like this:
> {code:java}
> Connection getConnection(Map<String, String> attributes)
> default Connection getConnection() {
>   return getConnection(Collections.emptyMap());
> }{code}
> This would leave it up to the implementations of DBCPService interface to 
> decide if they want to use the attributes map for anything.
> The DBCPConnectionPool would not use the attributes map and would continue to 
> provide a fixed connection pool against a single data source.
> A new implementation can then be created that somehow maintains multiple 
> connection pools, or creates connections on the fly.
> The PropertyDescriptors in each implementation should indicate how they use 
> expression language.
> For example, since DBCPConnectionPool will not use the attribute map, its 
> property descriptors will indicate an expression language scope of variable 
> registry only:
> {code:java}
> .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY){code}
> The dynamic implementations should indicate:
> {code:java}
> .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES){code}
>  
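The interface change described above can be sketched as follows. This is a simplified stand-in for the NiFi types, not the real API: `RecordingPool` and its field names are illustrative assumptions used to show how the default method preserves existing callers.

```java
import java.sql.Connection;
import java.util.Collections;
import java.util.Map;

// Simplified stand-in for the NiFi DBCPService interface discussed above.
interface DBCPService {
    // New method: implementations may consult the flow file attribute map.
    Connection getConnection(Map<String, String> attributes);

    // Default method keeps existing callers compiling and behaving as before.
    default Connection getConnection() {
        return getConnection(Collections.emptyMap());
    }
}

// Illustrative implementation that records what it was given; a real pool
// would hand out a pooled java.sql.Connection instead of null.
class RecordingPool implements DBCPService {
    Map<String, String> lastAttributes;

    @Override
    public Connection getConnection(Map<String, String> attributes) {
        this.lastAttributes = attributes;
        return null;
    }
}
```

The default method is what makes this change backward compatible: existing processors that call the no-argument `getConnection()` delegate to the new method with an empty map, so only implementations that care about attributes need to change.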





[jira] [Updated] (NIFI-4393) AbstractDatabaseFetchProcessor handles SQL Server brackets incorrectly

2018-05-03 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-4393:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I checked it over, and things LGTM as well. Merged.

> AbstractDatabaseFetchProcessor handles SQL Server brackets incorrectly
> --
>
> Key: NIFI-4393
> URL: https://issues.apache.org/jira/browse/NIFI-4393
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.4.0
>Reporter: Mikołaj Siedlarek
>Assignee: Koji Kawamura
>Priority: Major
> Attachments: 
> 0001-Handle-SQL-Server-square-brackets-in-AbstractDatabas.patch
>
>
> SQL Server allows column names to contain whitespace, in which case they are 
> written in SQL queries inside square brackets. When using processors based on 
> {{AbstractDatabaseFetchProcessor}}, such as {{QueryDatabaseTable}} they have 
> to be specified in  "Maximum-value Columns" in square brackets, because 
> otherwise they would break a SELECT query. However, when such a column name is 
> retrieved from ResultSetMetaData, the driver returns it without square 
> brackets. This causes a mismatch between the key used to save last seen 
> maximum value in processor's state and the one used to search for the value 
> later.
> I'm not sure whether the attached patch is very elegant, but it fixes the 
> issue in the simplest way possible.





[jira] [Created] (NIFI-5145) MockPropertyValue.evaluateExpressionLanguage(FlowFile) cannot handle null inputs

2018-05-03 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5145:
--

 Summary: MockPropertyValue.evaluateExpressionLanguage(FlowFile) 
cannot handle null inputs
 Key: NIFI-5145
 URL: https://issues.apache.org/jira/browse/NIFI-5145
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Mike Thomsen
Assignee: Mike Thomsen


The method named in the title cannot handle null inputs, even though the main 
NiFi execution classes can handle that scenario. This forces a hack like the 
following to get tests passing with nulls:

String val = flowFile != null
    ? context.getProperty(PROP).evaluateExpressionLanguage(flowFile).getValue()
    : context.getProperty(PROP).evaluateExpressionLanguage(new HashMap<>()).getValue();
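Until the mock is fixed, a null-safe wrapper along these lines avoids scattering that ternary through test code. The types below are simplified stand-ins for the NiFi mock classes, not the real API, and the helper name is an assumption:

```java
import java.util.Collections;
import java.util.Map;

// Simplified stand-in for a property value whose EL evaluation rejects nulls.
class MockPropertyValue {
    private final String raw;

    MockPropertyValue(String raw) { this.raw = raw; }

    // Mimics the broken behavior described above: a null input throws.
    String evaluateExpressionLanguage(Map<String, String> flowFileAttributes) {
        if (flowFileAttributes == null) {
            throw new IllegalArgumentException("input cannot be null");
        }
        return raw; // real code would substitute EL expressions here
    }
}

// Null-safe helper: falls back to an empty attribute map, matching what
// the main NiFi execution classes effectively do.
final class ElSupport {
    static String evaluateSafely(MockPropertyValue value, Map<String, String> attrs) {
        return value.evaluateExpressionLanguage(attrs == null ? Collections.emptyMap() : attrs);
    }
}
```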





[jira] [Created] (NIFI-5127) Create JSON/java.util.Map to RecordSchema helper capability

2018-04-27 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5127:
--

 Summary: Create JSON/java.util.Map to RecordSchema helper 
capability
 Key: NIFI-5127
 URL: https://issues.apache.org/jira/browse/NIFI-5127
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen


The ElasticSearchLookupService has the start of a JSON -> RecordSchema 
conversion utility method that could be useful to others. It should be extended 
and tested to provide a flexible, generic helper method that can look at a JSON 
string or a java.util.Map and roughly build a RecordSchema from it.
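The rough shape of such a helper, sketched here over a plain java.util.Map; the class name and the coarse type labels are illustrative assumptions, not the actual Record API:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative schema inference: maps each field to a coarse type name,
// recursing into nested maps the way a BSON/JSON document would nest.
final class SchemaGuesser {
    static Map<String, Object> guessSchema(Map<String, Object> doc) {
        Map<String, Object> schema = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : doc.entrySet()) {
            schema.put(e.getKey(), guessType(e.getValue()));
        }
        return schema;
    }

    @SuppressWarnings("unchecked")
    private static Object guessType(Object value) {
        if (value instanceof Map) return guessSchema((Map<String, Object>) value);
        if (value instanceof List) return "array";
        if (value instanceof Integer || value instanceof Long) return "long";
        if (value instanceof Number) return "double";
        if (value instanceof Boolean) return "boolean";
        return "string"; // strings, nulls, and anything unrecognized
    }
}
```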





[jira] [Created] (NIFI-5104) Create new processor PutFoundationDB

2018-04-20 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5104:
--

 Summary: Create new processor PutFoundationDB
 Key: NIFI-5104
 URL: https://issues.apache.org/jira/browse/NIFI-5104
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Mike Thomsen
Assignee: Mike Thomsen


A processor capable of transactionally putting data into FoundationDB is 
needed. At a minimum, it should accept flow file content as key/value pairs, 
with a configurable separator between pairs and a configurable separator 
between the key and value pieces.
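The parsing step that description implies could look roughly like this; the separator defaults and class name are assumptions, not the eventual processor's properties:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Splits flow file content into key/value pairs using configurable separators,
// e.g. "a=1;b=2" with pairSeparator=";" and keyValueSeparator="=".
final class KeyValueParser {
    static Map<String, String> parse(String content, String pairSeparator, String keyValueSeparator) {
        Map<String, String> pairs = new LinkedHashMap<>();
        for (String pair : content.split(Pattern.quote(pairSeparator))) {
            if (pair.isEmpty()) continue;
            // Split on the first key/value separator only, so values may contain it.
            String[] kv = pair.split(Pattern.quote(keyValueSeparator), 2);
            if (kv.length == 2) {
                pairs.put(kv[0].trim(), kv[1].trim());
            }
        }
        return pairs;
    }
}
```

Quoting the separators with `Pattern.quote` lets users pick characters such as `|` or `.` that would otherwise be interpreted as regex metacharacters.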





[jira] [Commented] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided

2018-04-10 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433079#comment-16433079
 ] 

Mike Thomsen commented on NIFI-5059:


Done [~mattyb149]

> MongoDBLookupService should be able to determine a schema or have one provided
> --
>
> Key: NIFI-5059
> URL: https://issues.apache.org/jira/browse/NIFI-5059
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> MongoDBLookupService should have two schema handling modes:
>  # Where a schema is provided as a configuration parameter to be applied to 
> the Record object generated from the result document.
>  # A schema will be generated by examining the result object and building one 
> that roughly translates from BSON into the Record API.
> In both cases, the schema will be applied to the Mongo result Document object 
> that is returned if one comes back.





[jira] [Updated] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided

2018-04-10 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5059:
---
Description: 
MongoDBLookupService should have two schema handling modes:
 # Where a schema is provided as a configuration parameter to be applied to the 
Record object generated from the result document.
 # A schema will be generated by examining the result object and building one 
that roughly translates from BSON into the Record API.

In both cases, the schema will be applied to the Mongo result Document object 
that is returned if one comes back.

> MongoDBLookupService should be able to determine a schema or have one provided
> --
>
> Key: NIFI-5059
> URL: https://issues.apache.org/jira/browse/NIFI-5059
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> MongoDBLookupService should have two schema handling modes:
>  # Where a schema is provided as a configuration parameter to be applied to 
> the Record object generated from the result document.
>  # A schema will be generated by examining the result object and building one 
> that roughly translates from BSON into the Record API.
> In both cases, the schema will be applied to the Mongo result Document object 
> that is returned if one comes back.





[jira] [Created] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided

2018-04-09 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5059:
--

 Summary: MongoDBLookupService should be able to determine a schema 
or have one provided
 Key: NIFI-5059
 URL: https://issues.apache.org/jira/browse/NIFI-5059
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen
Assignee: Mike Thomsen








[jira] [Created] (NIFI-5053) Docker image should provide parameters for removing NARs

2018-04-07 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5053:
--

 Summary: Docker image should provide parameters for removing NARs
 Key: NIFI-5053
 URL: https://issues.apache.org/jira/browse/NIFI-5053
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen


There should be an option that lets the user specify a list of NARs to remove 
in order to lighten a Dockerized deployment.





[jira] [Created] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-07 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5052:
--

 Summary: Create a "delete by query" ElasticSearch processor
 Key: NIFI-5052
 URL: https://issues.apache.org/jira/browse/NIFI-5052
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Mike Thomsen
Assignee: Mike Thomsen








[jira] [Created] (NIFI-5051) Create a LookupService that uses ElasticSearch

2018-04-07 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5051:
--

 Summary: Create a LookupService that uses ElasticSearch
 Key: NIFI-5051
 URL: https://issues.apache.org/jira/browse/NIFI-5051
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Mike Thomsen
Assignee: Mike Thomsen








[jira] [Created] (NIFI-5047) PutMongo checks for query key when mode is insert

2018-04-06 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5047:
--

 Summary: PutMongo checks for query key when mode is insert
 Key: NIFI-5047
 URL: https://issues.apache.org/jira/browse/NIFI-5047
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Mike Thomsen
Assignee: Mike Thomsen








[jira] [Created] (NIFI-5040) JsonPathExpressionValidator uses internal APIs that break with library upgrade

2018-04-04 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5040:
--

 Summary: JsonPathExpressionValidator uses internal APIs that break 
with library upgrade
 Key: NIFI-5040
 URL: https://issues.apache.org/jira/browse/NIFI-5040
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen


JsonPathExpressionValidator uses a number of internal APIs from 
com.jayway.jsonpath. When I tried to upgrade from 2.0 to 2.2, 
JsonPathExpressionValidator broke due to several of the classes no longer being 
publicly accessible.





[jira] [Created] (NIFI-4989) PutMongo cannot specify nested fields or multiple lookup keys

2018-03-16 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-4989:
--

 Summary: PutMongo cannot specify nested fields or multiple lookup 
keys
 Key: NIFI-4989
 URL: https://issues.apache.org/jira/browse/NIFI-4989
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen
Assignee: Mike Thomsen


From the user mailing list:
{quote}I am using the PutMongo processor to update documents in a MongoDB collection.
I would like to specify a nested field and multiple keys in the update key.

For example:
- my MongoDB documents are {"nom":"HAMEL", "prenom":"Yves", "ids":\{"idSoc":"1234", "idInterne":"788"}}
- I would like to set the update query key to something like {"ids.idSoc", "nom"}
- this would mean that I update the document whose ids.idSoc and nom equal those of my new document.
{quote}
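The core of that request is resolving a dotted key like ids.idSoc against a nested document before building the update filter. A sketch of that lookup over plain maps, with an illustrative class name rather than anything from the PutMongo code:

```java
import java.util.Map;

// Resolves a MongoDB-style dotted path ("ids.idSoc") against nested maps,
// which is what building an update filter from such keys requires.
final class DottedPaths {
    static Object resolve(Map<String, Object> document, String dottedPath) {
        Object current = document;
        for (String segment : dottedPath.split("\\.")) {
            if (!(current instanceof Map)) {
                return null; // path runs past the end of the document
            }
            current = ((Map<?, ?>) current).get(segment);
        }
        return current;
    }
}
```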





[jira] [Commented] (NIFI-4975) Add support for MongoDB GridFS

2018-03-14 Thread Mike Thomsen (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399074#comment-16399074
 ] 

Mike Thomsen commented on NIFI-4975:


[~VijayJain]

I actually just submitted the PR which contains the processors for this ticket. 
If you would like to contribute, you can do a code review 
[here|https://github.com/apache/nifi/pull/2546].

> Add support for MongoDB GridFS
> --
>
> Key: NIFI-4975
> URL: https://issues.apache.org/jira/browse/NIFI-4975
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> [An overview|https://docs.mongodb.com/manual/core/gridfs/] of what GridFS is.
> Basic CRUD processors for handling GridFS should be added to the MongoDB NAR.
>  
>  





[jira] [Created] (NIFI-4975) Add support for MongoDB GridFS

2018-03-14 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-4975:
--

 Summary: Add support for MongoDB GridFS
 Key: NIFI-4975
 URL: https://issues.apache.org/jira/browse/NIFI-4975
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen
Assignee: Mike Thomsen


[An overview|https://docs.mongodb.com/manual/core/gridfs/] of what GridFS is.

Basic CRUD processors for handling GridFS should be added to the MongoDB NAR.






[jira] [Created] (NIFI-4949) Convert MongoDB lookup service unit tests to integration tests (where appropriate)

2018-03-08 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-4949:
--

 Summary: Convert MongoDB lookup service unit tests to integration 
tests (where appropriate)
 Key: NIFI-4949
 URL: https://issues.apache.org/jira/browse/NIFI-4949
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen
Assignee: Mike Thomsen








[jira] [Created] (NIFI-4929) Convert existing MongoDB unit tests to integration tests

2018-03-03 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-4929:
--

 Summary: Convert existing MongoDB unit tests to integration tests
 Key: NIFI-4929
 URL: https://issues.apache.org/jira/browse/NIFI-4929
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen
Assignee: Mike Thomsen


Most of the existing MongoDB unit tests are actually integration tests that 
require a live MongoDB installation. They are marked with @Ignore, which is a 
very suboptimal testing setup: a reviewer who is not watching for that 
annotation can run the tests and believe everything passed.





[jira] [Assigned] (NIFI-4743) Suppress Nulls for PutElasticsearchHttpRecord

2018-03-01 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen reassigned NIFI-4743:
--

Assignee: Mike Thomsen

> Suppress Nulls for PutElasticsearchHttpRecord
> -
>
> Key: NIFI-4743
> URL: https://issues.apache.org/jira/browse/NIFI-4743
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Robert Bruno
>Assignee: Mike Thomsen
>Priority: Minor
> Attachments: NullSuppression.java, PutElasticsearchHttpRecord.java
>
>
> Would be useful for PutElasticsearchHttpRecord to allow you to suppress NULL 
> values in the JSON that is inserted into ES much like the JsonRecordSetWriter 
> allows you to do.  Perhaps PutElasticsearchHttpRecord could somehow make use 
> of JsonRecordSetWriter so it would inherit this functionality.





[jira] [Created] (NIFI-4867) Have HBase Client Service periodically poll the HBase cluster for the master's identity

2018-02-11 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-4867:
--

 Summary: Have HBase Client Service periodically poll the HBase 
cluster for the master's identity
 Key: NIFI-4867
 URL: https://issues.apache.org/jira/browse/NIFI-4867
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen


The change introduced in #4866 caches the identity of the HBase master. It 
should have a configurable refresh period after which the client verifies that 
the master has not changed and, if it has, updates the cached information so 
provenance can stay up to date.





[jira] [Updated] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-02-03 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-4838:
---
Description: 
It shouldn't wait until the end to do a commit() call because the effect is 
that GetMongo looks like it has hung to a user who is pulling a very large data 
set.

It should also have an option for running a count query to get the current 
approximate count of documents that would match the query and append an 
attribute that indicates where a flowfile stands in the total result count. Ex:

query.progress.point.start = 2500

query.progress.point.end = 5000

query.count.estimate = 17,568,231

  was:It shouldn't wait until the end to do a commit() call because the effect 
is that GetMongo looks like it has hung to a user who is pulling a very large 
data set.

Summary: Make GetMongo support multiple commits and give some progress 
indication  (was: Make GetMongo support multiple commits)

> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call because the effect is 
> that GetMongo looks like it has hung to a user who is pulling a very large 
> data set.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231




