[GitHub] nifi issue #2089: Nifi-ldap-iaa support for PasswordComparisonAuthenticator

2017-10-06 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2089
  
I would suggest adding user notes and clear documentation to ensure users 
are aware that while `SHA-1` and salted SHA-1 (`SSHA`) hashing are supported 
for compatibility with existing systems, [RFC 
2307](https://tools.ietf.org/html/rfc2307#section-7) notes that these 
algorithms are old, and [SHA-1 has seen collisions in the 
wild](https://shattered.io/). 

There is [extended 
support](https://www.redpill-linpro.com/techblog/2016/08/16/ldap-password-hash.html)
 via the OS `crypt` module for stronger hashing algorithms like `SHA-512` with 
additional rounds, effectively creating a key-stretching algorithm similar to 
PBKDF2. 

Additional reference: [OpenLDAP 
FAQ](http://www.openldap.org/faq/data/cache/347.html)
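To make the review note concrete, here is a minimal sketch of how an RFC 2307-style `{SSHA}` value is computed and then verified by password comparison. The helper names (`ssha`, `verify`) are illustrative, not from the PR; the scheme itself is `base64(SHA-1(password + salt) + salt)`.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;

public class SshaExample {
    // Builds an RFC 2307-style {SSHA} value: base64(SHA-1(password + salt) + salt).
    static String ssha(String password, byte[] salt) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        md.update(password.getBytes(StandardCharsets.UTF_8));
        md.update(salt);
        byte[] digest = md.digest();
        byte[] out = new byte[digest.length + salt.length];
        System.arraycopy(digest, 0, out, 0, digest.length);
        System.arraycopy(salt, 0, out, digest.length, salt.length);
        return "{SSHA}" + Base64.getEncoder().encodeToString(out);
    }

    // Verifies a candidate password against a stored {SSHA} value by
    // recovering the salt (everything after the 20-byte SHA-1 digest)
    // and re-hashing -- the essence of password-comparison authentication.
    static boolean verify(String password, String stored) throws NoSuchAlgorithmException {
        byte[] decoded = Base64.getDecoder().decode(stored.substring("{SSHA}".length()));
        byte[] salt = new byte[decoded.length - 20];
        System.arraycopy(decoded, 20, salt, 0, salt.length);
        return MessageDigest.isEqual(
                ssha(password, salt).getBytes(StandardCharsets.UTF_8),
                stored.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] salt = new byte[8];
        new SecureRandom().nextBytes(salt);
        String stored = ssha("correct horse", salt);
        System.out.println(verify("correct horse", stored)); // true
        System.out.println(verify("wrong", stored));         // false
    }
}
```

As the comment above notes, SHA-1 is fast and has known collisions, which is exactly why a salted single hash like this is weaker than a deliberately slow, iterated scheme such as crypt with rounds or PBKDF2.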


---


[jira] [Assigned] (NIFI-4293) Nifi-ldap-iaa support for PasswordComparisonAuthenticator

2017-10-06 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto reassigned NIFI-4293:
---

Assignee: Andy LoPresto

> Nifi-ldap-iaa support for PasswordComparisonAuthenticator
> -
>
> Key: NIFI-4293
> URL: https://issues.apache.org/jira/browse/NIFI-4293
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Felix Albani
>Assignee: Andy LoPresto
>Priority: Critical
>
> NiFi uses BindAuthenticator to authenticate with LDAP, but we have 
> encountered cases where LDAP has disabled or restricted binding with the 
> user DN as a security measure. I've created this Jira to add support for 
> PasswordComparisonAuthenticator to cover that gap.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4293) Nifi-ldap-iaa support for PasswordComparisonAuthenticator

2017-10-06 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-4293:

Status: Patch Available  (was: In Progress)

> Nifi-ldap-iaa support for PasswordComparisonAuthenticator
> -
>
> Key: NIFI-4293
> URL: https://issues.apache.org/jira/browse/NIFI-4293
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Felix Albani
>Assignee: Andy LoPresto
>Priority: Critical
>
> NiFi uses BindAuthenticator to authenticate with LDAP, but we have 
> encountered cases where LDAP has disabled or restricted binding with the 
> user DN as a security measure. I've created this Jira to add support for 
> PasswordComparisonAuthenticator to cover that gap.





[jira] [Commented] (NIFI-4293) Nifi-ldap-iaa support for PasswordComparisonAuthenticator

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195547#comment-16195547
 ] 

ASF GitHub Bot commented on NIFI-4293:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2089
  
Please start your commit messages with the Jira ticket number you are 
working on. This helps reviewers and the community track the code. 

Example: "NIFI-4293 Added nifi-ldap-iaa support for 
PasswordComparisonAuthenticator". 


> Nifi-ldap-iaa support for PasswordComparisonAuthenticator
> -
>
> Key: NIFI-4293
> URL: https://issues.apache.org/jira/browse/NIFI-4293
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Felix Albani
>Priority: Critical
>
> NiFi uses BindAuthenticator to authenticate with LDAP, but we have 
> encountered cases where LDAP has disabled or restricted binding with the 
> user DN as a security measure. I've created this Jira to add support for 
> PasswordComparisonAuthenticator to cover that gap.





[GitHub] nifi pull request #2143: NiFi-4338: Add documents for how to use SSL protoco...

2017-10-06 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2143#discussion_r143318740
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/CreateHadoopSequenceFile.java
 ---
@@ -64,7 +64,8 @@
 @SideEffectFree
 @InputRequirement(Requirement.INPUT_REQUIRED)
 @Tags({"hadoop", "sequence file", "create", "sequencefile"})
-@CapabilityDescription("Creates Hadoop Sequence Files from incoming flow 
files")
+@CapabilityDescription("Creates Hadoop Sequence Files from incoming flow 
files."
++ " If you want to use SSL-secured file system like swebhdfs, 
please see the 'SSL Configuration' topic of the 'Additional Details' of 
PutHDFS.")
--- End diff --

@joewitt I agree: the explanation is not a 'capability description' but 
rather documentation of a configuration detail.

Is there a better place to write a common topic like this for components 
within a NAR package? Something like 'package.html' in Javadoc, describing what 
the NAR includes, common configuration, and what types of flows users can 
construct with the provided processors, etc. Is there any improvement underway in 
the 'Extension Registry' sub-project for such documentation? If not, does it sound 
helpful? If so, I will raise a JIRA to add 'bundle.html' or 'nar.html' 
(probably with Markdown support). What do you think?


---


[jira] [Created] (MINIFICPP-253) ContentRepository should remove items from count_map_.

2017-10-06 Thread marco polo (JIRA)
marco polo created MINIFICPP-253:


 Summary: ContentRepository should remove items from count_map_. 
 Key: MINIFICPP-253
 URL: https://issues.apache.org/jira/browse/MINIFICPP-253
 Project: NiFi MiNiFi C++
  Issue Type: Bug
Reporter: marco polo


removeIfOrphaned in ContentRepository should remove the key/value pair from 
count_map_. 





[jira] [Created] (MINIFICPP-252) ContentRepository stream count can be replaced by an atomic hash map

2017-10-06 Thread marco polo (JIRA)
marco polo created MINIFICPP-252:


 Summary: ContentRepository stream count can be replaced by an 
atomic hash map
 Key: MINIFICPP-252
 URL: https://issues.apache.org/jira/browse/MINIFICPP-252
 Project: NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: marco polo
Priority: Minor


Something like 
https://github.com/facebook/folly/blob/master/folly/docs/AtomicHashMap.md 
can be used in place of a mutex. While I have not seen this become a 
bottleneck, I'm creating this ticket to explore whether or not this is a true 
problem through profiling, and if so, explore an atomic mapping. 
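For illustration only (the MiNiFi C++ code itself would use something like folly's AtomicHashMap), here is the equivalent lock-free pattern sketched in Java: per-key stream counting via a concurrent map of atomic counters instead of a single mutex guarding the whole map. The class and method names are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class StreamCountSketch {
    // Lock-free per-key stream counting, analogous to replacing a
    // mutex-guarded count_map_ with an atomic/concurrent hash map.
    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    int increment(String key) {
        // computeIfAbsent is atomic per key; no global lock is taken.
        return counts.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
    }

    void removeIfOrphaned(String key) {
        // Mirrors the MINIFICPP-253 concern: drop the entry once
        // no streams reference it, so the map does not grow unbounded.
        counts.computeIfPresent(key, (k, v) -> v.get() <= 0 ? null : v);
    }

    public static void main(String[] args) {
        StreamCountSketch s = new StreamCountSketch();
        System.out.println(s.increment("a")); // 1
        System.out.println(s.increment("a")); // 2
    }
}
```

Profiling, as the ticket suggests, should decide whether the contention is real before the mutex is replaced.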





[jira] [Created] (MINIFICPP-251) Move Civet implementations to an extension.

2017-10-06 Thread marco polo (JIRA)
marco polo created MINIFICPP-251:


 Summary: Move Civet implementations to an extension.
 Key: MINIFICPP-251
 URL: https://issues.apache.org/jira/browse/MINIFICPP-251
 Project: NiFi MiNiFi C++
  Issue Type: Bug
Reporter: marco polo


We should be able to move anything that uses Civet, such as ListenHTTP, and any 
of its dependencies (such as Boost) to an extension, and allow Boost, Civet, and 
any related libraries to be excluded from the build with an argument.





[jira] [Commented] (NIFI-4307) Add Kotlin support to ExecuteScript

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195460#comment-16195460
 ] 

ASF GitHub Bot commented on NIFI-4307:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2104
  
I think Kotlin makes no sense in light of this because it's really not a 
scripting language. Focusing on InvokeScriptProcessor would make more sense 
because that is where a language like Kotlin could really shine. It also opens 
up the potential use of Scala there too.


> Add Kotlin support to ExecuteScript
> ---
>
> Key: NIFI-4307
> URL: https://issues.apache.org/jira/browse/NIFI-4307
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> Kotlin has a ScriptEngine implementation as of v1.1. Add support for it in 
> NiFi.






[GitHub] nifi pull request #2089: Nifi-ldap-iaa support for PasswordComparisonAuthent...

2017-10-06 Thread alopresto
Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2089#discussion_r143315312
  
--- Diff: 
nifi-nar-bundles/nifi-ldap-iaa-providers-bundle/nifi-ldap-iaa-providers/src/main/java/org/apache/nifi/ldap/LdapProvider.java
 ---
@@ -193,8 +192,41 @@ public final void onConfigured(final 
LoginIdentityProviderConfigurationContext c
 
 final LdapUserSearch userSearch = new 
FilterBasedLdapUserSearch(userSearchBase, userSearchFilter, context);
 
-// bind
-final BindAuthenticator authenticator = new 
BindAuthenticator(context);
+
+String rawAuthenticatorType = 
configurationContext.getProperty("Authenticator Type");
+AuthenticatorType authenticatorType;
+
+if (StringUtils.isBlank(rawAuthenticatorType))
--- End diff --

Similarly to above, braces are encouraged to be on the same line as control 
statements to reduce code height. You can download the checkstyle rules for 
your IDE and have them automatically applied. 


---


[GitHub] nifi pull request #2089: Nifi-ldap-iaa support for PasswordComparisonAuthent...

2017-10-06 Thread alopresto
Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2089#discussion_r143315219
  
--- Diff: 
nifi-nar-bundles/nifi-ldap-iaa-providers-bundle/nifi-ldap-iaa-providers/src/main/java/org/apache/nifi/ldap/LdapProvider.java
 ---
@@ -38,11 +38,10 @@
 import 
org.springframework.ldap.core.support.SimpleDirContextAuthenticationStrategy;
 import org.springframework.security.authentication.BadCredentialsException;
 import 
org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
+import 
org.springframework.security.authentication.encoding.LdapShaPasswordEncoder;
 import org.springframework.security.core.Authentication;
 import 
org.springframework.security.core.userdetails.UsernameNotFoundException;
-import 
org.springframework.security.ldap.authentication.AbstractLdapAuthenticationProvider;
-import org.springframework.security.ldap.authentication.BindAuthenticator;
-import 
org.springframework.security.ldap.authentication.LdapAuthenticationProvider;
+import org.springframework.security.ldap.authentication.*;
--- End diff --

I imagine this was your IDE collapsing these imports, but our [coding style 
guidelines](https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-CodeStyle)
 discourage the use of wildcard imports. 


---


[jira] [Commented] (NIFI-4392) Create graphite reporting task

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195456#comment-16195456
 ] 

ASF GitHub Bot commented on NIFI-4392:
--

Github user omerhadari commented on the issue:

https://github.com/apache/nifi/pull/2171
  
Done :)
I noticed that the ambari reporting task allows reporting of group-specific 
metrics, so I added that as an option to the reporting task as well.
Also I wasn't sure if you want me to squash the commits, let me know if you 
do.


> Create graphite reporting task
> --
>
> Key: NIFI-4392
> URL: https://issues.apache.org/jira/browse/NIFI-4392
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Omer Hadari
>Priority: Minor
>  Labels: features
>
> Create a reporting task for graphite, similar to that of datadog and ambari.






[jira] [Commented] (NIFI-4307) Add Kotlin support to ExecuteScript

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195428#comment-16195428
 ] 

ASF GitHub Bot commented on NIFI-4307:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2104
  
Your instinct about Compilable is spot-on; it's exactly where I stopped 
working on this branch :P For ExecuteScript, I'd thought such a processor would be 
for quick onTrigger() implementations, and due to multithreading issues with 
[NIFI-1822](https://issues.apache.org/jira/browse/NIFI-1822) (concurrent tasks) 
in various scripting engines, things might get dicey for Compilable engines. 
However, this would be an excellent enhancement for InvokeScriptedProcessor 
(ISP), with the caveat that the engine must also be Invocable. I haven't run 
through the list of engines so far (including Kotlin) to see what that would look 
like, but I have investigated it, and for the sake of generality I left it out 
of the current code. If we can improve ISP (or even ExecuteScript) by 
leveraging Compilable, I am totally on board!
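The compile-once, evaluate-many pattern under discussion can be sketched with the standard `javax.script` API. This is a hedged illustration, not NiFi code: `"nashorn"` is an assumption (it ships with JDK 8 through 14 only), and the fallback branch runs on JDKs without it.

```java
import javax.script.Compilable;
import javax.script.CompiledScript;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class CompileOnceSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical engine lookup; may return null on modern JDKs.
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        if (engine instanceof Compilable) {
            // Compile once, e.g. when the processor's script property is set...
            CompiledScript compiled = ((Compilable) engine).compile("1 + 2");
            // ...then evaluate the compiled form on every trigger, avoiding
            // a full re-parse/re-compile per FlowFile.
            for (int i = 0; i < 3; i++) {
                compiled.eval();
            }
            System.out.println("used compiled script");
        } else {
            System.out.println("no Compilable engine available; fall back to engine.eval()");
        }
    }
}
```

The fallback branch is the crux of the viability question raised above: not every `ScriptEngine` implements `Compilable`, so any refactor would need exactly this kind of runtime check.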


> Add Kotlin support to ExecuteScript
> ---
>
> Key: NIFI-4307
> URL: https://issues.apache.org/jira/browse/NIFI-4307
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> Kotlin has a ScriptEngine implementation as of v1.1. Add support for it in 
> NiFi.






[jira] [Commented] (NIFI-4346) Add a lookup service that uses HBase

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195393#comment-16195393
 ] 

ASF GitHub Bot commented on NIFI-4346:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2125
  
(Go ahead and commit, seems like a good approach to me)


> Add a lookup service that uses HBase
> 
>
> Key: NIFI-4346
> URL: https://issues.apache.org/jira/browse/NIFI-4346
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> A LookupService based on HBase should be able to handle at least two 
> scenarios:
> 1. Pull a single cell and return it as a string.
> 2. Pull multiple cells and return them as a Record that can be merged into 
> another record.






[jira] [Commented] (NIFI-4346) Add a lookup service that uses HBase

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195391#comment-16195391
 ] 

ASF GitHub Bot commented on NIFI-4346:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2125
  
+1 The string lookup one is going to be interesting.


> Add a lookup service that uses HBase
> 
>
> Key: NIFI-4346
> URL: https://issues.apache.org/jira/browse/NIFI-4346
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> A LookupService based on HBase should be able to handle at least two 
> scenarios:
> 1. Pull a single cell and return it as a string.
> 2. Pull multiple cells and return them as a Record that can be merged into 
> another record.






[jira] [Commented] (NIFI-4307) Add Kotlin support to ExecuteScript

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195375#comment-16195375
 ] 

ASF GitHub Bot commented on NIFI-4307:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2104
  
@mattyb149 I did a sample flow with each of ours and found the performance 
to be awful. I did some investigation and experimentation with the Kotlin 
ScriptEngine and found that after repeatedly running the same script over and 
over again, the performance seems to progressively degrade. At least that is 
what I am seeing.

I tried using the Compilable (I think) interface for it, which seems to 
actually run code through the Kotlin compiler and execute only the generated 
output. That ran like a champ (as one would hope), but that means we might have 
to gut a lot of ExecuteScript and such and start over with Compilable to make 
this work.

I am not sure if that is viable because I don't know if all of the supplied 
engines support that interface.


> Add Kotlin support to ExecuteScript
> ---
>
> Key: NIFI-4307
> URL: https://issues.apache.org/jira/browse/NIFI-4307
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> Kotlin has a ScriptEngine implementation as of v1.1. Add support for it in 
> NiFi.







[jira] [Commented] (NIFI-4371) Add support for query timeout in Hive processors

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195262#comment-16195262
 ] 

ASF GitHub Bot commented on NIFI-4371:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r143296927
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/AbstractHiveQLProcessor.java
 ---
@@ -66,6 +71,38 @@
 .addValidator(StandardValidators.CHARACTER_SET_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor QUERY_TIMEOUT = new 
PropertyDescriptor.Builder()
+.name("hive-query-timeout")
+.displayName("Query timeout")
+.description("Sets the number of seconds the driver will wait 
for a query to execute. "
++ "A value of 0 means no timeout. This feature is 
available starting with Hive 2.1.")
--- End diff --

How about we replace the "This feature is available..." sentence with 
"NOTE: Non-zero values may not be supported by the driver"
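The driver-support caveat can be handled defensively at runtime. Below is a hedged sketch (the `trySetTimeout` helper is hypothetical, and the `Proxy`-based statement merely simulates a pre-2.1 Hive driver that throws `SQLFeatureNotSupportedException` from `setQueryTimeout`):

```java
import java.lang.reflect.Proxy;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.sql.Statement;

public class QueryTimeoutSketch {
    // Applies a query timeout if the driver supports it; returns whether it took.
    static boolean trySetTimeout(Statement stmt, int seconds) {
        try {
            stmt.setQueryTimeout(seconds);
            return true;
        } catch (SQLException e) {
            // Drivers built against Hive 1.2.1 do not implement setQueryTimeout.
            return false;
        }
    }

    public static void main(String[] args) {
        // Stand-in Statement that behaves like a pre-2.1 Hive driver:
        // every call throws SQLFeatureNotSupportedException.
        Statement old = (Statement) Proxy.newProxyInstance(
                Statement.class.getClassLoader(),
                new Class<?>[]{Statement.class},
                (proxy, method, a) -> { throw new SQLFeatureNotSupportedException(); });
        System.out.println(trySetTimeout(old, 30)); // false
    }
}
```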


> Add support for query timeout in Hive processors
> 
>
> Key: NIFI-4371
> URL: https://issues.apache.org/jira/browse/NIFI-4371
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Attachments: Screen Shot 2017-09-09 at 4.31.21 PM.png, Screen Shot 
> 2017-09-09 at 6.38.51 PM.png, Screen Shot 2017-09-09 at 6.40.48 PM.png
>
>
> With HIVE-4924 it is possible to set a query timeout when executing a query 
> against Hive (starting with Hive 2.1). Right now, NiFi is built using Hive 
> 1.2.1 and this feature is not available by default (the method is not 
> implemented in the driver). However, if building NiFi with specific profiles 
> this feature can be used.
> The objective is to expose the query timeout parameter in the processor and 
> enable expression language. If the version of the driver is not implementing 
> the query timeout the processor will be in invalid state (unless expression 
> language is used, and in this case, the flow file will be routed to the 
> failure relationship).





[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195253#comment-16195253
 ] 

ASF GitHub Bot commented on NIFI-4325:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2113#discussion_r143295442
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/JsonQueryElasticsearch5.java
 ---
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpHost;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.UsernamePasswordCredentials;
+import org.apache.http.client.CredentialsProvider;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.entity.ContentType;
+import org.apache.http.impl.client.BasicCredentialsProvider;
+import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
+import org.apache.http.nio.entity.NStringEntity;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.elasticsearch.client.Response;
+import org.elasticsearch.client.RestClient;
+import org.elasticsearch.client.RestClientBuilder;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.URL;
+import java.security.KeyStore;
+import java.security.SecureRandom;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"elasticsearch", "elasticsearch 5", "fetch", "read", "get"})
+@CapabilityDescription("Retrieves a document from Elasticsearch using the 
specified connection properties and the "
++ "identifier of the document to retrieve. If the cluster has been 
configured for authorization and/or secure "
++ "transport (SSL/TLS), and the X-Pack plugin is available, secure 
connections can be made. This processor "
++ "supports Elasticsearch 5.x clusters.")
+public class JsonQueryElasticsearch5 extends AbstractProcessor {
+
+private RestClient client;
+
+public static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("All original flowfiles that don't cause an error 
to occur go to this relationship").build();
+
+public static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
 

[GitHub] nifi pull request #2113: NIFI-4325 Added new processor that uses the JSON DS...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2113#discussion_r143294482
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/JsonQueryElasticsearch5.java
 ---
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpHost;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.UsernamePasswordCredentials;
+import org.apache.http.client.CredentialsProvider;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.entity.ContentType;
+import org.apache.http.impl.client.BasicCredentialsProvider;
+import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
+import org.apache.http.nio.entity.NStringEntity;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.elasticsearch.client.Response;
+import org.elasticsearch.client.RestClient;
+import org.elasticsearch.client.RestClientBuilder;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.URL;
+import java.security.KeyStore;
+import java.security.SecureRandom;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"elasticsearch", "elasticsearch 5", "fetch", "read", "get"})
--- End diff --

The tags should probably include "query" instead of "fetch", as 
QueryElasticsearchHttp does. Not sure whether they should include "json" or not; I'm 
fine with that either way :)
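For reference, the suggested tag change could look like the standalone sketch below. A local stand-in `@Tags` annotation is declared so the snippet compiles without NiFi on the classpath (the real one lives in `org.apache.nifi.annotation.documentation`), and "json" is included only for illustration since the comment leaves that choice open.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Arrays;

// Local stand-in for NiFi's @Tags so this sketch compiles on its own.
@Retention(RetentionPolicy.RUNTIME)
@interface Tags {
    String[] value();
}

// "query" replaces "fetch", mirroring QueryElasticsearchHttp's tag set;
// including "json" is an assumption, since the review leaves it open.
@Tags({"elasticsearch", "elasticsearch 5", "query", "read", "get", "json"})
class JsonQueryElasticsearch5Sketch {
}

public class TagsSketch {
    public static void main(String[] args) {
        String[] tags = JsonQueryElasticsearch5Sketch.class
                .getAnnotation(Tags.class).value();
        System.out.println(Arrays.asList(tags).contains("query")); // true
        System.out.println(Arrays.asList(tags).contains("fetch")); // false
    }
}
```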


---


[GitHub] nifi pull request #2113: NIFI-4325 Added new processor that uses the JSON DS...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2113#discussion_r143295816
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/JsonQueryElasticsearch5.java
 ---
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpHost;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.UsernamePasswordCredentials;
+import org.apache.http.client.CredentialsProvider;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.entity.ContentType;
+import org.apache.http.impl.client.BasicCredentialsProvider;
+import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
+import org.apache.http.nio.entity.NStringEntity;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.elasticsearch.client.Response;
+import org.elasticsearch.client.RestClient;
+import org.elasticsearch.client.RestClientBuilder;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.URL;
+import java.security.KeyStore;
+import java.security.SecureRandom;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"elasticsearch", "elasticsearch 5", "fetch", "read", "get"})
+@CapabilityDescription("Retrieves a document from Elasticsearch using the 
specified connection properties and the "
++ "identifier of the document to retrieve. If the cluster has been 
configured for authorization and/or secure "
++ "transport (SSL/TLS), and the X-Pack plugin is available, secure 
connections can be made. This processor "
++ "supports Elasticsearch 5.x clusters.")
+public class JsonQueryElasticsearch5 extends AbstractProcessor {
+
+private RestClient client;
+
+public static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("All original flowfiles that don't cause an error 
to occur go to this relationship").build();
+
+public static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("All FlowFiles that cannot be read from 
Elasticsearch are routed to this relationship").build();
+
+public static final Relationship REL_HITS = new 
Relationship.Builder().name("hits")
+.descript

[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195254#comment-16195254
 ] 

ASF GitHub Bot commented on NIFI-4325:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2113#discussion_r143295816
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/JsonQueryElasticsearch5.java
 ---
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpHost;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.UsernamePasswordCredentials;
+import org.apache.http.client.CredentialsProvider;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.entity.ContentType;
+import org.apache.http.impl.client.BasicCredentialsProvider;
+import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
+import org.apache.http.nio.entity.NStringEntity;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.elasticsearch.client.Response;
+import org.elasticsearch.client.RestClient;
+import org.elasticsearch.client.RestClientBuilder;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.URL;
+import java.security.KeyStore;
+import java.security.SecureRandom;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"elasticsearch", "elasticsearch 5", "fetch", "read", "get"})
+@CapabilityDescription("Retrieves a document from Elasticsearch using the 
specified connection properties and the "
++ "identifier of the document to retrieve. If the cluster has been 
configured for authorization and/or secure "
++ "transport (SSL/TLS), and the X-Pack plugin is available, secure 
connections can be made. This processor "
++ "supports Elasticsearch 5.x clusters.")
+public class JsonQueryElasticsearch5 extends AbstractProcessor {
+
+private RestClient client;
+
+public static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("All original flowfiles that don't cause an error 
to occur go to this relationship").build();
+
+public static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
 

[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195252#comment-16195252
 ] 

ASF GitHub Bot commented on NIFI-4325:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2113#discussion_r143294482
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/JsonQueryElasticsearch5.java
 ---
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpHost;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.UsernamePasswordCredentials;
+import org.apache.http.client.CredentialsProvider;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.entity.ContentType;
+import org.apache.http.impl.client.BasicCredentialsProvider;
+import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
+import org.apache.http.nio.entity.NStringEntity;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.elasticsearch.client.Response;
+import org.elasticsearch.client.RestClient;
+import org.elasticsearch.client.RestClientBuilder;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.URL;
+import java.security.KeyStore;
+import java.security.SecureRandom;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"elasticsearch", "elasticsearch 5", "fetch", "read", "get"})
--- End diff --

The tags should probably include "query" instead of "fetch", as 
QueryElasticsearchHttp does. Not sure whether they should include "json" or not; I'm 
fine with that either way :)


> Create a new ElasticSearch processor that supports the JSON DSL
> ---
>
> Key: NIFI-4325
> URL: https://issues.apache.org/jira/browse/NIFI-4325
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> The existing ElasticSearch processors use the Lucene-style syntax for 
> querying, not the JSON DSL. A new processor is needed that can take a full 
> JSON query and execute it. It should also support aggregation queries in this 
> syntax. A user needs to be able to take a query as-is fro

[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195255#comment-16195255
 ] 

ASF GitHub Bot commented on NIFI-4325:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2113#discussion_r143294684
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/JsonQueryElasticsearch5.java
 ---
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpHost;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.UsernamePasswordCredentials;
+import org.apache.http.client.CredentialsProvider;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.entity.ContentType;
+import org.apache.http.impl.client.BasicCredentialsProvider;
+import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
+import org.apache.http.nio.entity.NStringEntity;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.elasticsearch.client.Response;
+import org.elasticsearch.client.RestClient;
+import org.elasticsearch.client.RestClientBuilder;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.URL;
+import java.security.KeyStore;
+import java.security.SecureRandom;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"elasticsearch", "elasticsearch 5", "fetch", "read", "get"})
+@CapabilityDescription("Retrieves a document from Elasticsearch using the 
specified connection properties and the "
++ "identifier of the document to retrieve. If the cluster has been 
configured for authorization and/or secure "
--- End diff --

This processor does more than retrieve a document by identifier; it executes 
the supplied query, right? Maybe elaborate on that, as well as on the various 
results and their relationships?
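To make the relationship semantics concrete, here is a plain-Java sketch (no NiFi dependencies) of the routing the review asks to have documented. The rules are an assumption inferred from the relationship descriptions in the quoted diff: the original flowfile goes to "success" unless an error occurs, hits go to "hits", and unreadable requests go to "failure".

```java
import java.util.ArrayList;
import java.util.List;

// Assumed routing rules, inferred from the quoted relationship
// descriptions; not the processor's actual implementation.
public class RelationshipSketch {
    static List<String> route(boolean queryFailed, int hitCount) {
        List<String> destinations = new ArrayList<>();
        if (queryFailed) {
            destinations.add("failure");  // flowfile that could not be queried
            return destinations;
        }
        destinations.add("success");      // original flowfile passes through
        for (int i = 0; i < hitCount; i++) {
            destinations.add("hits");     // one output flowfile per hit (assumed)
        }
        return destinations;
    }

    public static void main(String[] args) {
        System.out.println(route(false, 2)); // [success, hits, hits]
        System.out.println(route(true, 0));  // [failure]
    }
}
```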


> Create a new ElasticSearch processor that supports the JSON DSL
> ---
>
> Key: NIFI-4325
> URL: https://issues.apache.org/jira/browse/NIFI-4325
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> The existing ElasticSearch processors use t

[GitHub] nifi pull request #2113: NIFI-4325 Added new processor that uses the JSON DS...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2113#discussion_r143294684
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/JsonQueryElasticsearch5.java
 ---
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpHost;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.UsernamePasswordCredentials;
+import org.apache.http.client.CredentialsProvider;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.entity.ContentType;
+import org.apache.http.impl.client.BasicCredentialsProvider;
+import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
+import org.apache.http.nio.entity.NStringEntity;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.elasticsearch.client.Response;
+import org.elasticsearch.client.RestClient;
+import org.elasticsearch.client.RestClientBuilder;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.URL;
+import java.security.KeyStore;
+import java.security.SecureRandom;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"elasticsearch", "elasticsearch 5", "fetch", "read", "get"})
+@CapabilityDescription("Retrieves a document from Elasticsearch using the 
specified connection properties and the "
++ "identifier of the document to retrieve. If the cluster has been 
configured for authorization and/or secure "
--- End diff --

This processor does more than retrieve a document by identifier; it executes 
the supplied query, right? Maybe elaborate on that, as well as on the various 
results and their relationships?


---


[GitHub] nifi pull request #2113: NIFI-4325 Added new processor that uses the JSON DS...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2113#discussion_r143295442
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/JsonQueryElasticsearch5.java
 ---
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpHost;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.UsernamePasswordCredentials;
+import org.apache.http.client.CredentialsProvider;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.entity.ContentType;
+import org.apache.http.impl.client.BasicCredentialsProvider;
+import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
+import org.apache.http.nio.entity.NStringEntity;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.elasticsearch.client.Response;
+import org.elasticsearch.client.RestClient;
+import org.elasticsearch.client.RestClientBuilder;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.URL;
+import java.security.KeyStore;
+import java.security.SecureRandom;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"elasticsearch", "elasticsearch 5", "fetch", "read", "get"})
+@CapabilityDescription("Retrieves a document from Elasticsearch using the 
specified connection properties and the "
++ "identifier of the document to retrieve. If the cluster has been 
configured for authorization and/or secure "
++ "transport (SSL/TLS), and the X-Pack plugin is available, secure 
connections can be made. This processor "
++ "supports Elasticsearch 5.x clusters.")
+public class JsonQueryElasticsearch5 extends AbstractProcessor {
+
+private RestClient client;
+
+public static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("All original flowfiles that don't cause an error 
to occur go to this relationship").build();
+
+public static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("All FlowFiles that cannot be read from 
Elasticsearch are routed to this relationship").build();
+
+public static final Relationship REL_HITS = new 
Relationship.Builder().name("hits")
+.descript

[jira] [Commented] (NIFI-4392) Create graphite reporting task

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195248#comment-16195248
 ] 

ASF GitHub Bot commented on NIFI-4392:
--

Github user omerhadari commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2171#discussion_r143294672
  
--- Diff: 
nifi-nar-bundles/nifi-metrics-reporting-bundle/nifi-metrics-reporting-task/src/main/java/org/apache/nifi/metrics/reporting/reporter/service/GraphiteMetricReporterService.java
 ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.metrics.reporting.reporter.service;
+
[GitHub] nifi pull request #2171: NIFI-4392 - Add metric reporting task for Graphite

2017-10-06 Thread omerhadari
Github user omerhadari commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2171#discussion_r143294672
  
--- Diff: 
nifi-nar-bundles/nifi-metrics-reporting-bundle/nifi-metrics-reporting-task/src/main/java/org/apache/nifi/metrics/reporting/reporter/service/GraphiteMetricReporterService.java
 ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.metrics.reporting.reporter.service;
+
+import com.codahale.metrics.MetricRegistry;
+import com.codahale.metrics.ScheduledReporter;
+import com.codahale.metrics.graphite.Graphite;
+import com.codahale.metrics.graphite.GraphiteReporter;
+import com.codahale.metrics.graphite.GraphiteSender;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnDisabled;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.metrics.reporting.task.MetricsReportingTask;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import javax.net.SocketFactory;
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * A controller service that provides metric reporters for graphite, can 
be used by {@link MetricsReportingTask}.
+ *
+ * @author Omer Hadari
+ */
+@Tags({"metrics", "reporting", "graphite"})
+@CapabilityDescription("A controller service that provides metric 
reporters for graphite. " +
+"Used by MetricsReportingTask.")
+public class GraphiteMetricReporterService extends 
AbstractControllerService implements MetricReporterService {
+
+/** Points to the hostname of the graphite listener. */
+public static final PropertyDescriptor HOST = new 
PropertyDescriptor.Builder()
+.name("host")
+.displayName("host")
+.description("The hostname of the carbon listener")
+.required(true)
+.addValidator(StandardValidators.URI_VALIDATOR)
+.build();
+
+/** Points to the port on which the graphite server listens. */
+public static final PropertyDescriptor PORT = new 
PropertyDescriptor.Builder()
+.name("port")
+.displayName("port")
+.description("The port on which carbon listens")
+.required(true)
+.addValidator(StandardValidators.PORT_VALIDATOR)
+.build();
+
+/** Points to the charset name that the graphite server expects. */
+public static final PropertyDescriptor CHARSET = new 
PropertyDescriptor.Builder()
+.name("charset")
+.displayName("charset")
+.description("The charset used by the graphite server")
+.required(true)
+.defaultValue("UTF-8")
+.addValidator(StandardValidators.CHARACTER_SET_VALIDATOR)
+.build();
+
+/** List of property descriptors used by the service. */
+private static final List<PropertyDescriptor> properties;
+
+static {
+final List<PropertyDescriptor> props = new ArrayList<>();
+props.add(HOST);
+props.add(PORT);
+properties = Collections.unmodifiableList(props);
+}
+
+/** Graphite sender, a connection to the server. */
+private GraphiteSender graphiteSender;
+
+/**
+ * Create the {@link #graphiteSender} according to configuration.
+ *
+ * @param context used to access properties.
+ */
+@OnEnabled
+public void onEnabled(final ConfigurationContext context) {
+String host = context.getProperty(HOST).getValue();
+int port 

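For readers unfamiliar with the wire format behind this service: a `GraphiteSender` ultimately writes metrics in Graphite's plaintext protocol, one `path value timestamp` line per metric sent to the configured carbon host and port. A minimal stdlib sketch of that line format (the metric name below is illustrative, not from the PR):

```java
public class GraphitePlaintext {

    // One metric in Graphite's plaintext protocol:
    // "<metric.path> <value> <unix-timestamp-seconds>\n"
    static String format(String path, double value, long epochSeconds) {
        return path + " " + value + " " + epochSeconds + "\n";
    }

    public static void main(String[] args) {
        // What a sender would write to the carbon host/port configured above.
        System.out.print(format("nifi.jvm.heap.used", 512.0, 1507300000L));
    }
}
```

In practice the Dropwizard `GraphiteReporter` produced by this service handles the formatting; the sketch only shows what travels over the socket the service opens.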
[jira] [Commented] (NIFI-4371) Add support for query timeout in Hive processors

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195229#comment-16195229
 ] 

ASF GitHub Bot commented on NIFI-4371:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2138
  
The "unit tests" for TestSelectHiveQL use Derby as the database, only to 
test the functionality of getting the "HiveQL" statement to the database and 
parsing its results. In that vein, Derby supports setQueryTimeout 
([DERBY-31](https://issues.apache.org/jira/browse/DERBY-31)), so can we add a 
unit test that sets the value, to exercise that part of the code?


> Add support for query timeout in Hive processors
> 
>
> Key: NIFI-4371
> URL: https://issues.apache.org/jira/browse/NIFI-4371
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Attachments: Screen Shot 2017-09-09 at 4.31.21 PM.png, Screen Shot 
> 2017-09-09 at 6.38.51 PM.png, Screen Shot 2017-09-09 at 6.40.48 PM.png
>
>
> With HIVE-4924 it is possible to set a query timeout when executing a query 
> against Hive (starting with Hive 2.1). Right now, NiFi is built using Hive 
> 1.2.1 and this feature is not available by default (the method is not 
> implemented in the driver). However, when building NiFi with specific 
> profiles, this feature can be used.
> The objective is to expose the query timeout parameter in the processor and 
> enable expression language. If the driver version does not implement the 
> query timeout, the processor will be in an invalid state (unless expression 
> language is used; in that case, the flow file will be routed to the 
> failure relationship).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2138: NIFI-4371 - add support for query timeout in Hive processo...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2138
  
The "unit tests" for TestSelectHiveQL use Derby as the database, only to 
test the functionality of getting the "HiveQL" statement to the database and 
parsing its results. In that vein, Derby supports setQueryTimeout 
([DERBY-31](https://issues.apache.org/jira/browse/DERBY-31)), so can we add a 
unit test that sets the value, to exercise that part of the code?
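The plumbing under discussion reduces to calling `java.sql.Statement#setQueryTimeout` when the timeout property is set. A hedged sketch of that guard (the helper name `applyQueryTimeout` is made up for illustration), using a dynamic proxy in place of a real Derby or Hive statement so the behavior can be exercised without a database:

```java
import java.lang.reflect.Proxy;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.atomic.AtomicInteger;

public class QueryTimeoutSketch {

    // Apply an optional timeout (seconds) to a JDBC statement.
    // A null or non-positive value means "no timeout configured".
    static void applyQueryTimeout(Statement stmt, Integer seconds) {
        if (seconds != null && seconds > 0) {
            try {
                stmt.setQueryTimeout(seconds);
            } catch (SQLException e) {
                // A processor would route to failure here; rethrow for the sketch.
                throw new IllegalStateException("driver rejected query timeout", e);
            }
        }
    }

    // A recording Statement stub via a dynamic proxy, so the sketch is
    // runnable without a JDBC driver on the classpath.
    static Statement recordingStatement(AtomicInteger recorded) {
        return (Statement) Proxy.newProxyInstance(
                Statement.class.getClassLoader(),
                new Class<?>[]{Statement.class},
                (proxy, method, args) -> {
                    if ("setQueryTimeout".equals(method.getName())) {
                        recorded.set((Integer) args[0]);
                    }
                    return null;
                });
    }

    public static void main(String[] args) {
        AtomicInteger recorded = new AtomicInteger(-1);
        Statement stmt = recordingStatement(recorded);

        applyQueryTimeout(stmt, 30);   // configured: forwarded to the driver
        applyQueryTimeout(stmt, null); // not configured: driver left untouched
        System.out.println(recorded.get()); // 30
    }
}
```

A unit test against Derby, as suggested above, would exercise the same path end to end with a real driver instead of the proxy.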


---


[jira] [Commented] (NIFI-4246) OAuth 2 Authorization support - Client Credentials Grant

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195228#comment-16195228
 ] 

ASF GitHub Bot commented on NIFI-4246:
--

Github user jdye64 commented on the issue:

https://github.com/apache/nifi/pull/2085
  
@joewitt rebased and incremented bundle version from 1.4.0-SNAPSHOT to 
1.5.0-SNAPSHOT


> OAuth 2 Authorization support - Client Credentials Grant
> 
>
> Key: NIFI-4246
> URL: https://issues.apache.org/jira/browse/NIFI-4246
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>
> If you're interacting with REST endpoints on the web, chances are you are 
> going to run into an OAuth2-secured web service. The IETF (Internet 
> Engineering Task Force) defines four grant types by which OAuth2 
> authorization can occur. This JIRA is focused solely on the Client 
> Credentials Grant method defined at 
> https://tools.ietf.org/html/rfc6749#section-4.4
> This implementation should provide a ControllerService in which the end user 
> can configure the credentials for obtaining the authorization grant (access 
> token) from the resource owner. In turn, a new property will be added to the 
> InvokeHTTP processor (if it doesn't already exist from one of the other JIRA 
> efforts similar to this one) so that the processor can reference this 
> controller service to obtain the access token and insert the appropriate HTTP 
> header (Authorization: Bearer {access_token}). The InvokeHTTP processor can 
> then interact with the OAuth-protected resources without credentials having 
> to be configured on each InvokeHTTP processor, saving time and complexity.
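For reference, the token request in the Client Credentials Grant is a simple form POST with HTTP Basic authentication (RFC 6749, section 4.4.2). A sketch of assembling that request with only the JDK; the credentials and class name are placeholders, not the PR's API:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ClientCredentialsRequest {

    // HTTP Basic header from client id/secret, per RFC 6749 section 2.3.1.
    static String basicAuthHeader(String clientId, String clientSecret) {
        String raw = clientId + ":" + clientSecret;
        return "Basic " + Base64.getEncoder()
                .encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    // Form body for the token endpoint, per RFC 6749 section 4.4.2.
    static String tokenRequestBody(String scope) {
        String body = "grant_type=client_credentials";
        if (scope != null && !scope.isEmpty()) {
            body += "&scope=" + URLEncoder.encode(scope, StandardCharsets.UTF_8);
        }
        return body;
    }

    public static void main(String[] args) {
        // Placeholder credentials; the ControllerService would hold the real ones.
        System.out.println(basicAuthHeader("my-client", "my-secret"));
        System.out.println(tokenRequestBody("read write"));
    }
}
```

The token endpoint's JSON response carries the `access_token`, which the controller service would then surface so InvokeHTTP can send `Authorization: Bearer <token>`.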





[GitHub] nifi issue #2085: NIFI-4246 - Client Credentials Grant based OAuth2 Controll...

2017-10-06 Thread jdye64
Github user jdye64 commented on the issue:

https://github.com/apache/nifi/pull/2085
  
@joewitt rebased and incremented bundle version from 1.4.0-SNAPSHOT to 
1.5.0-SNAPSHOT


---


[GitHub] nifi pull request #2020: [NiFi-3973] Add PutKudu Processor for ingesting dat...

2017-10-06 Thread cammachusa
Github user cammachusa closed the pull request at:

https://github.com/apache/nifi/pull/2020


---


[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195219#comment-16195219
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143279372
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -175,6 +181,32 @@ public QueryDatabaseTable() {
 return propDescriptors;
 }
 
+@Override
+protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+final List<ValidationResult> results = new ArrayList<>(super.customValidate(validationContext));
+
+final String tableName = 
validationContext.getProperty(TABLE_NAME).getValue();
+final String sqlQuery = 
validationContext.getProperty(SQL_QUERY).getValue();
+
+if(!StringUtils.isEmpty(sqlQuery) && 
!StringUtils.isEmpty(tableName)){
+results.add(new ValidationResult.Builder()
+.valid(false)
+.subject("SQL Query")
+.explanation("SQL Query and Table Name can't both 
be specified at the same time.")
--- End diff --

This might need some elaboration, in case the user wouldn't necessarily 
understand why they can't both be specified (due to EL support in the Table 
Name property, e.g.)


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>  Labels: features
>
> The QueryDatabaseTable processor is able to observe a configured database 
> table for new rows and yield these into the flowfile. The model of an RDBMS, 
> however, is often (if not always) normalized, so you would need to join 
> various tables in order to "flatten" the data into useful events for a 
> processing pipeline as can be built with NiFi or various tools within the 
> Hadoop ecosystem.
> The request is to extend the processor to specify an arbitrary SQL query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of 
> rows returned per run, not just because of bandwidth issues from the NiFi 
> pipeline onwards, but mainly because huge databases may not be able to return 
> so many records within a reasonable time.





[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195220#comment-16195220
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143280289
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -175,6 +181,32 @@ public QueryDatabaseTable() {
 return propDescriptors;
 }
 
+@Override
+protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+final List<ValidationResult> results = new ArrayList<>(super.customValidate(validationContext));
+
+final String tableName = 
validationContext.getProperty(TABLE_NAME).getValue();
+final String sqlQuery = 
validationContext.getProperty(SQL_QUERY).getValue();
+
+if(!StringUtils.isEmpty(sqlQuery) && 
!StringUtils.isEmpty(tableName)){
+results.add(new ValidationResult.Builder()
+.valid(false)
+.subject("SQL Query")
--- End diff --

This should match either the "Arbitrary/Custom Query" or "Table Name" 
property, or both, I think.


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>  Labels: features
>
> The QueryDatabaseTable processor is able to observe a configured database 
> table for new rows and yield these into the flowfile. The model of an RDBMS, 
> however, is often (if not always) normalized, so you would need to join 
> various tables in order to "flatten" the data into useful events for a 
> processing pipeline as can be built with NiFi or various tools within the 
> Hadoop ecosystem.
> The request is to extend the processor to specify an arbitrary SQL query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of 
> rows returned per run, not just because of bandwidth issues from the NiFi 
> pipeline onwards, but mainly because huge databases may not be able to return 
> so many records within a reasonable time.





[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195218#comment-16195218
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143290546
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -175,6 +181,32 @@ public QueryDatabaseTable() {
 return propDescriptors;
 }
 
+@Override
+protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+final List<ValidationResult> results = new ArrayList<>(super.customValidate(validationContext));
+
+final String tableName = 
validationContext.getProperty(TABLE_NAME).getValue();
+final String sqlQuery = 
validationContext.getProperty(SQL_QUERY).getValue();
+
+if(!StringUtils.isEmpty(sqlQuery) && 
!StringUtils.isEmpty(tableName)){
+results.add(new ValidationResult.Builder()
+.valid(false)
+.subject("SQL Query")
+.explanation("SQL Query and Table Name can't both 
be specified at the same time.")
+.build());
+}
+
+if(!StringUtils.isEmpty(sqlQuery) && isDynamicMaxValues){
--- End diff --

Is this a hard requirement? Expression Language support (since incoming 
connections are not allowed for QueryDatabaseTable) is meant to support 
Variable Registry and environment variables, which could be used in a custom 
query if it also included such an EL statement?


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>  Labels: features
>
> The QueryDatabaseTable processor is able to observe a configured database 
> table for new rows and yield these into the flowfile. The model of an RDBMS, 
> however, is often (if not always) normalized, so you would need to join 
> various tables in order to "flatten" the data into useful events for a 
> processing pipeline as can be built with NiFi or various tools within the 
> Hadoop ecosystem.
> The request is to extend the processor to specify an arbitrary SQL query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of 
> rows returned per run, not just because of bandwidth issues from the NiFi 
> pipeline onwards, but mainly because huge databases may not be able to return 
> so many records within a reasonable time.





[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195216#comment-16195216
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143289913
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -77,7 +80,8 @@
 @InputRequirement(Requirement.INPUT_FORBIDDEN)
 @Tags({"sql", "select", "jdbc", "query", "database"})
 @SeeAlso({GenerateTableFetch.class, ExecuteSQL.class})
-@CapabilityDescription("Generates and executes a SQL select query to fetch 
all rows whose values in the specified Maximum Value column(s) are larger than 
the "
+@CapabilityDescription("Generates a SQL select query, or  uses a provided 
statement, and executes it to fetch all rows whose values in the specified "
--- End diff --

Nitpick, but an extra space between "or  uses"


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>  Labels: features
>
> The QueryDatabaseTable processor is able to observe a configured database 
> table for new rows and yield these into the flowfile. The model of an RDBMS, 
> however, is often (if not always) normalized, so you would need to join 
> various tables in order to "flatten" the data into useful events for a 
> processing pipeline as can be built with NiFi or various tools within the 
> Hadoop ecosystem.
> The request is to extend the processor to specify an arbitrary SQL query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of 
> rows returned per run, not just because of bandwidth issues from the NiFi 
> pipeline onwards, but mainly because huge databases may not be able to return 
> so many records within a reasonable time.





[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195215#comment-16195215
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143289820
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -231,7 +243,9 @@ public void setup(final ProcessContext context) {
 
 // Try to fill the columnTypeMap with the types of the desired 
max-value columns
 final DBCPService dbcpService = 
context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
-final String tableName = 
context.getProperty(TABLE_NAME).evaluateAttributeExpressions().getValue();
+final String propTableName = 
context.getProperty(TABLE_NAME).evaluateAttributeExpressions().getValue();
+final String tableName = 
org.apache.commons.lang3.StringUtils.isEmpty(propTableName) ? 
ARBITRARY_SQL_TABLE_NAME : propTableName;
--- End diff --

Why the fully-qualified StringUtils class? If we have both (NiFi and 
Commons Lang), can we get rid of one?


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>  Labels: features
>
> The QueryDatabaseTable processor is able to observe a configured database 
> table for new rows and yield these into the flowfile. The model of an RDBMS, 
> however, is often (if not always) normalized, so you would need to join 
> various tables in order to "flatten" the data into useful events for a 
> processing pipeline as can be built with NiFi or various tools within the 
> Hadoop ecosystem.
> The request is to extend the processor to specify an arbitrary SQL query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of 
> rows returned per run, not just because of bandwidth issues from the NiFi 
> pipeline onwards, but mainly because huge databases may not be able to return 
> so many records within a reasonable time.





[GitHub] nifi pull request #2162: NIFI-1706 Extend QueryDatabaseTable to support arbi...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143279372
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -175,6 +181,32 @@ public QueryDatabaseTable() {
 return propDescriptors;
 }
 
+@Override
+protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+final List<ValidationResult> results = new ArrayList<>(super.customValidate(validationContext));
+
+final String tableName = 
validationContext.getProperty(TABLE_NAME).getValue();
+final String sqlQuery = 
validationContext.getProperty(SQL_QUERY).getValue();
+
+if(!StringUtils.isEmpty(sqlQuery) && 
!StringUtils.isEmpty(tableName)){
+results.add(new ValidationResult.Builder()
+.valid(false)
+.subject("SQL Query")
+.explanation("SQL Query and Table Name can't both 
be specified at the same time.")
--- End diff --

This might need some elaboration, in case the user wouldn't necessarily 
understand why they can't both be specified (due to EL support in the Table 
Name property, e.g.)
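The mutual-exclusion check being reviewed reduces to: flag an error when both optional properties are set. Independent of NiFi's `ValidationResult` builder API, the core logic can be sketched as follows (the method name and message text are illustrative, not from the PR):

```java
import java.util.ArrayList;
import java.util.List;

public class MutuallyExclusiveProps {

    // Return validation errors for a pair of properties that must not
    // both be set at the same time (null/blank counts as "not set").
    static List<String> validateExclusive(String nameA, String valueA,
                                          String nameB, String valueB) {
        List<String> errors = new ArrayList<>();
        boolean aSet = valueA != null && !valueA.trim().isEmpty();
        boolean bSet = valueB != null && !valueB.trim().isEmpty();
        if (aSet && bSet) {
            errors.add(nameA + " and " + nameB
                    + " cannot both be specified at the same time");
        }
        return errors;
    }

    public static void main(String[] args) {
        // Both set: one error.
        System.out.println(validateExclusive("SQL Query", "SELECT 1", "Table Name", "users"));
        // Only one set: no errors.
        System.out.println(validateExclusive("SQL Query", null, "Table Name", "users"));
    }
}
```

In the actual processor, each error would become a `ValidationResult` with `valid(false)`, the subject set to the offending property, and an explanation covering the EL caveat raised above.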


---


[GitHub] nifi pull request #2162: NIFI-1706 Extend QueryDatabaseTable to support arbi...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143289820
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -231,7 +243,9 @@ public void setup(final ProcessContext context) {
 
 // Try to fill the columnTypeMap with the types of the desired 
max-value columns
 final DBCPService dbcpService = 
context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
-final String tableName = 
context.getProperty(TABLE_NAME).evaluateAttributeExpressions().getValue();
+final String propTableName = 
context.getProperty(TABLE_NAME).evaluateAttributeExpressions().getValue();
+final String tableName = 
org.apache.commons.lang3.StringUtils.isEmpty(propTableName) ? 
ARBITRARY_SQL_TABLE_NAME : propTableName;
--- End diff --

Why the fully-qualified StringUtils class? If we have both (NiFi and 
Commons Lang), can we get rid of one?


---


[GitHub] nifi pull request #2162: NIFI-1706 Extend QueryDatabaseTable to support arbi...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143290546
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -175,6 +181,32 @@ public QueryDatabaseTable() {
 return propDescriptors;
 }
 
+@Override
+protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+final List<ValidationResult> results = new ArrayList<>(super.customValidate(validationContext));
+
+final String tableName = 
validationContext.getProperty(TABLE_NAME).getValue();
+final String sqlQuery = 
validationContext.getProperty(SQL_QUERY).getValue();
+
+if(!StringUtils.isEmpty(sqlQuery) && 
!StringUtils.isEmpty(tableName)){
+results.add(new ValidationResult.Builder()
+.valid(false)
+.subject("SQL Query")
+.explanation("SQL Query and Table Name can't both 
be specified at the same time.")
+.build());
+}
+
+if(!StringUtils.isEmpty(sqlQuery) && isDynamicMaxValues){
--- End diff --

Is this a hard requirement? Expression Language support (since incoming 
connections are not allowed for QueryDatabaseTable) is meant to support 
Variable Registry and environment variables, which could be used in a custom 
query if it also included such an EL statement?


---


[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195217#comment-16195217
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143290669
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -190,8 +222,11 @@ public void onTrigger(final ProcessContext context, 
final ProcessSessionFactory
 
 final DBCPService dbcpService = 
context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
 final DatabaseAdapter dbAdapter = 
dbAdapters.get(context.getProperty(DB_TYPE).getValue());
-final String tableName = 
context.getProperty(TABLE_NAME).evaluateAttributeExpressions().getValue();
+
+final String propTableName = 
context.getProperty(TABLE_NAME).evaluateAttributeExpressions().getValue();
+final String tableName = StringUtils.isEmpty(propTableName) ? 
ARBITRARY_SQL_TABLE_NAME : propTableName;
 final String columnNames = 
context.getProperty(COLUMN_NAMES).evaluateAttributeExpressions().getValue();
+final String sqlQuery = context.getProperty(SQL_QUERY).getValue();
--- End diff --

Should this support Expression Language for Variable Registry support (see 
my comment above)?


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>  Labels: features
>
> The QueryDatabaseTable processor is able to observe a configured database 
> table for new rows and yield these into the flowfile. The model of an RDBMS, 
> however, is often (if not always) normalized, so you would need to join 
> various tables in order to "flatten" the data into useful events for a 
> processing pipeline as can be built with NiFi or various tools within the 
> Hadoop ecosystem.
> The request is to extend the processor to specify an arbitrary SQL query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of 
> rows returned per run, not just because of bandwidth issues from the NiFi 
> pipeline onwards, but mainly because huge databases may not be able to return 
> so many records within a reasonable time.





[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195214#comment-16195214
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143279020
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -155,10 +155,22 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor SQL_QUERY = new 
PropertyDescriptor.Builder()
+.name("db-fetch-sql-query")
+.displayName("Arbitrary Query")
--- End diff --

Possibly call this Custom Query? Arbitrary is... well arbitrary :P and 
might cause some confusion for the user. Plus the description refers to it as a 
custom query


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>  Labels: features
>
> The QueryDatabaseTable processor is able to observe a configured database 
> table for new rows and yield these into the flowfile. The model of an RDBMS, 
> however, is often (if not always) normalized, so you would need to join 
> various tables in order to "flatten" the data into useful events for a 
> processing pipeline as can be built with NiFi or various tools within the 
> Hadoop ecosystem.
> The request is to extend the processor to specify an arbitrary SQL query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of 
> rows returned per run, not just because of bandwidth issues from the NiFi 
> pipeline onwards, but mainly because huge databases may not be able to return 
> so many records within a reasonable time.





[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195213#comment-16195213
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143278853
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -155,10 +155,22 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor SQL_QUERY = new 
PropertyDescriptor.Builder()
+.name("db-fetch-sql-query")
+.displayName("Arbitrary Query")
+.description("A custom SQL query used to retrieve data. 
Instead of building a SQL query from "
++ "other properties, this query will be used. Query 
must have no WHERE or ORDER BY statements. "
++ "If a WHERE clause is needed use a sub-query or the 
'Additional WHERE clause' property.")
--- End diff --

Do we need more doc here around max-value columns? If they specify a 
max-value column in the other property, and it is not available in this query, 
then the getSelectStatement() below doesn't seem like it would work as expected


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>  Labels: features
>
> The QueryDatabaseTable is able to observe a configured database table for new 
> rows and yield these into the flowfile. The model of an rdbms however is 
> often (if not always) normalized so you would need to join various tables in 
> order to "flatten" the data into useful events for a processing pipeline as 
> can be built with NiFi or various tools within the Hadoop ecosystem.
> The request is to extend the processor to specify an arbitrary sql query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of 
> rows returned per run. Not just because of bandwidth issues from the NiFi 
> pipeline onwards but mainly because huge databases may not be able to return 
> so many records within a reasonable time.





[GitHub] nifi pull request #2162: NIFI-1706 Extend QueryDatabaseTable to support arbi...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143279020
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -155,10 +155,22 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor SQL_QUERY = new 
PropertyDescriptor.Builder()
+.name("db-fetch-sql-query")
+.displayName("Arbitrary Query")
--- End diff --

Possibly call this Custom Query? Arbitrary is... well arbitrary :P and 
might cause some confusion for the user. Plus the description refers to it as a 
custom query


---


[GitHub] nifi pull request #2162: NIFI-1706 Extend QueryDatabaseTable to support arbi...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143290669
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -190,8 +222,11 @@ public void onTrigger(final ProcessContext context, 
final ProcessSessionFactory
 
 final DBCPService dbcpService = 
context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
 final DatabaseAdapter dbAdapter = 
dbAdapters.get(context.getProperty(DB_TYPE).getValue());
-final String tableName = 
context.getProperty(TABLE_NAME).evaluateAttributeExpressions().getValue();
+
+final String propTableName = 
context.getProperty(TABLE_NAME).evaluateAttributeExpressions().getValue();
+final String tableName = StringUtils.isEmpty(propTableName) ? 
ARBITRARY_SQL_TABLE_NAME : propTableName;
 final String columnNames = 
context.getProperty(COLUMN_NAMES).evaluateAttributeExpressions().getValue();
+final String sqlQuery = context.getProperty(SQL_QUERY).getValue();
--- End diff --

Should this support Expression Language for Variable Registry support (see 
my comment above)?


---
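The Expression Language question above is about letting the query be resolved from the Variable Registry before use, i.e. calling `evaluateAttributeExpressions()` before `getValue()`. As a rough illustration of what that substitution step does, here is a minimal plain-Java sketch of `${var}` replacement from a registry map; this is a simplified stand-in, not NiFi's actual Expression Language engine, and the class and method names are hypothetical:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SimpleElDemo {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replace every ${name} with its value from the registry; unknown names become "".
    static String evaluate(String expression, Map<String, String> registry) {
        Matcher m = VAR.matcher(expression);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            m.appendReplacement(out,
                    Matcher.quoteReplacement(registry.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> registry = Map.of("table", "orders");
        // prints: SELECT * FROM orders
        System.out.println(evaluate("SELECT * FROM ${table}", registry));
    }
}
```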


[GitHub] nifi pull request #2162: NIFI-1706 Extend QueryDatabaseTable to support arbi...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143280289
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -175,6 +181,32 @@ public QueryDatabaseTable() {
 return propDescriptors;
 }
 
+@Override
+protected Collection 
customValidate(ValidationContext validationContext) {
+final List results = new 
ArrayList<>(super.customValidate(validationContext));
+
+final String tableName = 
validationContext.getProperty(TABLE_NAME).getValue();
+final String sqlQuery = 
validationContext.getProperty(SQL_QUERY).getValue();
+
+if(!StringUtils.isEmpty(sqlQuery) && 
!StringUtils.isEmpty(tableName)){
+results.add(new ValidationResult.Builder()
+.valid(false)
+.subject("SQL Query")
--- End diff --

This should match either the "Arbitrary/Custom Query" or "Table Name" 
property or both I think


---
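The review point above is that the `ValidationResult` subject should name the property the user actually sees in the UI. A minimal plain-Java sketch of the mutual-exclusion check between a "Custom Query" and "Table Name" property; this is a simplified stand-in for NiFi's `ValidationContext`/`ValidationResult` API, and the names are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

public class MutualExclusionCheck {
    // Tiny stand-in for NiFi's ValidationResult.
    record ValidationResult(boolean valid, String subject, String explanation) {}

    static List<ValidationResult> validate(String tableName, String sqlQuery) {
        List<ValidationResult> results = new ArrayList<>();
        boolean hasTable = tableName != null && !tableName.isEmpty();
        boolean hasQuery = sqlQuery != null && !sqlQuery.isEmpty();
        if (hasTable && hasQuery) {
            // The subject names the user-visible property, per the review comment.
            results.add(new ValidationResult(false, "Custom Query",
                    "Only one of 'Custom Query' and 'Table Name' may be set"));
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(validate("orders", "SELECT 1"));
    }
}
```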


[GitHub] nifi pull request #2162: NIFI-1706 Extend QueryDatabaseTable to support arbi...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143278853
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -155,10 +155,22 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor SQL_QUERY = new 
PropertyDescriptor.Builder()
+.name("db-fetch-sql-query")
+.displayName("Arbitrary Query")
+.description("A custom SQL query used to retrieve data. 
Instead of building a SQL query from "
++ "other properties, this query will be used. Query 
must have no WHERE or ORDER BY statements. "
++ "If a WHERE clause is needed use a sub-query or the 
'Additional WHERE clause' property.")
--- End diff --

Do we need more doc here around max-value columns? If they specify a 
max-value column in the other property, and it is not available in this query, 
then the getSelectStatement() below doesn't seem like it would work as expected


---


[GitHub] nifi pull request #2162: NIFI-1706 Extend QueryDatabaseTable to support arbi...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r143289913
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -77,7 +80,8 @@
 @InputRequirement(Requirement.INPUT_FORBIDDEN)
 @Tags({"sql", "select", "jdbc", "query", "database"})
 @SeeAlso({GenerateTableFetch.class, ExecuteSQL.class})
-@CapabilityDescription("Generates and executes a SQL select query to fetch 
all rows whose values in the specified Maximum Value column(s) are larger than 
the "
+@CapabilityDescription("Generates a SQL select query, or  uses a provided 
statement, and executes it to fetch all rows whose values in the specified "
--- End diff --

Nitpick, but an extra space between "or  uses"


---


[jira] [Created] (NIFI-4472) add alerts when a node is disconnected in my NIFI cluster

2017-10-06 Thread Haimo Liu (JIRA)
Haimo Liu created NIFI-4472:
---

 Summary: add alerts when a node is disconnected in my NIFI cluster
 Key: NIFI-4472
 URL: https://issues.apache.org/jira/browse/NIFI-4472
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Core Framework
Reporter: Haimo Liu


When a NiFi node is disconnected from my cluster, it would be nice to get 
timely alerts/notifications.





[jira] [Commented] (NIFI-4256) Add support for all AWS S3 Encryption Options

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195193#comment-16195193
 ] 

ASF GitHub Bot commented on NIFI-4256:
--

Github user baank commented on the issue:

https://github.com/apache/nifi/pull/2066
  
Sure, not a problem. I have already implemented his changes but just need to 
get a new PR approved; also, there was a comment in JIRA regarding whether to 
include the client-side S3 work at all.


> Add support for all AWS S3 Encryption Options
> -
>
> Key: NIFI-4256
> URL: https://issues.apache.org/jira/browse/NIFI-4256
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Franco
>  Labels: aws, aws-s3, security
>
> NiFi currently only supports SSE-S3 encryption (AES256).
> Support needs to be added for:
> * SSE-S3
> * SSE-KMS
> * SSE-C
> * CSE-KMS CMK
> * CSE-Master Key
> With all of the appropriate configuration options and such that SSE is 
> available only for PutS3Object whilst CSE is available also for FetchS3Object.
> Given that this will add another 20 or so UI properties the intention is to 
> split it into a Client Side Encryption Service and Server Side Encryption 
> Service. This will allow users to reuse "encryption" across different 
> workflows.
> Existing flows using the Server Side Encryption option will still work as is 
> but will be overridden if a service is added.





[GitHub] nifi issue #2066: NIFI-4256 - Add support for all AWS S3 Encryption Options

2017-10-06 Thread baank
Github user baank commented on the issue:

https://github.com/apache/nifi/pull/2066
  
Sure, not a problem. I have already implemented his changes but just need to 
get a new PR approved; also, there was a comment in JIRA regarding whether to 
include the client-side S3 work at all.


---


[GitHub] nifi issue #2020: [NiFi-3973] Add PutKudu Processor for ingesting data to Ku...

2017-10-06 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2020
  
@cammach @cammachusa please close this PR since it has been merged.


---


[jira] [Commented] (NIFI-3518) Create a Morphlines processor

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195190#comment-16195190
 ] 

ASF GitHub Bot commented on NIFI-3518:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2028
  
Is there anyone familiar with morphlines that can help test/review?


> Create a Morphlines processor
> -
>
> Key: NIFI-3518
> URL: https://issues.apache.org/jira/browse/NIFI-3518
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: William Nouet
>Priority: Minor
>
> Create a dedicate processor to run Morphlines transformations 
> (http://kitesdk.org/docs/1.1.0/morphlines/morphlines-reference-guide.html) 





[jira] [Created] (NIFI-4471) Set flow limits at process group level

2017-10-06 Thread Haimo Liu (JIRA)
Haimo Liu created NIFI-4471:
---

 Summary: Set flow limits at process group level
 Key: NIFI-4471
 URL: https://issues.apache.org/jira/browse/NIFI-4471
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Core Framework
Reporter: Haimo Liu


In a multi-tenancy type of operational environment, as a NiFi admin user, I 
want to be able to set some limits at the Process Group level, to prevent my 
NiFi server from being overloaded. For example:

1. "No connection's limit may be set higher than xxx MB."
2. "No more than xxx FlowFiles may be queued in any connection."





[GitHub] nifi issue #2028: NIFI-3518 Create a Morphlines processor

2017-10-06 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2028
  
Is there anyone familiar with morphlines that can help test/review?


---


[jira] [Commented] (NIFI-4256) Add support for all AWS S3 Encryption Options

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195186#comment-16195186
 ] 

ASF GitHub Bot commented on NIFI-4256:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2066
  
@baank will you be in a position to help work with @jvwing on 
review/contrib cycles?


> Add support for all AWS S3 Encryption Options
> -
>
> Key: NIFI-4256
> URL: https://issues.apache.org/jira/browse/NIFI-4256
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Franco
>  Labels: aws, aws-s3, security
>
> NiFi currently only supports SSE-S3 encryption (AES256).
> Support needs to be added for:
> * SSE-S3
> * SSE-KMS
> * SSE-C
> * CSE-KMS CMK
> * CSE-Master Key
> With all of the appropriate configuration options and such that SSE is 
> available only for PutS3Object whilst CSE is available also for FetchS3Object.
> Given that this will add another 20 or so UI properties the intention is to 
> split it into a Client Side Encryption Service and Server Side Encryption 
> Service. This will allow users to reuse "encryption" across different 
> workflows.
> Existing flows using the Server Side Encryption option will still work as is 
> but will be overridden if a service is added.





[GitHub] nifi issue #2066: NIFI-4256 - Add support for all AWS S3 Encryption Options

2017-10-06 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2066
  
@baank will you be in a position to help work with @jvwing on 
review/contrib cycles?


---


[GitHub] nifi issue #2089: Nifi-ldap-iaa support for PasswordComparisonAuthenticator

2017-10-06 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2089
  
Will review. 


---


[jira] [Commented] (NIFI-4246) OAuth 2 Authorization support - Client Credentials Grant

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195182#comment-16195182
 ] 

ASF GitHub Bot commented on NIFI-4246:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2085
  
could you rebase please @jdye64 



> OAuth 2 Authorization support - Client Credentials Grant
> 
>
> Key: NIFI-4246
> URL: https://issues.apache.org/jira/browse/NIFI-4246
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>
> If you're interacting with REST endpoints on the web, chances are you will 
> run into an OAuth2-secured web service. The IETF (Internet Engineering Task 
> Force) defines 4 methods in which OAuth2 authorization can occur. This JIRA 
> is focused solely on the Client Credentials Grant method defined at 
> https://tools.ietf.org/html/rfc6749#section-4.4
> This implementation should provide a ControllerService in which the end user 
> can configure the credentials for obtaining the authorization grant (access 
> token) from the resource owner. In turn a new property will be added to the 
> InvokeHTTP processor (if it doesn't already exist from one of the other JIRA 
> efforts similar to this one) where the processor can reference this 
> controller service to obtain the access token and insert the appropriate HTTP 
> header (Authorization: Bearer {access_token}) so that the InvokeHTTP processor 
> can interact with the OAuth protected resources without having to worry about 
> setting up the credentials for each InvokeHTTP processor saving time and 
> complexity.
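The flow described above (client-credentials grant, then an `Authorization: Bearer <token>` header) comes down to two strings the controller service has to build. A hedged plain-Java sketch of just that string construction; the client id and secret values are hypothetical, and a real implementation would additionally POST the body to the token endpoint over HTTPS and parse the access token out of the JSON response:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ClientCredentialsDemo {
    // Build the application/x-www-form-urlencoded body for a client_credentials grant
    // (RFC 6749 section 4.4).
    static String tokenRequestBody(String clientId, String clientSecret) {
        return "grant_type=client_credentials"
                + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&client_secret=" + URLEncoder.encode(clientSecret, StandardCharsets.UTF_8);
    }

    // Header value InvokeHTTP would send once the access token has been obtained.
    static String bearerHeader(String accessToken) {
        return "Bearer " + accessToken;
    }

    public static void main(String[] args) {
        System.out.println(tokenRequestBody("my-client", "s3cret&"));
        System.out.println(bearerHeader("abc123"));
    }
}
```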





[GitHub] nifi issue #2085: NIFI-4246 - Client Credentials Grant based OAuth2 Controll...

2017-10-06 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2085
  
could you rebase please @jdye64 



---


[jira] [Commented] (NIFI-4307) Add Kotlin support to ExecuteScript

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195180#comment-16195180
 ] 

ASF GitHub Bot commented on NIFI-4307:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2104
  
@MikeThomsen We've talked a bit on the mailing lists, what do you want to 
do here? I want to make sure your contribution is recognized, as those changes 
were an integral part of my own branch, except I hadn't cherry-picked and tried 
to overlay my commits. If (as you mentioned) you'd like to start from the 
branch I had, would you like to test/add/change/delete from there and then push 
a new branch/PR? Would be happy to review and merge (and yes of course you 
could alter whatever you want from anything I had!)


> Add Kotlin support to ExecuteScript
> ---
>
> Key: NIFI-4307
> URL: https://issues.apache.org/jira/browse/NIFI-4307
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> Kotlin has a ScriptEngine implementation as of v1.1. Add support for it in 
> NiFi.





[GitHub] nifi issue #2104: NIFI-4307 Added Kotlin 1.1.X support to ExecuteScript.

2017-10-06 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2104
  
@MikeThomsen We've talked a bit on the mailing lists, what do you want to 
do here? I want to make sure your contribution is recognized, as those changes 
were an integral part of my own branch, except I hadn't cherry-picked and tried 
to overlay my commits. If (as you mentioned) you'd like to start from the 
branch I had, would you like to test/add/change/delete from there and then push 
a new branch/PR? Would be happy to review and merge (and yes of course you 
could alter whatever you want from anything I had!)


---


[jira] [Commented] (NIFI-4242) CSVReader shouldn't require that an escape character be defined

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195179#comment-16195179
 ] 

ASF GitHub Bot commented on NIFI-4242:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2088
  
@Wesley-Lawrence can you please rebase to resolve the conflicts?


> CSVReader shouldn't require that an escape character be defined
> ---
>
> Key: NIFI-4242
> URL: https://issues.apache.org/jira/browse/NIFI-4242
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Wesley L Lawrence
>Priority: Minor
> Attachments: NIFI-4242.patch, NIFI-4242.patch
>
>
> There are situations where, when parsing a CSV file, one doesn't want to 
> define an escape character. For example, when using quote character ", the 
> following is valid CSV;
> {code}
> a,"""b",c
> {code}
> The second column should be interpreted as "b. But when Apache Commons CSV is 
> told that there's an escape character, the above row is invalid 
> (interestingly, if it was """b""", it would be valid as "b"). 
> There are known formats that Apache Commons CSV provides, that doesn't define 
> escape characters either.
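The report above relies on the RFC 4180 rule that, inside a quoted field, a doubled quote encodes one literal quote, so the raw field `"""b"` is the value `"b` with no escape character involved at all. A plain-Java sketch of unescaping one quoted field under that rule; this is illustrative only, not Apache Commons CSV's implementation:

```java
public class QuotedFieldDemo {
    // Given the raw content of a quoted CSV field (without the surrounding quotes),
    // collapse each doubled quote into a single literal quote (RFC 4180 rule).
    static String unescapeQuotedField(String raw) {
        return raw.replace("\"\"", "\"");
    }

    public static void main(String[] args) {
        // The second column of a,"""b",c has raw content ""b, i.e. the value "b.
        System.out.println(unescapeQuotedField("\"\"b"));
    }
}
```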





[GitHub] nifi issue #2088: NIFI-4242 Allow quote and escape chars for CSV to be 'unde...

2017-10-06 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2088
  
@Wesley-Lawrence can you please rebase to resolve the conflicts?


---


[jira] [Updated] (NIFI-4301) ExecuteScript Processor executing Python Script fails at os.getpid()

2017-10-06 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4301:
--
Fix Version/s: 1.5.0

> ExecuteScript Processor executing Python Script fails at os.getpid()
> 
>
> Key: NIFI-4301
> URL: https://issues.apache.org/jira/browse/NIFI-4301
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Will Lieu
>Assignee: Pierre Villard
> Fix For: 1.5.0
>
>
> Currently NiFi Version 1.3.0 uses Jython-Shaded-2.7.0 which contains a bug of 
> the os.getpid() method not being implemented. Is there any way you guys can 
> rev this jar to use 2.7.1? 
> See: [Jython Issue 2405|http://bugs.jython.org/issue2405]





[jira] [Commented] (NIFI-4301) ExecuteScript Processor executing Python Script fails at os.getpid()

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195175#comment-16195175
 ] 

ASF GitHub Bot commented on NIFI-4301:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2095


> ExecuteScript Processor executing Python Script fails at os.getpid()
> 
>
> Key: NIFI-4301
> URL: https://issues.apache.org/jira/browse/NIFI-4301
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Will Lieu
>Assignee: Pierre Villard
> Fix For: 1.5.0
>
>
> Currently NiFi Version 1.3.0 uses Jython-Shaded-2.7.0 which contains a bug of 
> the os.getpid() method not being implemented. Is there any way you guys can 
> rev this jar to use 2.7.1? 
> See: [Jython Issue 2405|http://bugs.jython.org/issue2405]





[jira] [Updated] (NIFI-4301) ExecuteScript Processor executing Python Script fails at os.getpid()

2017-10-06 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4301:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ExecuteScript Processor executing Python Script fails at os.getpid()
> 
>
> Key: NIFI-4301
> URL: https://issues.apache.org/jira/browse/NIFI-4301
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Will Lieu
>Assignee: Pierre Villard
> Fix For: 1.5.0
>
>
> Currently NiFi Version 1.3.0 uses Jython-Shaded-2.7.0 which contains a bug of 
> the os.getpid() method not being implemented. Is there any way you guys can 
> rev this jar to use 2.7.1? 
> See: [Jython Issue 2405|http://bugs.jython.org/issue2405]





[GitHub] nifi pull request #2095: NIFI-4301 - bumped jython-shaded version to 2.7.1

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2095


---


[jira] [Commented] (NIFI-4301) ExecuteScript Processor executing Python Script fails at os.getpid()

2017-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195174#comment-16195174
 ] 

ASF subversion and git services commented on NIFI-4301:
---

Commit 39c5c5ab42e2d376741d388a43db03c11eb7ac00 in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=39c5c5a ]

NIFI-4301 - This closes #2095. bumped jython-shaded version to 2.7.1

Signed-off-by: joewitt 


> ExecuteScript Processor executing Python Script fails at os.getpid()
> 
>
> Key: NIFI-4301
> URL: https://issues.apache.org/jira/browse/NIFI-4301
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Will Lieu
>Assignee: Pierre Villard
>
> Currently NiFi Version 1.3.0 uses Jython-Shaded-2.7.0 which contains a bug of 
> the os.getpid() method not being implemented. Is there any way you guys can 
> rev this jar to use 2.7.1? 
> See: [Jython Issue 2405|http://bugs.jython.org/issue2405]





[jira] [Commented] (NIFI-4371) Add support for query timeout in Hive processors

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195169#comment-16195169
 ] 

ASF GitHub Bot commented on NIFI-4371:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r143281834
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/AbstractHiveQLProcessor.java
 ---
@@ -66,6 +71,38 @@
 .addValidator(StandardValidators.CHARACTER_SET_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor QUERY_TIMEOUT = new 
PropertyDescriptor.Builder()
+.name("hive-query-timeout")
+.displayName("Query timeout")
+.description("Sets the number of seconds the driver will wait 
for a query to execute. "
++ "A value of 0 means no timeout. This feature is 
available starting with Hive 2.1.")
--- End diff --

The part about the feature availability is nice if/when there's a choice, 
but for now it's not technically germane, since the Hive NAR ships with a 
particular version of Hive. I will remove that part and merge, the rest LGTM :)


> Add support for query timeout in Hive processors
> 
>
> Key: NIFI-4371
> URL: https://issues.apache.org/jira/browse/NIFI-4371
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Attachments: Screen Shot 2017-09-09 at 4.31.21 PM.png, Screen Shot 
> 2017-09-09 at 6.38.51 PM.png, Screen Shot 2017-09-09 at 6.40.48 PM.png
>
>
> With HIVE-4924 it is possible to set a query timeout when executing a query 
> against Hive (starting with Hive 2.1). Right now, NiFi is built using Hive 
> 1.2.1 and this feature is not available by default (the method is not 
> implemented in the driver). However, if building NiFi with specific profiles 
> this feature can be used.
> The objective is to expose the query timeout parameter in the processor and 
> enable expression language. If the version of the driver is not implementing 
> the query timeout the processor will be in invalid state (unless expression 
> language is used, and in this case, the flow file will be routed to the 
> failure relationship).





[GitHub] nifi pull request #2138: NIFI-4371 - add support for query timeout in Hive p...

2017-10-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r143281834
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/AbstractHiveQLProcessor.java
 ---
@@ -66,6 +71,38 @@
 .addValidator(StandardValidators.CHARACTER_SET_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor QUERY_TIMEOUT = new 
PropertyDescriptor.Builder()
+.name("hive-query-timeout")
+.displayName("Query timeout")
+.description("Sets the number of seconds the driver will wait 
for a query to execute. "
++ "A value of 0 means no timeout. This feature is 
available starting with Hive 2.1.")
--- End diff --

The part about the feature availability is nice if/when there's a choice, 
but for now it's not technically germane, since the Hive NAR ships with a 
particular version of Hive. I will remove that part and merge, the rest LGTM :)


---
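The guard under discussion (only call `setQueryTimeout` when a non-zero value was configured) can be exercised without a real Hive connection by wrapping `java.sql.Statement` in a dynamic proxy that records the calls it receives. A hedged sketch; the recording proxy is purely illustrative and not part of the NiFi code under review:

```java
import java.lang.reflect.Proxy;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class QueryTimeoutDemo {
    // Apply the configured timeout only when it is positive; a driver that does not
    // implement setQueryTimeout (e.g. an older Hive JDBC driver) is then never touched.
    static void applyTimeout(Statement st, int queryTimeout) throws SQLException {
        if (queryTimeout > 0) {
            st.setQueryTimeout(queryTimeout);
        }
    }

    // Recording proxy: notes every method invoked on the Statement.
    static Statement recordingStatement(List<String> calls) {
        return (Statement) Proxy.newProxyInstance(
                Statement.class.getClassLoader(),
                new Class<?>[]{Statement.class},
                (proxy, method, args) -> {
                    calls.add(method.getName());
                    Class<?> rt = method.getReturnType();
                    if (rt == boolean.class) return false;
                    if (rt == int.class) return 0;
                    if (rt == long.class) return 0L;
                    return null;
                });
    }

    // Run applyTimeout against a recording Statement and return the calls made.
    static List<String> demoCalls(int timeout) {
        List<String> calls = new ArrayList<>();
        try {
            applyTimeout(recordingStatement(calls), timeout);
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(demoCalls(0));   // prints: [] — the guard skips the driver call
        System.out.println(demoCalls(30));  // prints: [setQueryTimeout]
    }
}
```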


[jira] [Commented] (NIFI-4371) Add support for query timeout in Hive processors

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195166#comment-16195166
 ] 

ASF GitHub Bot commented on NIFI-4371:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r143281431
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/AbstractHiveQLProcessor.java
 ---
@@ -66,6 +71,38 @@
 .addValidator(StandardValidators.CHARACTER_SET_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor QUERY_TIMEOUT = new 
PropertyDescriptor.Builder()
+.name("hive-query-timeout")
+.displayName("Query timeout")
+.description("Sets the number of seconds the driver will wait 
for a query to execute. "
++ "A value of 0 means no timeout. This feature is 
available starting with Hive 2.1.")
--- End diff --

I'd remove the "This feature is available starting with Hive 2.1." part. 
Otherwise this all LGTM


> Add support for query timeout in Hive processors
> 
>
> Key: NIFI-4371
> URL: https://issues.apache.org/jira/browse/NIFI-4371
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Attachments: Screen Shot 2017-09-09 at 4.31.21 PM.png, Screen Shot 
> 2017-09-09 at 6.38.51 PM.png, Screen Shot 2017-09-09 at 6.40.48 PM.png
>
>
> With HIVE-4924 it is possible to set a query timeout when executing a query 
> against Hive (starting with Hive 2.1). Right now, NiFi is built using Hive 
> 1.2.1 and this feature is not available by default (the method is not 
> implemented in the driver). However, if building NiFi with specific profiles 
> this feature can be used.
> The objective is to expose the query timeout parameter in the processor and 
> enable expression language. If the version of the driver is not implementing 
> the query timeout the processor will be in invalid state (unless expression 
> language is used, and in this case, the flow file will be routed to the 
> failure relationship).





[GitHub] nifi pull request #2138: NIFI-4371 - add support for query timeout in Hive p...

2017-10-06 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r143281431
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/AbstractHiveQLProcessor.java
 ---
@@ -66,6 +71,38 @@
 .addValidator(StandardValidators.CHARACTER_SET_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor QUERY_TIMEOUT = new 
PropertyDescriptor.Builder()
+.name("hive-query-timeout")
+.displayName("Query timeout")
+.description("Sets the number of seconds the driver will wait 
for a query to execute. "
++ "A value of 0 means no timeout. This feature is 
available starting with Hive 2.1.")
--- End diff --

I'd remove the "This feature is available starting with Hive 2.1." part. 
Otherwise this all LGTM


---


[jira] [Commented] (NIFI-4392) Create graphite reporting task

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195161#comment-16195161
 ] 

ASF GitHub Bot commented on NIFI-4392:
--

Github user omerhadari commented on the issue:

https://github.com/apache/nifi/pull/2171
  
Thanks for the review, I am on it


> Create graphite reporting task
> --
>
> Key: NIFI-4392
> URL: https://issues.apache.org/jira/browse/NIFI-4392
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Omer Hadari
>Priority: Minor
>  Labels: features
>
> Create a reporting task for graphite, similar to that of datadog and ambari.





[GitHub] nifi issue #2171: NIFI-4392 - Add metric reporting task for Graphite

2017-10-06 Thread omerhadari
Github user omerhadari commented on the issue:

https://github.com/apache/nifi/pull/2171
  
Thanks for the review, I am on it


---


[jira] [Commented] (NIFI-4371) Add support for query timeout in Hive processors

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195157#comment-16195157
 ] 

ASF GitHub Bot commented on NIFI-4371:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r143280686
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -290,6 +292,9 @@ public void process(final OutputStream out) throws 
IOException {
 }
 }
 
+// set query timeout
+st.setQueryTimeout(queryTimeout);
--- End diff --

Disregard what I said. Matt pointed out that the code checks if a timeout 
was set and is non-zero. Based on that, I think the way it was implemented is 
awesome.


> Add support for query timeout in Hive processors
> 
>
> Key: NIFI-4371
> URL: https://issues.apache.org/jira/browse/NIFI-4371
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Attachments: Screen Shot 2017-09-09 at 4.31.21 PM.png, Screen Shot 
> 2017-09-09 at 6.38.51 PM.png, Screen Shot 2017-09-09 at 6.40.48 PM.png
>
>
> With HIVE-4924 it is possible to set a query timeout when executing a query 
> against Hive (starting with Hive 2.1). Right now, NiFi is built using Hive 
> 1.2.1 and this feature is not available by default (the method is not 
> implemented in the driver). However, if NiFi is built with specific profiles, 
> this feature can be used.
> The objective is to expose the query timeout parameter in the processor and 
> enable expression language. If the driver version does not implement the 
> query timeout, the processor will be in an invalid state (unless expression 
> language is used, in which case the flow file will be routed to the 
> failure relationship).





[GitHub] nifi pull request #2138: NIFI-4371 - add support for query timeout in Hive p...

2017-10-06 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r143280686
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -290,6 +292,9 @@ public void process(final OutputStream out) throws 
IOException {
 }
 }
 
+// set query timeout
+st.setQueryTimeout(queryTimeout);
--- End diff --

Disregard what I said. Matt pointed out that the code checks if a timeout 
was set and is non-zero. Based on that, I think the way it was implemented is 
awesome.


---


[jira] [Commented] (NIFI-4371) Add support for query timeout in Hive processors

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195150#comment-16195150
 ] 

ASF GitHub Bot commented on NIFI-4371:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r143279988
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -290,6 +292,9 @@ public void process(final OutputStream out) throws 
IOException {
 }
 }
 
+// set query timeout
+st.setQueryTimeout(queryTimeout);
--- End diff --

I get the point of the customValidate, but I'd keep its check and mark the 
processor as valid whether the timeout method is supported or not. You can 
set a processor instance boolean recording whether it is supported, then in 
this call set the timeout if it is and skip it if not. Otherwise we're 
requiring users to run a version of Hive which supports it, and that seems a 
little heavy-handed. It is probably a good idea to move the current 
validation logic into some other lifecycle call like 'onAdded', warn if 
timeouts are not supported and point out that hanging threads are possible, 
and keep going.
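A rough sketch of that suggestion (the class and helper name here are hypothetical, assuming the usual JDBC behavior that an unimplemented method throws `SQLFeatureNotSupportedException`): attempt the call once, remember the outcome, and silently skip it thereafter instead of invalidating the processor.

```java
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.sql.Statement;

public class TimeoutSupport {

    // null = not probed yet; afterwards caches whether the driver
    // implements setQueryTimeout (the Hive 1.2.1 driver does not).
    private Boolean timeoutSupported;

    // Returns true if the timeout was actually applied.
    boolean trySetTimeout(Statement st, int seconds) {
        if (seconds <= 0 || Boolean.FALSE.equals(timeoutSupported)) {
            return false; // 0 means "no timeout", or driver known unsupported
        }
        try {
            st.setQueryTimeout(seconds);
            timeoutSupported = Boolean.TRUE;
            return true;
        } catch (SQLFeatureNotSupportedException e) {
            timeoutSupported = Boolean.FALSE; // warn once, keep going
            return false;
        } catch (SQLException e) {
            throw new RuntimeException(e); // genuine driver error
        }
    }
}
```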


> Add support for query timeout in Hive processors
> 
>
> Key: NIFI-4371
> URL: https://issues.apache.org/jira/browse/NIFI-4371
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Attachments: Screen Shot 2017-09-09 at 4.31.21 PM.png, Screen Shot 
> 2017-09-09 at 6.38.51 PM.png, Screen Shot 2017-09-09 at 6.40.48 PM.png
>
>
> With HIVE-4924 it is possible to set a query timeout when executing a query 
> against Hive (starting with Hive 2.1). Right now, NiFi is built using Hive 
> 1.2.1 and this feature is not available by default (the method is not 
> implemented in the driver). However, if NiFi is built with specific profiles, 
> this feature can be used.
> The objective is to expose the query timeout parameter in the processor and 
> enable expression language. If the driver version does not implement the 
> query timeout, the processor will be in an invalid state (unless expression 
> language is used, in which case the flow file will be routed to the 
> failure relationship).





[GitHub] nifi pull request #2138: NIFI-4371 - add support for query timeout in Hive p...

2017-10-06 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r143279988
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -290,6 +292,9 @@ public void process(final OutputStream out) throws 
IOException {
 }
 }
 
+// set query timeout
+st.setQueryTimeout(queryTimeout);
--- End diff --

I get the point of the customValidate, but I'd keep its check and mark the 
processor as valid whether the timeout method is supported or not. You can 
set a processor instance boolean recording whether it is supported, then in 
this call set the timeout if it is and skip it if not. Otherwise we're 
requiring users to run a version of Hive which supports it, and that seems a 
little heavy-handed. It is probably a good idea to move the current 
validation logic into some other lifecycle call like 'onAdded', warn if 
timeouts are not supported and point out that hanging threads are possible, 
and keep going.


---


[jira] [Updated] (NIFI-4464) Couldn't install nifi as service on a Mac OS (OS X El Capitan 10.11.6)

2017-10-06 Thread Karthikeyan Govindaraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthikeyan Govindaraj updated NIFI-4464:
-
Priority: Critical  (was: Major)

> Couldn't install nifi as service on a Mac OS (OS X El Capitan 10.11.6)
> --
>
> Key: NIFI-4464
> URL: https://issues.apache.org/jira/browse/NIFI-4464
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Karthikeyan Govindaraj
>Priority: Critical
>  Labels: mac-os-x, nifi, nifi-as-service
> Attachments: Screen Shot 2017-10-04 at 10.04.55 AM.png
>
>
> I installed NiFi via Homebrew on my Mac laptop running *OS X El Capitan v 
> 10.11.6*, and the _start / run_ commands gave the expected results.
> But when I run `*nifi install*`, it fails with the following message:
> `*_Installing Apache NiFi as a service is not supported on OS X or Cygwin._*`
> I can see that *nifi.sh* prints this message when it detects the OS as 
> _*Darwin*_ (
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/bin/nifi.sh#L59
> ), and OS X is Darwin. However, the "*Getting Started with 
> Apache NiFi*" guide says installing NiFi as a service is supported for OS X 
> users. (Link -> 
> https://nifi.apache.org/docs/nifi-docs/html/getting-started.html#installing-as-a-service)
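For reference, the guard in question boils down to a substring check on the detected OS name. The following is an illustrative Java analogue of the shell logic in bin/nifi.sh (the class and method names are hypothetical; the real check is shell code, not Java):

```java
public class ServiceInstallGuard {

    // Mirrors the shell guard: installing NiFi as a service is refused
    // when the detected OS is Darwin (macOS) or Cygwin.
    static boolean serviceInstallSupported(String detectedOs) {
        String os = detectedOs.toLowerCase();
        return !os.contains("darwin") && !os.contains("cygwin");
    }
}
```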





[GitHub] nifi pull request #2143: NiFi-4338: Add documents for how to use SSL protoco...

2017-10-06 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2143#discussion_r143278990
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/CreateHadoopSequenceFile.java
 ---
@@ -64,7 +64,8 @@
 @SideEffectFree
 @InputRequirement(Requirement.INPUT_REQUIRED)
 @Tags({"hadoop", "sequence file", "create", "sequencefile"})
-@CapabilityDescription("Creates Hadoop Sequence Files from incoming flow 
files")
+@CapabilityDescription("Creates Hadoop Sequence Files from incoming flow 
files."
++ " If you want to use SSL-secured file system like swebhdfs, 
please see the 'SSL Configuration' topic of the 'Additional Details' of 
PutHDFS.")
--- End diff --

I think we should avoid adding the "if you want to use an SSL-secured file 
system" entry in all these descriptions. If it is an option of the 
processor, then this comment can live on that option.


---


[jira] [Updated] (NIFI-4450) Upgrade Kite SDK to latest release

2017-10-06 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-4450:
--
Fix Version/s: 1.5.0

> Upgrade Kite SDK to latest release
> --
>
> Key: NIFI-4450
> URL: https://issues.apache.org/jira/browse/NIFI-4450
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Trivial
> Fix For: 1.5.0
>
>
> Kite 1.1.0 was released more than 2 years ago. It might be a good idea to 
> update it inside NiFi.





[jira] [Resolved] (NIFI-4450) Upgrade Kite SDK to latest release

2017-10-06 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFI-4450.
---
Resolution: Fixed

> Upgrade Kite SDK to latest release
> --
>
> Key: NIFI-4450
> URL: https://issues.apache.org/jira/browse/NIFI-4450
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Trivial
> Fix For: 1.5.0
>
>
> Kite 1.1.0 was released more than 2 years ago. It might be a good idea to 
> update it inside NiFi.





[jira] [Commented] (NIFI-4450) Upgrade Kite SDK to latest release

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195142#comment-16195142
 ] 

ASF GitHub Bot commented on NIFI-4450:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2187
  
A change was made in master that now allows Kite to bring in all of 
its own 2.3.1 dependencies, so I merged your commit without the Jackson 
additions. Thanks.


> Upgrade Kite SDK to latest release
> --
>
> Key: NIFI-4450
> URL: https://issues.apache.org/jira/browse/NIFI-4450
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Trivial
>
> Kite 1.1.0 was released more than 2 years ago. It might be a good idea to 
> update it inside NiFi.





[jira] [Commented] (NIFI-4450) Upgrade Kite SDK to latest release

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195140#comment-16195140
 ] 

ASF GitHub Bot commented on NIFI-4450:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2187


> Upgrade Kite SDK to latest release
> --
>
> Key: NIFI-4450
> URL: https://issues.apache.org/jira/browse/NIFI-4450
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Trivial
>
> Kite 1.1.0 was released more than 2 years ago. It might be a good idea to 
> update it inside NiFi.





[jira] [Commented] (NIFI-2663) Add Websocket support for MQTT protocol

2017-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195141#comment-16195141
 ] 

ASF GitHub Bot commented on NIFI-2663:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2154#discussion_r143278549
  
--- Diff: 
nifi-nar-bundles/nifi-mqtt-bundle/nifi-mqtt-processors/src/test/java/org/apache/nifi/processors/mqtt/common/MqttTestClient.java
 ---
@@ -195,4 +218,296 @@ public String getServerURI() {
 public void close() throws MqttException {
 
 }
+
--- End diff --

There are a lot of boilerplate-looking javadocs. Are these copied/pasted 
from somewhere? Where is that?


> Add Websocket support for MQTT protocol
> ---
>
> Key: NIFI-2663
> URL: https://issues.apache.org/jira/browse/NIFI-2663
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.0.0, 0.7.0, 0.6.1
>Reporter: Andrew Psaltis
>Assignee: Andrew Psaltis
>
> Today NiFi only supports MQTT over plain TCP using the PublishMQTT and 
> ConsumeMQTT processors. However, there are many cases in the IoT world where 
> WebSockets (secure and not) are the preferred transport mechanism for MQTT. 
> This JIRA is to enhance those processors to also support WS.
> The following are the basics of what is required:
>  
> 1) Allow configuring WebSocket as the transport protocol; currently the 
> PublishMQTT processor supports TCP as the only transport protocol.
> 2) URL input for the WebSocket transport should be in the format 
> ws://IPAddress:websocket_listening_port; as of now only a TCP URL is 
> accepted, and a bulletin is raised if a protocol other than tcp or ssl is used.
> 3) Similar to the non-secured and secured TCP publishers, we should 
> extend/provide processors to support the WS and WSS transport protocols.
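The URL check in point 2 could be extended roughly as follows. This is a hedged sketch with a hypothetical helper name, not the processor's actual validator; per the description, only tcp and ssl currently pass without a bulletin, and the ticket would add ws and wss.

```java
import java.net.URI;
import java.net.URISyntaxException;
import java.util.Set;

public class BrokerUriValidator {

    // Schemes the enhanced processors would accept; today only
    // tcp:// and ssl:// avoid a bulletin.
    private static final Set<String> SUPPORTED =
            Set.of("tcp", "ssl", "ws", "wss");

    static boolean isSupportedBrokerUri(String value) {
        try {
            String scheme = new URI(value).getScheme();
            return scheme != null && SUPPORTED.contains(scheme.toLowerCase());
        } catch (URISyntaxException e) {
            return false; // malformed broker URL
        }
    }
}
```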





[GitHub] nifi issue #2187: NIFI-4450 Update Kite SDK version

2017-10-06 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2187
  
A change was made in master that now allows Kite to bring in all of 
its own 2.3.1 dependencies, so I merged your commit without the Jackson 
additions. Thanks.


---


[GitHub] nifi pull request #2154: NIFI-2663: Add WebSocket support for MQTT processor...

2017-10-06 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2154#discussion_r143278549
  
--- Diff: 
nifi-nar-bundles/nifi-mqtt-bundle/nifi-mqtt-processors/src/test/java/org/apache/nifi/processors/mqtt/common/MqttTestClient.java
 ---
@@ -195,4 +218,296 @@ public String getServerURI() {
 public void close() throws MqttException {
 
 }
+
--- End diff --

There are a lot of boilerplate-looking javadocs. Are these copied/pasted 
from somewhere? Where is that?


---


[GitHub] nifi pull request #2187: NIFI-4450 Update Kite SDK version

2017-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2187


---


[jira] [Commented] (NIFI-4450) Upgrade Kite SDK to latest release

2017-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195139#comment-16195139
 ] 

ASF subversion and git services commented on NIFI-4450:
---

Commit 59b60e62d86d5008abbce4db4f3505706b0a52bb in nifi's branch 
refs/heads/master from [~lanzani]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=59b60e6 ]

NIFI-4450 - Update Kite SDK version

This closes #2187.

Signed-off-by: Bryan Bende 


> Upgrade Kite SDK to latest release
> --
>
> Key: NIFI-4450
> URL: https://issues.apache.org/jira/browse/NIFI-4450
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Giovanni Lanzani
>Assignee: Giovanni Lanzani
>Priority: Trivial
>
> Kite 1.1.0 was released more than 2 years ago. It might be a good idea to 
> update it inside NiFi.





[GitHub] nifi pull request #2181: NIFI-4428: - Implement PutDruid Processor and Contr...

2017-10-06 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r143277153
  
--- Diff: 
nifi-nar-bundles/nifi-druid-bundle/nifi-druid-processors/src/main/java/org/apache/nifi/processors/PutDruid.java
 ---
@@ -0,0 +1,206 @@
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.stream.io.StreamUtils;
+
+import org.codehaus.jackson.JsonParseException;
+import org.codehaus.jackson.map.JsonMappingException;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import org.apache.nifi.controller.api.DruidTranquilityService;
+import com.metamx.tranquility.tranquilizer.MessageDroppedException;
+import com.metamx.tranquility.tranquilizer.Tranquilizer;
+import com.twitter.util.Await;
+import com.twitter.util.Future;
+import com.twitter.util.FutureEventListener;
+
+import scala.runtime.BoxedUnit;
+
+@SideEffectFree
+@Tags({"Druid","Timeseries","OLAP","ingest"})
+@CapabilityDescription("Sends events to Apache Druid for Indexing. "
+   + "Leverages Druid Tranquility 
Controller service."
+   + "Incoming flow files are 
expected to contain 1 or many JSON objects, one JSON object per line")
+public class PutDruid extends AbstractSessionFactoryProcessor {
+
+private List properties;
+private Set relationships;
+private final Map messageStatus = new 
HashMap();
+
+public static final PropertyDescriptor DRUID_TRANQUILITY_SERVICE = new 
PropertyDescriptor.Builder()
+.name("druid_tranquility_service")
+.description("Tranquility Service to use for sending events to 
Druid")
+.required(true)
+.identifiesControllerService(DruidTranquilityService.class)
+.build();
+
+public static final Relationship REL_SUCCESS = new 
Relationship.Builder()
+.name("SUCCESS")
+.description("Succes relationship")
+.build();
+
+public static final Relationship REL_FAIL = new Relationship.Builder()
+.name("FAIL")
+.description("FlowFiles are routed to this relationship when 
they cannot be parsed")
+.build();
+
+public static final Relationship REL_DROPPED = new 
Relationship.Builder()
+.name("DROPPED")
+.description("FlowFiles are routed to this relationship when 
they are outside of the configured time window, timestamp format is invalid, 
ect...")
+.build();
+
+public void init(final ProcessorInitializationContext context){
+List properties = new ArrayList<>();
+properties.add(DRUID_TRANQUILITY_SERVICE);
+this.properties = Collections.unmodifiableList(properties);
+
+Set rela

[GitHub] nifi pull request #2181: NIFI-4428: - Implement PutDruid Processor and Contr...

2017-10-06 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r143276335
  
--- Diff: 
nifi-nar-bundles/nifi-druid-bundle/nifi-druid-processors/src/main/java/org/apache/nifi/processors/PutDruid.java
 ---
@@ -0,0 +1,206 @@
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.stream.io.StreamUtils;
+
+import org.codehaus.jackson.JsonParseException;
+import org.codehaus.jackson.map.JsonMappingException;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import org.apache.nifi.controller.api.DruidTranquilityService;
+import com.metamx.tranquility.tranquilizer.MessageDroppedException;
+import com.metamx.tranquility.tranquilizer.Tranquilizer;
+import com.twitter.util.Await;
+import com.twitter.util.Future;
+import com.twitter.util.FutureEventListener;
+
+import scala.runtime.BoxedUnit;
+
+@SideEffectFree
+@Tags({"Druid","Timeseries","OLAP","ingest"})
+@CapabilityDescription("Sends events to Apache Druid for Indexing. "
+   + "Leverages Druid Tranquility 
Controller service."
+   + "Incoming flow files are 
expected to contain 1 or many JSON objects, one JSON object per line")
+public class PutDruid extends AbstractSessionFactoryProcessor {
+
+private List properties;
+private Set relationships;
+private final Map messageStatus = new 
HashMap();
+
+public static final PropertyDescriptor DRUID_TRANQUILITY_SERVICE = new 
PropertyDescriptor.Builder()
+.name("druid_tranquility_service")
+.description("Tranquility Service to use for sending events to 
Druid")
+.required(true)
+.identifiesControllerService(DruidTranquilityService.class)
+.build();
+
+public static final Relationship REL_SUCCESS = new 
Relationship.Builder()
+.name("SUCCESS")
+.description("Succes relationship")
+.build();
+
+public static final Relationship REL_FAIL = new Relationship.Builder()
+.name("FAIL")
+.description("FlowFiles are routed to this relationship when 
they cannot be parsed")
+.build();
+
+public static final Relationship REL_DROPPED = new 
Relationship.Builder()
+.name("DROPPED")
+.description("FlowFiles are routed to this relationship when 
they are outside of the configured time window, timestamp format is invalid, 
ect...")
+.build();
+
+public void init(final ProcessorInitializationContext context){
+List properties = new ArrayList<>();
+properties.add(DRUID_TRANQUILITY_SERVICE);
+this.properties = Collections.unmodifiableList(properties);
+
+Set rela
