[jira] [Commented] (NIFI-4731) BigQuery processors

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626803#comment-16626803
 ] 

ASF GitHub Bot commented on NIFI-4731:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/3019
  
@pvillard31 I haven't built and tested it, but looking at the changed 
code, the proxy settings look good to me. I will try it in action later. 
Thanks!


> BigQuery processors
> ---
>
> Key: NIFI-4731
> URL: https://issues.apache.org/jira/browse/NIFI-4731
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mikhail Sosonkin
>Priority: Major
>
> NiFi should have processors for putting data into BigQuery (streaming and 
> batch).
> Initial working processors can be found in this repository: 
> https://github.com/nologic/nifi/tree/NIFI-4731/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/bigquery
> I'd like to get them into NiFi proper.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (NIFI-5625) Support variables for the properties of HTTP processors

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626678#comment-16626678
 ] 

ASF GitHub Bot commented on NIFI-5625:
--

Github user kemixkoo commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3020#discussion_r219832541
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetHTTP.java
 ---
@@ -177,19 +180,22 @@
 .name("Username")
 .description("Username required to access the URL")
 .required(false)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
 .name("Password")
 .description("Password required to access the URL")
 .required(false)
 .sensitive(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
--- End diff --

Yes, I totally agree with the policy for passwords. But in my case, I have 
several processors with the same password; when I change the account from dev 
to prod, I have to make that change in every processor. 

Also, I saw that the "PROP_PROXY_PASSWORD" property supports EL in InvokeHTTP.

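For illustration, here is a minimal plain-Java sketch of what variable-registry-scoped resolution means: `${name}` tokens in a property value are looked up in a map of group/global variables rather than in per-FlowFile attributes. The regex and the empty-string fallback for unknown variables are assumptions for this sketch, not NiFi's actual Expression Language implementation.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch (not NiFi's implementation) of variable-registry style
// substitution: ${name} tokens in a property value are replaced from a
// registry of group/global variables. FlowFile attributes are deliberately
// not consulted, mirroring ExpressionLanguageScope.VARIABLE_REGISTRY.
public class VariableRegistrySketch {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    static String resolve(String value, Map<String, String> registry) {
        Matcher m = VAR.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Unknown variables resolve to an empty string in this sketch;
            // NiFi's actual behavior depends on the evaluation context.
            m.appendReplacement(sb, Matcher.quoteReplacement(
                    registry.getOrDefault(m.group(1), "")));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> registry = Map.of("http.user", "svc-account");
        System.out.println(resolve("${http.user}", registry)); // prints svc-account
    }
}
```

This is what makes the dev-to-prod scenario above work: changing one registry entry re-resolves every property that references the variable.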

> Support variables  for the properties of HTTP processors
> 
>
> Key: NIFI-5625
> URL: https://issues.apache.org/jira/browse/NIFI-5625
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions, Variable Registry
>Affects Versions: 1.7.1
>Reporter: Kemix Koo
>Priority: Minor
>
> When group (global) variables are set, some properties of the HTTP processors 
> don't support expressions, for example USER and PASS. Without support for 
> global variables, each processor must be changed one by one, which is very 
> troublesome. 





[jira] [Commented] (NIFI-5625) Support variables for the properties of HTTP processors

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626679#comment-16626679
 ] 

ASF GitHub Bot commented on NIFI-5625:
--

Github user kemixkoo commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3020#discussion_r219841379
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PostHTTP.java
 ---
@@ -211,25 +215,29 @@
 .required(false)
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .defaultValue(VersionInfo.getUserAgent("Apache-HttpClient", 
"org.apache.http.client", HttpClientBuilder.class))
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 public static final PropertyDescriptor COMPRESSION_LEVEL = new 
PropertyDescriptor.Builder()
 .name("Compression Level")
 .description("Determines the GZIP Compression Level to use 
when sending the file; the value must be in the range of 0-9. A value of 0 
indicates that the file will not be GZIP'ed")
 .required(true)
 .addValidator(StandardValidators.createLongValidator(0, 9, 
true))
 .defaultValue("0")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 public static final PropertyDescriptor ATTRIBUTES_AS_HEADERS_REGEX = 
new PropertyDescriptor.Builder()
 .name("Attributes to Send as HTTP Headers (Regex)")
 .description("Specifies the Regular Expression that determines 
the names of FlowFile attributes that should be sent as HTTP Headers")
 .addValidator(StandardValidators.REGULAR_EXPRESSION_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
--- End diff --

Oh, that must be a mistake.

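As background for the "Compression Level" property quoted in the diff above (range 0-9, where 0 means the file is not compressed), a small sketch with `java.util.zip` shows the effect of the level. The helper name is an assumption for this sketch; PostHTTP's actual compression path differs.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class GzipLevelSketch {
    // Compress with an explicit DEFLATE level (0-9), analogous to the
    // property's documented range; level 0 stores the data essentially
    // uncompressed (block framing only), level 9 compresses hardest.
    static byte[] deflate(byte[] input, int level) {
        Deflater d = new Deflater(level);
        d.setInput(input);
        d.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!d.finished()) {
            int n = d.deflate(buf);
            out.write(buf, 0, n);
        }
        d.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] data = "abcabcabc".repeat(200).getBytes();
        System.out.println("level 0: " + deflate(data, 0).length + " bytes");
        System.out.println("level 9: " + deflate(data, 9).length + " bytes");
    }
}
```

On repetitive input, level 9 output is much smaller than level 0 output, while level 0 is slightly larger than the input because of block framing.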







[jira] [Commented] (NIFI-5585) Decommision Nodes from Cluster

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626519#comment-16626519
 ] 

ASF GitHub Bot commented on NIFI-5585:
--

Github user jtstorck commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3010#discussion_r220008088
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/partition/NonLocalPartitionPartitioner.java
 ---
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.controller.queue.clustered.partition;
+
+import org.apache.nifi.controller.repository.FlowFileRecord;
+
+import java.util.concurrent.atomic.AtomicLong;
+
+/**
+ * Returns remote partitions when queried for a partition; never returns 
the {@link LocalQueuePartition}.
+ */
+public class NonLocalPartitionPartitioner implements FlowFilePartitioner {
+private final AtomicLong counter = new AtomicLong(0L);
+
+@Override
+public QueuePartition getPartition(final FlowFileRecord flowFile, 
final QueuePartition[] partitions, final QueuePartition localPartition) {
+QueuePartition remotePartition = null;
+for (int i = 0, numPartitions = partitions.length; i < 
numPartitions; i++) {
+final long count = counter.getAndIncrement();
--- End diff --

Very good catch!  I've updated the partitioner to use a startIndex rather 
than the result of counter.getAndIncrement() each iteration.
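The startIndex fix described above can be sketched in plain Java. Partitions are modeled here as strings rather than NiFi's QueuePartition, so this is an illustration of the technique, not the actual class: one counter value is taken per call and used as a starting offset, instead of incrementing the counter on every loop iteration (which could skip partitions under concurrent calls).

```java
import java.util.concurrent.atomic.AtomicLong;

// Plain-Java sketch of a non-local round-robin partitioner: take ONE
// counter value per call as a start index, then probe the partitions in
// order from that index, skipping the local one.
public class NonLocalRoundRobinSketch {
    private final AtomicLong counter = new AtomicLong(0L);

    String getPartition(String[] partitions, String localPartition) {
        final long startIndex = counter.getAndIncrement();
        for (int i = 0; i < partitions.length; i++) {
            String candidate =
                    partitions[(int) ((startIndex + i) % partitions.length)];
            if (!candidate.equals(localPartition)) {
                return candidate; // first non-local partition from the start index
            }
        }
        return null; // only the local partition exists
    }
}
```

Successive calls rotate the starting offset, so load spreads evenly across the remote partitions while the local partition is never chosen.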


> Decommision Nodes from Cluster
> --
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing flowfiles on 
> the node to be decommissioned to the other active nodes.  This work depends 
> on NIFI-5516.
> Similar to the client sending a DISCONNECTING message as a PUT request to 
> cluster/nodes/\{id}, a DECOMMISSIONING message can be sent as a PUT request 
> to the same URI to initiate a DECOMMISSION for a DISCONNECTED node. The 
> DECOMMISSIONING request will be idempotent.
> The steps to decommission a node and remove it from the cluster are:
>  # Send request to disconnect the node
>  # Once disconnect completes, send request to decommission the node.
>  # Once decommission completes, send request to delete node.
> When an error occurs and the node can not complete decommissioning, the user 
> can:
>  # Send request to delete the node from the cluster
>  # Diagnose why the node had issues with the decommission (out of memory, no 
> network connection, etc) and address the issue
>  # Restart NiFi on the node so that it will reconnect to the cluster
>  # Go through the steps to decommission and remove a node
> Toolkit CLI commands for retrieving a list of nodes and 
> disconnecting/decommissioning/deleting nodes have been added.
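The idempotency of the DECOMMISSIONING request described above can be sketched as a small state machine. The states and transition rules here are assumptions for illustration, not NiFi's actual NodeConnectionState handling: a DISCONNECTED node moves to DECOMMISSIONING, and repeating the request on a node already in that state is a no-op rather than an error.

```java
// Hypothetical sketch of an idempotent decommission request.
public class DecommissionSketch {
    enum State { CONNECTED, DISCONNECTED, DECOMMISSIONING }

    static State requestDecommission(State current) {
        switch (current) {
            case DISCONNECTED:
            case DECOMMISSIONING: // idempotent: repeating the request is harmless
                return State.DECOMMISSIONING;
            default:
                throw new IllegalStateException(
                        "Node must be DISCONNECTED before decommissioning: " + current);
        }
    }
}
```

Idempotency matters here because a client that times out and retries the PUT must not push the node into an error state.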







[GitHub] nifi pull request #3028: Nifi 4806

2018-09-24 Thread joewitt
GitHub user joewitt opened a pull request:

https://github.com/apache/nifi/pull/3028

Nifi 4806

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joewitt/incubator-nifi NIFI-4806

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3028.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3028


commit 3ba0181f4376e2dd4826608b6832d74d926e0707
Author: joewitt 
Date:   2018-09-21T03:24:17Z

NIFI-4806 updated tika and a ton of other deps as found by dependency 
versions plugin

commit fdd82bc9c8153d99473566ea0907fa4a0c6bd17c
Author: joewitt 
Date:   2018-09-21T03:42:43Z

NIFI-4806 updated tika and a ton of other deps as found by dependency 
versions plugin

commit dce7cc15764c15849c79e09d7811d19759ae9434
Author: joewitt 
Date:   2018-09-21T22:28:40Z

NIFI-4806




---


[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-24 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626511#comment-16626511
 ] 

Colin Dean commented on NIFI-5612:
--

I ran my modified version against the database:

{code}
nifi-1.7.1/logs/nifi-app.log
7:2018-09-24 18:00:56,595 ERROR [Timer-Driven Process Thread-18] 
o.a.nifi.processors.standard.ExecuteSQL 
ExecuteSQL[id=0d057522-0166-1000-23f3-82a2cd072976] 
ExecuteSQL[id=0d057522-0166-1000-23f3-82a2cd072976] failed to process session 
due to java.lang.RuntimeException: Unable to resolve union for value 0 with 
type java.lang.Long; Processor Administratively Yielded for 1 sec: 
java.lang.RuntimeException: Unable to resolve union for value 0 with type 
java.lang.Long
8:java.lang.RuntimeException: Unable to resolve union for value 0 with type 
java.lang.Long
40:2018-09-24 18:00:56,599 WARN [Timer-Driven Process Thread-18] 
o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
ExecuteSQL[id=0d057522-0166-1000-23f3-82a2cd072976] due to uncaught Exception: 
java.lang.RuntimeException: Unable to resolve union for value 0 with type 
java.lang.Long
41:java.lang.RuntimeException: Unable to resolve union for value 0 with type 
java.lang.Long
{code}

The database is producing a Long when we're expecting an Integer.

[This logic in 
JdbcCommon#createSchema|https://github.com/apache/nifi/blob/e959630c22c9a52ec717141f6cf9f018830a38bf/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/JdbcCommon.java#L545]
 says we should expect an Integer in the Avro type.
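A minimal sketch of the mismatch: a `["null","int"]` union accepts only null or Integer, so a JDBC driver handing back a `java.lang.Long` fails union resolution unless the value is narrowed first. The method names below are hypothetical stand-ins for illustration, not Avro's actual `GenericData` API, and the narrowing shown is one possible fix, not necessarily what JdbcCommon will adopt.

```java
public class UnionMismatchSketch {
    // Stand-in for Avro's union resolution against ["null","int"]:
    // only the exact branch types are accepted.
    static int resolveUnionIndex(Object datum) {
        if (datum == null) return 0;            // "null" branch
        if (datum instanceof Integer) return 1; // "int" branch
        throw new RuntimeException(
                "Not in union [\"null\",\"int\"]: " + datum);
    }

    // One possible fix: narrow a JDBC-supplied Long to Integer when the
    // schema says int and the value fits.
    static Object coerceForIntSchema(Object datum) {
        if (datum instanceof Long) {
            long v = (Long) datum;
            if (v >= Integer.MIN_VALUE && v <= Integer.MAX_VALUE) {
                return (int) v;
            }
        }
        return datum;
    }

    public static void main(String[] args) {
        // A raw Long fails; the same value narrowed to Integer resolves.
        System.out.println(resolveUnionIndex(coerceForIntSchema(0L)));
    }
}
```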

> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0, 1.7.1
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
> I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
> but not on dozens of others in the same database.
> {code:java}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught 
> Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
>   at 
> org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
>   at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.avro.UnresolvedUnionException: Not in union 
> ["null","int"]: 0
>   at 
> org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709)
>   at 
> org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
>  

[jira] [Commented] (NIFI-5585) Decommision Nodes from Cluster

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626504#comment-16626504
 ] 

ASF GitHub Bot commented on NIFI-5585:
--

Github user jtstorck commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3010#discussion_r22790
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/partition/NonLocalPartitionPartitioner.java
 ---
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.controller.queue.clustered.partition;
+
+import org.apache.nifi.controller.repository.FlowFileRecord;
+
+import java.util.concurrent.atomic.AtomicLong;
+
+/**
+ * Returns remote partitions when queried for a partition; never returns 
the {@link LocalQueuePartition}.
--- End diff --

Fixed.




[jira] [Commented] (NIFI-5585) Decommision Nodes from Cluster

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626503#comment-16626503
 ] 

ASF GitHub Bot commented on NIFI-5585:
--

Github user jtstorck commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3010#discussion_r22647
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/SocketLoadBalancedFlowFileQueue.java
 ---
@@ -204,6 +206,19 @@ public synchronized void setLoadBalanceStrategy(final 
LoadBalanceStrategy strate
 setFlowFilePartitioner(partitioner);
 }
 
+@Override
+public void decommissionQueue() {
+if (clusterCoordinator == null) {
+// Not clustered, so don't change partitions
+return;
+}
+
+// TODO set decommissioned boolean
--- End diff --

It can!






[jira] [Commented] (NIFI-5585) Decommision Nodes from Cluster

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626502#comment-16626502
 ] 

ASF GitHub Bot commented on NIFI-5585:
--

Github user jtstorck commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3010#discussion_r22525
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/StandardFlowService.java
 ---
@@ -662,6 +682,39 @@ private void handleReconnectionRequest(final 
ReconnectionRequestMessage request)
 }
 }
 
+private void handleDecommissionRequest(final DecommissionMessage 
request) throws InterruptedException {
+logger.info("Received decommission request message from manager 
with explanation: " + request.getExplanation());
+decommission(request.getExplanation());
+}
+
+private void decommission(final String explanation) throws 
InterruptedException {
+writeLock.lock();
+try {
+
+logger.info("Decommissioning node due to " + explanation);
+
+// mark node as decommissioning
+controller.setConnectionStatus(new 
NodeConnectionStatus(nodeId, NodeConnectionState.DECOMMISSIONING, 
DecommissionCode.DECOMMISSIONED, explanation));
+// request to stop all processors on node
+controller.stopAllProcessors();
--- End diff --

Done.  Also, all RPGs will have stopTransmitting() called on them.


> Decommision Nodes from Cluster
> --
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing the FlowFiles on 
> the node to be decommissioned to the other active nodes.  This work depends 
> on NIFI-5516.
> Similar to the client sending a DISCONNECTING message as a PUT request to 
> cluster/nodes/\{id}, a DECOMMISSIONING message can be sent as a PUT request 
> to the same URI to initiate a DECOMMISSION for a DISCONNECTED node. The 
> DECOMMISSIONING request will be idempotent.
> The steps to decommission a node and remove it from the cluster are:
>  # Send request to disconnect the node
>  # Once disconnect completes, send request to decommission the node.
>  # Once decommission completes, send request to delete node.
> When an error occurs and the node cannot complete decommissioning, the user 
> can:
>  # Send request to delete the node from the cluster
>  # Diagnose why the node had issues with the decommission (out of memory, no 
> network connection, etc.) and address the issue
>  # Restart NiFi on the node so that it will reconnect to the cluster
>  # Go through the steps to decommission and remove a node
> Toolkit CLI commands for retrieving a list of nodes and 
> disconnecting/decommissioning/deleting nodes have been added.
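The disconnect, decommission, delete sequence described above can be sketched as a small state check. This is only an illustration of the described design, not NiFi's actual implementation; the class and method names are hypothetical, and only the state names come from the issue:

```java
public class DecommissionSequence {
    // A DECOMMISSIONING request is only valid once the node is already
    // DISCONNECTED; repeating the request while the node is already
    // decommissioning (or decommissioned) is allowed, since the issue
    // states the request is idempotent.
    static boolean canDecommission(String state) {
        return state.equals("DISCONNECTED")
                || state.equals("DECOMMISSIONING")
                || state.equals("DECOMMISSIONED");
    }

    public static void main(String[] args) {
        if (canDecommission("CONNECTED")) throw new AssertionError("must disconnect first");
        if (!canDecommission("DISCONNECTED")) throw new AssertionError();
        if (!canDecommission("DECOMMISSIONING")) throw new AssertionError(); // idempotent
        System.out.println("ok");
    }
}
```

A client following the steps above would issue the disconnect PUT, wait for DISCONNECTED, then issue the decommission PUT, and only delete the node once decommissioning completes.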



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3010: [WIP] NIFI-5585

2018-09-24 Thread jtstorck
Github user jtstorck commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3010#discussion_r22647
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/SocketLoadBalancedFlowFileQueue.java
 ---
@@ -204,6 +206,19 @@ public synchronized void setLoadBalanceStrategy(final 
LoadBalanceStrategy strate
 setFlowFilePartitioner(partitioner);
 }
 
+@Override
+public void decommissionQueue() {
+if (clusterCoordinator == null) {
+// Not clustered, so don't change partitions
+return;
+}
+
+// TODO set decommissioned boolean
--- End diff --

It can!


---


[GitHub] nifi pull request #3010: [WIP] NIFI-5585

2018-09-24 Thread jtstorck
Github user jtstorck commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3010#discussion_r22525
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/StandardFlowService.java
 ---
@@ -662,6 +682,39 @@ private void handleReconnectionRequest(final 
ReconnectionRequestMessage request)
 }
 }
 
+private void handleDecommissionRequest(final DecommissionMessage 
request) throws InterruptedException {
+logger.info("Received decommission request message from manager 
with explanation: " + request.getExplanation());
+decommission(request.getExplanation());
+}
+
+private void decommission(final String explanation) throws 
InterruptedException {
+writeLock.lock();
+try {
+
+logger.info("Decommissioning node due to " + explanation);
+
+// mark node as decommissioning
+controller.setConnectionStatus(new 
NodeConnectionStatus(nodeId, NodeConnectionState.DECOMMISSIONING, 
DecommissionCode.DECOMMISSIONED, explanation));
+// request to stop all processors on node
+controller.stopAllProcessors();
--- End diff --

Done.  Also, all RPGs will have stopTransmitting() called on them.


---


[jira] [Updated] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-24 Thread Colin Dean (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Dean updated NIFI-5612:
-
Description: 
I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
but not on dozens of others in the same database.
{code:java}
2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught Exception: 
org.apache.avro.file.DataFileWriter$AppendWriteException: 
org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
org.apache.avro.file.DataFileWriter$AppendWriteException: 
org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
at 
org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
at 
org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.avro.UnresolvedUnionException: Not in union 
["null","int"]: 0
at 
org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709)
at 
org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192)
at 
org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110)
at 
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
at 
org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
at 
org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143)
at 
org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105)
at 
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
at 
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)
at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)
... 15 common frames omitted
{code}
I don't know if I can share the database schema – still working with my team on 
that – but looking at it, I think it has something to do with the signedness of 
int(1) or tinyint(1) because those two are the only numerical types common to 
all of the tables.

 

Edit 2018-09-24:

I am able to reproduce the exception using
 * Vagrant 2.1.1
 * Virtualbox 5.2.18 r124319
 * Ubuntu 18.04
 * MySQL 5.0.81 (as close as I can get to the 5.0.80 Enterprise Edition in use 
on the system where I observed this failure first)
 * MySQL Connector/J 5.1.46
 * NiFi 1.7.1

With this table definition and data:
{code:sql}
create table fails ( 
  fails int(1) unsigned NOT NULL default '0' 
) ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;

insert into fails values ();
{code}
and an ExecuteSQL processor set up to access that table.

  was:
I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
but not on dozens of others in the same database.

{code}
2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught Exception: 
org.apache.avro.file.DataFileWriter$AppendWriteException: 
org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
org.apache.avro.file.DataFileWriter$AppendWriteException: 
org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
at 

[jira] [Updated] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-24 Thread Colin Dean (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Dean updated NIFI-5612:
-
Description: 
I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
but not on dozens of others in the same database.
{code:java}
2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught Exception: 
org.apache.avro.file.DataFileWriter$AppendWriteException: 
org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
org.apache.avro.file.DataFileWriter$AppendWriteException: 
org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
at 
org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
at 
org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.avro.UnresolvedUnionException: Not in union 
["null","int"]: 0
at 
org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709)
at 
org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192)
at 
org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110)
at 
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
at 
org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
at 
org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143)
at 
org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105)
at 
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
at 
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)
at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)
... 15 common frames omitted
{code}
I don't know if I can share the database schema – still working with my team on 
that – but looking at it, I think it has something to do with the signedness of 
int(1) or tinyint(1) because those two are the only numerical types common to 
all of the tables.

 

*Edit 2018-09-24, so that my update doesn't get buried:*

I am able to reproduce the exception using
 * Vagrant 2.1.1
 * Virtualbox 5.2.18 r124319
 * Ubuntu 18.04
 * MySQL 5.0.81 (as close as I can get to the 5.0.80 Enterprise Edition in use 
on the system where I observed this failure first)
 * MySQL Connector/J 5.1.46
 * NiFi 1.7.1

With this table definition and data:
{code:sql}
create table fails ( 
  fails int(1) unsigned NOT NULL default '0' 
) ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;

insert into fails values ();
{code}
and an ExecuteSQL processor set up to access that table.

  was:
I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
but not on dozens of others in the same database.
{code:java}
2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught Exception: 
org.apache.avro.file.DataFileWriter$AppendWriteException: 
org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
org.apache.avro.file.DataFileWriter$AppendWriteException: 
org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
at 

[jira] [Commented] (NIFI-4731) BigQuery processors

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626468#comment-16626468
 ] 

ASF GitHub Bot commented on NIFI-4731:
--

Github user danieljimenez commented on the issue:

https://github.com/apache/nifi/pull/3019
  
LGTM


> BigQuery processors
> ---
>
> Key: NIFI-4731
> URL: https://issues.apache.org/jira/browse/NIFI-4731
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mikhail Sosonkin
>Priority: Major
>
> NIFI should have processors for putting data into BigQuery (Streaming and 
> Batch).
> Initial working processors can be found this repository: 
> https://github.com/nologic/nifi/tree/NIFI-4731/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/bigquery
> I'd like to get them into Nifi proper.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3019: [NIFI-4731][NIFI-4933] Big Query processor

2018-09-24 Thread danieljimenez
Github user danieljimenez commented on the issue:

https://github.com/apache/nifi/pull/3019
  
LGTM


---


[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-24 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626438#comment-16626438
 ] 

Colin Dean commented on NIFI-5612:
--

I decided to try a wider range of precisions, too:

{code:sql}
create table fails1 ( fails int(1) unsigned NOT NULL default '0' ) 
ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
insert into fails1 values();
create table fails2 ( fails int(2) unsigned NOT NULL default '0' ) 
ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
insert into fails2 values();
create table fails3 ( fails int(3) unsigned NOT NULL default '0' ) 
ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
insert into fails3 values();
create table fails4 ( fails int(4) unsigned NOT NULL default '0' ) 
ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
insert into fails4 values();
create table fails5 ( fails int(5) unsigned NOT NULL default '0' ) 
ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
insert into fails5 values();
create table fails6 ( fails int(6) unsigned NOT NULL default '0' ) 
ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
insert into fails6 values();
create table fails7 ( fails int(7) unsigned NOT NULL default '0' ) 
ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
insert into fails7 values();
create table fails8 ( fails int(8) unsigned NOT NULL default '0' ) 
ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
insert into fails8 values();
create table fails9 ( fails int(9) unsigned NOT NULL default '0' ) 
ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
insert into fails9 values();
create table fails10 ( fails int(10) unsigned NOT NULL default '0' ) 
ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
insert into fails10 values();
create table fails11 ( fails int(11) unsigned NOT NULL default '0' ) 
ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
insert into fails11 values();
create table fails12 ( fails int(12) unsigned NOT NULL default '0' ) 
ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
insert into fails12 values();
{code}

Success => 12, 10, 9
Failure => 1, 2, 8, 6


> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0, 1.7.1
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
> I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
> but not on dozens of others in the same database.
> {code}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught 
> Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
>   at 
> org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
>   at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused 

[jira] [Updated] (NIFI-5514) MergeRecord does not create a merged FlowFile until a maximum threshold is reached

2018-09-24 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-5514:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> MergeRecord does not create a merged FlowFile until a maximum threshold is 
> reached
> --
>
> Key: NIFI-5514
> URL: https://issues.apache.org/jira/browse/NIFI-5514
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> MergeRecord allows the user to specify a minimum number of records. However, 
> if the minimum number of records is reached, the merged FlowFile is not 
> created unless the maximum number of records or max number of bytes is also 
> reached.
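The validation added by the fix (rejecting a configuration such as min records = 100, max records = 2) can be sketched roughly as follows. The names are hypothetical, not NiFi's actual validator:

```java
public class ThresholdValidation {
    // A merge configuration is only sensible when the minimum record
    // threshold is at least 1 and does not exceed the maximum; otherwise
    // the minimum could never be satisfied before the maximum fires.
    static boolean isValid(int minRecords, int maxRecords) {
        return minRecords >= 1 && minRecords <= maxRecords;
    }

    public static void main(String[] args) {
        if (isValid(100, 2)) throw new AssertionError("min > max must be invalid");
        if (!isValid(1, 1000)) throw new AssertionError("a normal range should be valid");
        System.out.println("ok");
    }
}
```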



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5514) MergeRecord does not create a merged FlowFile until a maximum threshold is reached

2018-09-24 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626432#comment-16626432
 ] 

ASF subversion and git services commented on NIFI-5514:
---

Commit 2a964681eca443cc335b6f269d2b9ddab57250c7 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=2a96468 ]

NIFI-5514: Fixed bugs in MergeRecord around minimum thresholds not being 
honored and validation not being performed to ensure that minimum threshold is 
smaller than max threshold (would previously allow min record = 100, max 
records = 2 as a valid configuration)
NIFI-5514: Do not rely on ProcessSession.getQueueSize() to return a queue size 
of 0 objects because if the processor is holding onto data, the queue size 
won't be 0.

Signed-off-by: Pierre Villard 

This closes #2954.


> MergeRecord does not create a merged FlowFile until a maximum threshold is 
> reached
> --
>
> Key: NIFI-5514
> URL: https://issues.apache.org/jira/browse/NIFI-5514
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> MergeRecord allows the user to specify a minimum number of records. However, 
> if the minimum number of records is reached, the merged FlowFile is not 
> created unless the maximum number of records or max number of bytes is also 
> reached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5514) MergeRecord does not create a merged FlowFile until a maximum threshold is reached

2018-09-24 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626431#comment-16626431
 ] 

ASF subversion and git services commented on NIFI-5514:
---

Commit 2a964681eca443cc335b6f269d2b9ddab57250c7 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=2a96468 ]

NIFI-5514: Fixed bugs in MergeRecord around minimum thresholds not being 
honored and validation not being performed to ensure that minimum threshold is 
smaller than max threshold (would previously allow min record = 100, max 
records = 2 as a valid configuration)
NIFI-5514: Do not rely on ProcessSession.getQueueSize() to return a queue size 
of 0 objects because if the processor is holding onto data, the queue size 
won't be 0.

Signed-off-by: Pierre Villard 

This closes #2954.


> MergeRecord does not create a merged FlowFile until a maximum threshold is 
> reached
> --
>
> Key: NIFI-5514
> URL: https://issues.apache.org/jira/browse/NIFI-5514
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> MergeRecord allows the user to specify a minimum number of records. However, 
> if the minimum number of records is reached, the merged FlowFile is not 
> created unless the maximum number of records or max number of bytes is also 
> reached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5514) MergeRecord does not create a merged FlowFile until a maximum threshold is reached

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626433#comment-16626433
 ] 

ASF GitHub Bot commented on NIFI-5514:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2954


> MergeRecord does not create a merged FlowFile until a maximum threshold is 
> reached
> --
>
> Key: NIFI-5514
> URL: https://issues.apache.org/jira/browse/NIFI-5514
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> MergeRecord allows the user to specify a minimum number of records. However, 
> if the minimum number of records is reached, the merged FlowFile is not 
> created unless the maximum number of records or max number of bytes is also 
> reached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2954: NIFI-5514: Fixed bugs in MergeRecord around minimum...

2018-09-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2954


---


[jira] [Commented] (NIFI-5514) MergeRecord does not create a merged FlowFile until a maximum threshold is reached

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626428#comment-16626428
 ] 

ASF GitHub Bot commented on NIFI-5514:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2954
  
+1, merging to master, thanks @markap14 


> MergeRecord does not create a merged FlowFile until a maximum threshold is 
> reached
> --
>
> Key: NIFI-5514
> URL: https://issues.apache.org/jira/browse/NIFI-5514
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> MergeRecord allows the user to specify a minimum number of records. However, 
> if the minimum number of records is reached, the merged FlowFile is not 
> created unless the maximum number of records or max number of bytes is also 
> reached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2954: NIFI-5514: Fixed bugs in MergeRecord around minimum thresh...

2018-09-24 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2954
  
+1, merging to master, thanks @markap14 


---


[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-24 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626411#comment-16626411
 ] 

Colin Dean commented on NIFI-5612:
--

I am able to reproduce the exception using

* Vagrant 2.1.1
* Virtualbox 5.2.18 r124319
* Ubuntu 18.04
* MySQL 5.0.81 (as close as I can get to the 5.0.80 Enterprise Edition in use 
on the system where I observed this failure first)
* MySQL Connector/J 5.1.46
* NiFi 1.7.1

With this table definition and data:

{code:sql}
create table fails ( 
  fails int(1) unsigned NOT NULL default '0' 
) ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;

insert into fails values ();
{code}

and an ExecuteSQL processor set up to access that table.
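One plausible explanation for this repro, sketched below: MySQL Connector/J returns INT UNSIGNED columns as java.lang.Long (the unsigned range exceeds Integer), while the generated Avro schema uses a ["null","int"] union, and Avro picks the union branch from the value's runtime class. This class only mimics that resolution logic for illustration; it is not the Avro library itself:

```java
import java.util.List;

public class UnionSketch {
    // Choose a union branch from the runtime class of the value, the way
    // Avro's GenericData.resolveUnion does: a java.lang.Long never
    // matches the "int" branch, so ["null","int"] cannot hold it.
    static String resolveUnion(List<String> branches, Object value) {
        String branch = (value == null) ? "null"
                : (value instanceof Integer) ? "int"
                : (value instanceof Long) ? "long"
                : "unknown";
        if (!branches.contains(branch)) {
            throw new RuntimeException("Not in union " + branches + ": " + value);
        }
        return branch;
    }

    static boolean failsUnion(Object value) {
        try {
            resolveUnion(List.of("null", "int"), value);
            return false;
        } catch (RuntimeException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        if (failsUnion(0)) throw new AssertionError("Integer should match the int branch");
        if (!failsUnion(0L)) throw new AssertionError("Long should fail the int branch");
        System.out.println("ok");
    }
}
```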

> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0, 1.7.1
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
> I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
> but not on dozens of others in the same database.
> {code}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught 
> Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
>   at 
> org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
>   at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.avro.UnresolvedUnionException: Not in union 
> ["null","int"]: 0
>   at 
> org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709)
>   at 
> org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)
>   ... 15 common frames omitted
> {code}
> I don't know if I can share the database schema – still working with my team 
> on that – but looking at it, I think it has something to do with the 
> signedness of int(1) or tinyint(1) because those two are the only numerical 
> types common to all of the tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5632) Create put processor for Neo4J

2018-09-24 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5632:
---
Summary: Create put processor for Neo4J  (was: Create put process for Neo4J)

> Create put processor for Neo4J
> --
>
> Key: NIFI-5632
> URL: https://issues.apache.org/jira/browse/NIFI-5632
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Priority: Major
>
> Once NIFI-5537 is merged, a processor should be created that allows users to 
> provide create statements in Cypher syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5632) Create put process for Neo4J

2018-09-24 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5632:
--

 Summary: Create put process for Neo4J
 Key: NIFI-5632
 URL: https://issues.apache.org/jira/browse/NIFI-5632
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Mike Thomsen


Once NIFI-5537 is merged, a processor should be created that allows users to 
provide create statements in Cypher syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5631) Create Neo4J client service

2018-09-24 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5631:
--

 Summary: Create Neo4J client service
 Key: NIFI-5631
 URL: https://issues.apache.org/jira/browse/NIFI-5631
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Mike Thomsen


Once NIFI-5537 is merged, the abstract base class should be converted into a 
controller service that provides a simple interface for connecting to Neo4J, 
and the query processor should be updated to use it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5618) NullPointerException is thrown if attempting to view details of a Provenance Event on a node that is disconnected from cluster

2018-09-24 Thread Mark Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-5618:
-
Fix Version/s: 1.8.0
   Status: Patch Available  (was: Open)

> NullPointerException is thrown if attempting to view details of a Provenance 
> Event on a node that is disconnected from cluster
> --
>
> Key: NIFI-5618
> URL: https://issues.apache.org/jira/browse/NIFI-5618
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> I have a cluster of 2 nodes. I disconnected one of the nodes, then did a 
> Provenance Query. This returned the results correctly. However, when I tried 
> to view the details of the provenance event, I got an error in the UI 
> indicating that I should check my logs. User log has the following (partial) 
> stack trace:
> {code:java}
> 2018-09-20 15:16:36,049 ERROR [NiFi Web Server-177] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.lang.NullPointerException. Returning Internal Server Error response.
> java.lang.NullPointerException: null
> at 
> org.apache.nifi.web.api.ProvenanceEventResource.getProvenanceEvent(ProvenanceEventResource.java:299)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
> at 
> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
> at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
> at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
> at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
> at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:268){code}
>  
> It appears to be due to the fact that since the node is disconnected, the 
> clusterNodeId is not provided in the REST API call. So the following block of 
> code:
> {code:java}
> final ClusterCoordinator coordinator = getClusterCoordinator();
> if (coordinator != null) {
> final NodeIdentifier nodeId = 
> coordinator.getNodeIdentifier(clusterNodeId);
> event.setClusterNodeAddress(nodeId.getApiAddress() + ":" + 
> nodeId.getApiPort());
> }{code}
> results in calling coordinator.getNodeIdentifier(null), which returns null 
> for the nodeId. We then call nodeId.getApiAddress(), throwing a NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3027: NIFI-5618: Avoid NPE when viewing Provenance Event ...

2018-09-24 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/3027

NIFI-5618: Avoid NPE when viewing Provenance Event details on a disconnected node

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-5618

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3027.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3027


commit d491ec59c28ffdd7dd223798047e2f86411f27e5
Author: Mark Payne 
Date:   2018-09-24T19:16:35Z

NIFI-5618: Avoid NPE when viewing Provenance Event details on a 
disconnected node




---


[jira] [Commented] (NIFI-5618) NullPointerException is thrown if attempting to view details of a Provenance Event on a node that is disconnected from cluster

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626307#comment-16626307
 ] 

ASF GitHub Bot commented on NIFI-5618:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/3027

NIFI-5618: Avoid NPE when viewing Provenance Event details on a disconnected node

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-5618

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3027.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3027


commit d491ec59c28ffdd7dd223798047e2f86411f27e5
Author: Mark Payne 
Date:   2018-09-24T19:16:35Z

NIFI-5618: Avoid NPE when viewing Provenance Event details on a 
disconnected node




> NullPointerException is thrown if attempting to view details of a Provenance 
> Event on a node that is disconnected from cluster
> --
>
> Key: NIFI-5618
> URL: https://issues.apache.org/jira/browse/NIFI-5618
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> I have a cluster of 2 nodes. I disconnected one of the nodes, then did a 
> Provenance Query. This returned the results correctly. However, when I tried 
> to view the details of the provenance event, I got an error in the UI 
> indicating that I should check my logs. User log has the following (partial) 
> stack trace:
> {code:java}
> 2018-09-20 15:16:36,049 ERROR [NiFi Web Server-177] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.lang.NullPointerException. Returning Internal Server Error response.
> java.lang.NullPointerException: null
> at 
> org.apache.nifi.web.api.ProvenanceEventResource.getProvenanceEvent(ProvenanceEventResource.java:299)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
> at 
> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
> at 
> 

[jira] [Updated] (NIFI-5630) Status History no longer showing counter values

2018-09-24 Thread Mark Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-5630:
-
Fix Version/s: 1.8.0

> Status History no longer showing counter values
> ---
>
> Key: NIFI-5630
> URL: https://issues.apache.org/jira/browse/NIFI-5630
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> When viewing Status History for a Processor, if that processor has any 
> counters, they should be shown in the Status History. This was added a few 
> releases ago but on master appears not to show this in 1.8.0-SNAPSHOT. 
> Appears to be ok in 1.7.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5630) Status History no longer showing counter values

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626298#comment-16626298
 ] 

ASF GitHub Bot commented on NIFI-5630:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/3026

NIFI-5630: Ensure that we include counters in Status History when present

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-5630

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3026.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3026


commit 64030982604aafc11196b0745be916e7df46dbed
Author: Mark Payne 
Date:   2018-09-24T19:12:35Z

NIFI-5630: Ensure that we include counters in Status History when present




> Status History no longer showing counter values
> ---
>
> Key: NIFI-5630
> URL: https://issues.apache.org/jira/browse/NIFI-5630
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> When viewing Status History for a Processor, if that processor has any 
> counters, they should be shown in the Status History. This was added a few 
> releases ago but on master appears not to show this in 1.8.0-SNAPSHOT. 
> Appears to be ok in 1.7.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5630) Status History no longer showing counter values

2018-09-24 Thread Mark Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-5630:
-
Status: Patch Available  (was: Open)

> Status History no longer showing counter values
> ---
>
> Key: NIFI-5630
> URL: https://issues.apache.org/jira/browse/NIFI-5630
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> When viewing Status History for a Processor, if that processor has any 
> counters, they should be shown in the Status History. This was added a few 
> releases ago but on master appears not to show this in 1.8.0-SNAPSHOT. 
> Appears to be ok in 1.7.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5630) Status History no longer showing counter values

2018-09-24 Thread Mark Payne (JIRA)
Mark Payne created NIFI-5630:


 Summary: Status History no longer showing counter values
 Key: NIFI-5630
 URL: https://issues.apache.org/jira/browse/NIFI-5630
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne


When viewing Status History for a Processor, if that processor has any 
counters, they should be shown in the Status History. This was added a few 
releases ago but on master appears not to show this in 1.8.0-SNAPSHOT. Appears 
to be ok in 1.7.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3026: NIFI-5630: Ensure that we include counters in Statu...

2018-09-24 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/3026

NIFI-5630: Ensure that we include counters in Status History when present

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-5630

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3026.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3026


commit 64030982604aafc11196b0745be916e7df46dbed
Author: Mark Payne 
Date:   2018-09-24T19:12:35Z

NIFI-5630: Ensure that we include counters in Status History when present




---


[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626294#comment-16626294
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r219954907
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/test/java/org/apache/nifi/processors/neo4j/TestNeo4JCyperExecutor.java
 ---
@@ -0,0 +1,272 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.StatementResult;
+import org.neo4j.driver.v1.Record;
+import org.neo4j.driver.v1.summary.ResultSummary;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.mockito.Answers;
+import org.mockito.Mock;
+import org.mockito.Mockito;
+import org.mockito.junit.MockitoJUnit;
+import org.mockito.junit.MockitoRule;
+
+import java.io.File;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Neo4J Cypher unit tests.
+ */
+public class TestNeo4JCyperExecutor {
--- End diff --

Typo in class name. 


> Create Neo4J cypher execution processor
> ---
>
> Key: NIFI-5537
> URL: https://issues.apache.org/jira/browse/NIFI-5537
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: graph, neo4j, node, relationship
> Fix For: 1.8.0
>
>
> Create Nifi Neo4J cypher queries processor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5514) MergeRecord does not create a merged FlowFile until a maximum threshold is reached

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626296#comment-16626296
 ] 

ASF GitHub Bot commented on NIFI-5514:
--

Github user bdesert commented on the issue:

https://github.com/apache/nifi/pull/2954
  
+1 LGTM.
Test case updated.
Live test on local env (up to date) succeeded. Works as expected.
Travis is failing for JP only (US and FR are OK). Can be merged.


> MergeRecord does not create a merged FlowFile until a maximum threshold is 
> reached
> --
>
> Key: NIFI-5514
> URL: https://issues.apache.org/jira/browse/NIFI-5514
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> MergeRecord allows the user to specify a minimum number of records. However, 
> if the minimum number of records is reached, the merged FlowFile is not 
> created unless the maximum number of records or max number of bytes is also 
> reached.
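The fixed behavior can be sketched as a simple eligibility check. The method and parameter names below are hypothetical simplifications, not MergeRecord's actual internals; the point is that meeting the minimum record count alone should make a bin eligible to merge, rather than waiting for a maximum threshold.

```java
public class MergeThresholds {
    // Hypothetical simplification of MergeRecord's bin-flush decision.
    // Buggy behavior: a bin was flushed only when a maximum was hit.
    // Fixed behavior: meeting the minimum record count alone suffices.
    static boolean isReadyToFlush(int recordCount, long byteCount,
                                  int minRecords, int maxRecords, long maxBytes) {
        if (recordCount >= maxRecords || byteCount >= maxBytes) {
            return true; // hard cap reached: must flush
        }
        // The fix: the minimum threshold alone makes the bin eligible.
        return recordCount >= minRecords;
    }

    public static void main(String[] args) {
        System.out.println(isReadyToFlush(10, 1_000, 10, 1_000, 1_000_000)); // true
        System.out.println(isReadyToFlush(9, 1_000, 10, 1_000, 1_000_000));  // false
    }
}
```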



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2954: NIFI-5514: Fixed bugs in MergeRecord around minimum thresh...

2018-09-24 Thread bdesert
Github user bdesert commented on the issue:

https://github.com/apache/nifi/pull/2954
  
+1 LGTM.
Test case updated.
Live test on local env (up to date) succeeded. Works as expected.
Travis is failing for JP only (US and FR are OK). Can be merged.


---


[GitHub] nifi pull request #2956: NIFI-5537 Create Neo4J cypher execution processor

2018-09-24 Thread alopresto
Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r219954907
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/test/java/org/apache/nifi/processors/neo4j/TestNeo4JCyperExecutor.java
 ---
@@ -0,0 +1,272 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.StatementResult;
+import org.neo4j.driver.v1.Record;
+import org.neo4j.driver.v1.summary.ResultSummary;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.mockito.Answers;
+import org.mockito.Mock;
+import org.mockito.Mockito;
+import org.mockito.junit.MockitoJUnit;
+import org.mockito.junit.MockitoRule;
+
+import java.io.File;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Neo4J Cypher unit tests.
+ */
+public class TestNeo4JCyperExecutor {
--- End diff --

Typo in class name. 


---


[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626289#comment-16626289
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r219953983
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+protected static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("neo4J-query")
+.displayName("Neo4J Query")
+.description("Specifies the Neo4j Query.")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CONNECTION_URL = new 
PropertyDescriptor.Builder()
+.name("neo4j-connection-url")
+.displayName("Neo4j Connection URL")
+.description("Neo4J endpoint to connect to.")
+.required(true)
+.defaultValue("bolt://localhost:7687")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("neo4j-username")
+.displayName("Username")
+.description("Username for accessing Neo4J")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
+.name("neo4j-password")
+.displayName("Password")
+.description("Password for Neo4J user")
+.required(true)
+.sensitive(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_ROUND_ROBIN = new 
AllowableValue(LoadBalancingStrategy.ROUND_ROBIN.name(), "Round Robin", "Round 
Robin Strategy");
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_LEAST_CONNECTED 

[GitHub] nifi pull request #2956: NIFI-5537 Create Neo4J cypher execution processor

2018-09-24 Thread alopresto
Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r219953983
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+protected static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("neo4J-query")
+.displayName("Neo4J Query")
+.description("Specifies the Neo4j Query.")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CONNECTION_URL = new 
PropertyDescriptor.Builder()
+.name("neo4j-connection-url")
+.displayName("Neo4j Connection URL")
+.description("Neo4J endpoint to connect to.")
+.required(true)
+.defaultValue("bolt://localhost:7687")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("neo4j-username")
+.displayName("Username")
+.description("Username for accessing Neo4J")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
+.name("neo4j-password")
+.displayName("Password")
+.description("Password for Neo4J user")
+.required(true)
+.sensitive(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_ROUND_ROBIN = new 
AllowableValue(LoadBalancingStrategy.ROUND_ROBIN.name(), "Round Robin", "Round 
Robin Strategy");
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_LEAST_CONNECTED = 
new AllowableValue(LoadBalancingStrategy.LEAST_CONNECTED.name(), "Least 
Connected", "Least Connected Strategy");
+
+protected static final PropertyDescriptor LOAD_BALANCING_STRATEGY = 
new PropertyDescriptor.Builder()
   

[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626285#comment-16626285
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r219953428
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+protected static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("neo4J-query")
+.displayName("Neo4J Query")
+.description("Specifies the Neo4j Query.")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CONNECTION_URL = new 
PropertyDescriptor.Builder()
+.name("neo4j-connection-url")
+.displayName("Neo4j Connection URL")
+.description("Neo4J endpoint to connect to.")
+.required(true)
+.defaultValue("bolt://localhost:7687")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("neo4j-username")
+.displayName("Username")
+.description("Username for accessing Neo4J")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
+.name("neo4j-password")
+.displayName("Password")
+.description("Password for Neo4J user")
+.required(true)
+.sensitive(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_ROUND_ROBIN = new 
AllowableValue(LoadBalancingStrategy.ROUND_ROBIN.name(), "Round Robin", "Round 
Robin Strategy");
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_LEAST_CONNECTED 

[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626284#comment-16626284
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r219953286
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+protected static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("neo4J-query")
+.displayName("Neo4J Query")
+.description("Specifies the Neo4j Query.")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CONNECTION_URL = new 
PropertyDescriptor.Builder()
+.name("neo4j-connection-url")
+.displayName("Neo4j Connection URL")
+.description("Neo4J endpoint to connect to.")
+.required(true)
+.defaultValue("bolt://localhost:7687")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("neo4j-username")
+.displayName("Username")
+.description("Username for accessing Neo4J")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
+.name("neo4j-password")
+.displayName("Password")
+.description("Password for Neo4J user")
+.required(true)
+.sensitive(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_ROUND_ROBIN = new 
AllowableValue(LoadBalancingStrategy.ROUND_ROBIN.name(), "Round Robin", "Round 
Robin Strategy");
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_LEAST_CONNECTED 

[GitHub] nifi pull request #2956: NIFI-5537 Create Neo4J cypher execution processor

2018-09-24 Thread alopresto
Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r219953129
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+protected static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("neo4J-query")
+.displayName("Neo4J Query")
+.description("Specifies the Neo4j Query.")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CONNECTION_URL = new 
PropertyDescriptor.Builder()
+.name("neo4j-connection-url")
+.displayName("Neo4j Connection URL")
+.description("Neo4J endpoint to connect to.")
+.required(true)
+.defaultValue("bolt://localhost:7687")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("neo4j-username")
+.displayName("Username")
+.description("Username for accessing Neo4J")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
+.name("neo4j-password")
+.displayName("Password")
+.description("Password for Neo4J user")
+.required(true)
+.sensitive(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_ROUND_ROBIN = new 
AllowableValue(LoadBalancingStrategy.ROUND_ROBIN.name(), "Round Robin", "Round 
Robin Strategy");
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_LEAST_CONNECTED = 
new AllowableValue(LoadBalancingStrategy.LEAST_CONNECTED.name(), "Least 
Connected", "Least Connected Strategy");
+
+protected static final PropertyDescriptor LOAD_BALANCING_STRATEGY = 
new PropertyDescriptor.Builder()
   

[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626280#comment-16626280
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r219952904
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
[GitHub] nifi pull request #2956: NIFI-5537 Create Neo4J cypher execution processor

2018-09-24 Thread alopresto
Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r219952904
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+protected static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("neo4J-query")
+.displayName("Neo4J Query")
+.description("Specifies the Neo4j Query.")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CONNECTION_URL = new 
PropertyDescriptor.Builder()
+.name("neo4j-connection-url")
+.displayName("Neo4j Connection URL")
+.description("Neo4J endpoint to connect to.")
+.required(true)
+.defaultValue("bolt://localhost:7687")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("neo4j-username")
+.displayName("Username")
+.description("Username for accessing Neo4J")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
+.name("neo4j-password")
+.displayName("Password")
+.description("Password for Neo4J user")
+.required(true)
+.sensitive(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
--- End diff --

Our policy so far has been that passwords do not support expression 
language, for a couple of reasons:

* How to evaluate whether a password `abc${def}` should be interpreted as `abc` 
plus the value of `def`, or as the literal string `abc${def}`
* The variable registry is not designed to store sensitive values securely, 
so if a password is stored here, it can be accessed by an unauthorized user
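
The first concern can be shown with a minimal `${name}` expander. This is a hypothetical sketch, not NiFi's actual Expression Language engine; the class and method names are invented for illustration:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal ${name} substitution, illustrating why a password such as
// "abc${def}" is ambiguous: is it the literal string, or "abc" plus
// the value of the variable "def"? Hypothetical sketch only.
public class ElAmbiguity {
    private static final Pattern REF = Pattern.compile("\\$\\{([^}]+)\\}");

    static String expand(String raw, Map<String, String> vars) {
        Matcher m = REF.matcher(raw);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // An unknown variable is kept verbatim here; a real engine
            // must decide whether that silently changes the password.
            String val = vars.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(val));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // With def=XYZ the two readings yield different passwords.
        System.out.println(expand("abc${def}", Map.of("def", "XYZ"))); // abcXYZ
        System.out.println(expand("abc${def}", Map.of()));             // abc${def}
    }
}
```

Whichever reading an engine picks, some user's literal password breaks, which is one reason keeping sensitive properties out of expression language avoids the question entirely.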


---


[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626215#comment-16626215
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r219939777
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+protected static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("neo4J-query")
+.displayName("Neo4J Query")
+.description("Specifies the Neo4j Query.")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CONNECTION_URL = new 
PropertyDescriptor.Builder()
+.name("neo4j-connection-url")
+.displayName("Neo4j Connection URL")
+.description("Neo4J endpoint to connect to.")
+.required(true)
+.defaultValue("bolt://localhost:7687")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("neo4j-username")
+.displayName("Username")
+.description("Username for accessing Neo4J")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
--- End diff --

Needs a notice on how to use it if authentication is disabled.


> Create Neo4J cypher execution processor
> ---
>
> Key: NIFI-5537
> URL: https://issues.apache.org/jira/browse/NIFI-5537
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: graph, neo4j, node, relationship
> Fix For: 1.8.0
>
>
> Create Nifi Neo4J cypher 

[GitHub] nifi pull request #2956: NIFI-5537 Create Neo4J cypher execution processor

2018-09-24 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r219939777
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+protected static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("neo4J-query")
+.displayName("Neo4J Query")
+.description("Specifies the Neo4j Query.")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CONNECTION_URL = new 
PropertyDescriptor.Builder()
+.name("neo4j-connection-url")
+.displayName("Neo4j Connection URL")
+.description("Neo4J endpoint to connect to.")
+.required(true)
+.defaultValue("bolt://localhost:7687")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("neo4j-username")
+.displayName("Username")
+.description("Username for accessing Neo4J")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
--- End diff --

Needs a notice on how to use it if authentication is disabled.


---


[jira] [Created] (NIFI-5629) GetFile becomes slow listing vast directories

2018-09-24 Thread Adam (JIRA)
Adam created NIFI-5629:
--

 Summary: GetFile becomes slow listing vast directories
 Key: NIFI-5629
 URL: https://issues.apache.org/jira/browse/NIFI-5629
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.6.0
Reporter: Adam


GetFile repeatedly lists the entire directory before applying batching, meaning 
that for vast directories it spends a long time building each listing.

 

Pull request to follow.
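
The shape of the fix can be sketched as an early-terminating listing that stops once a batch is full, instead of materializing the whole directory first. This is a hypothetical illustration using `java.nio`, not the actual GetFile patch:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Stops reading directory entries once the batch is full, so a vast
// directory costs only O(batchSize) per poll rather than a full listing.
// Hypothetical sketch; the real change is in the referenced pull request.
public class BatchedListing {
    static List<Path> listBatch(Path dir, int batchSize) throws IOException {
        List<Path> batch = new ArrayList<>(batchSize);
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path entry : stream) {
                batch.add(entry);
                if (batch.size() >= batchSize) {
                    break; // early exit: never enumerate the whole directory
                }
            }
        }
        return batch;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("batch");
        for (int i = 0; i < 5; i++) {
            Files.createFile(tmp.resolve("f" + i));
        }
        System.out.println(listBatch(tmp, 2).size()); // 2
    }
}
```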



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5628) Verify content length on replicated requests

2018-09-24 Thread Andy LoPresto (JIRA)
Andy LoPresto created NIFI-5628:
---

 Summary: Verify content length on replicated requests
 Key: NIFI-5628
 URL: https://issues.apache.org/jira/browse/NIFI-5628
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.7.1, 1.7.0, 1.6.0, 1.5.0
Reporter: Andy LoPresto
Assignee: Andy LoPresto


Verify the content-length of requests when replicated in the cluster. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3025: NIFI-5605 Added UpdateCassandra.

2018-09-24 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/3025
  
@zenfenan this is geared toward delete/update operations.


---


[jira] [Commented] (NIFI-5605) Create Cassandra update/delete processor

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626151#comment-16626151
 ] 

ASF GitHub Bot commented on NIFI-5605:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/3025
  
@zenfenan this is geared toward delete/update operations.


> Create Cassandra update/delete processor
> 
>
> Key: NIFI-5605
> URL: https://issues.apache.org/jira/browse/NIFI-5605
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> There should be a processor that can update and delete from Cassandra tables. 
> Where QueryCassandra is geared toward reading data, this should be geared 
> toward data mutations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5605) Create Cassandra update/delete processor

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626150#comment-16626150
 ] 

ASF GitHub Bot commented on NIFI-5605:
--

GitHub user MikeThomsen opened a pull request:

https://github.com/apache/nifi/pull/3025

NIFI-5605 Added UpdateCassandra.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MikeThomsen/nifi NIFI-5605

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3025.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3025


commit d9cb1ba54e672952e172241324f895c8dbf0f7f1
Author: Mike Thomsen 
Date:   2018-09-19T01:14:29Z

NIFI-5605 Added UpdateCassandra.




> Create Cassandra update/delete processor
> 
>
> Key: NIFI-5605
> URL: https://issues.apache.org/jira/browse/NIFI-5605
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> There should be a processor that can update and delete from Cassandra tables. 
> Where QueryCassandra is geared toward reading data, this should be geared 
> toward data mutations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3025: NIFI-5605 Added UpdateCassandra.

2018-09-24 Thread MikeThomsen
GitHub user MikeThomsen opened a pull request:

https://github.com/apache/nifi/pull/3025

NIFI-5605 Added UpdateCassandra.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MikeThomsen/nifi NIFI-5605

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3025.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3025


commit d9cb1ba54e672952e172241324f895c8dbf0f7f1
Author: Mike Thomsen 
Date:   2018-09-19T01:14:29Z

NIFI-5605 Added UpdateCassandra.




---


[jira] [Created] (NIFI-5627) Improve handling of sensitive properties

2018-09-24 Thread Andy LoPresto (JIRA)
Andy LoPresto created NIFI-5627:
---

 Summary: Improve handling of sensitive properties
 Key: NIFI-5627
 URL: https://issues.apache.org/jira/browse/NIFI-5627
 Project: Apache NiFi
  Issue Type: Epic
  Components: Configuration, Core Framework
Reporter: Andy LoPresto
Assignee: Andy LoPresto


There are a number of disparate issues around the handling of _sensitive 
properties_. 

There should be a clear naming strategy to differentiate:
1. component properties that are sensitive ({{InvokeHTTP}} password, 
{{EncryptContent}} password etc.)
1. secret framework configuration values ({{nifi.sensitive.props.key}}, 
{{nifi.security.keystorePasswd}}, LDAP Manager password, etc.)

This epic regards the first. 

In addition:
* Sensitive component properties should be handled in Expression Language
* Sensitive component properties should be versionable in conjunction with NiFi 
Registry (this requires distributed key management)
* Dynamic property descriptors on components should be able to be marked as 
sensitive



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5605) Create Cassandra update/delete processor

2018-09-24 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5605:
---
Description: There should be a processor that can update and delete from 
Cassandra tables. Where QueryCassandra is geared toward reading data, this 
should be geared toward data mutations.  (was: There should be a processor that 
can delete from Cassandra tables.)
Summary: Create Cassandra update/delete processor  (was: Create 
Cassandra delete processor)

> Create Cassandra update/delete processor
> 
>
> Key: NIFI-5605
> URL: https://issues.apache.org/jira/browse/NIFI-5605
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> There should be a processor that can update and delete from Cassandra tables. 
> Where QueryCassandra is geared toward reading data, this should be geared 
> toward data mutations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5595) Add filter to template endpoint

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626071#comment-16626071
 ] 

ASF GitHub Bot commented on NIFI-5595:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/3024
  
Reviewing...


> Add filter to template endpoint
> ---
>
> Key: NIFI-5595
> URL: https://issues.apache.org/jira/browse/NIFI-5595
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Major
>
> The template endpoint needs a CORS filter applied.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3024: NIFI-5595 - Added the CORS filter to the templates/upload ...

2018-09-24 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/3024
  
Reviewing...


---


[jira] [Commented] (NIFI-5595) Add filter to template endpoint

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16625952#comment-16625952
 ] 

ASF GitHub Bot commented on NIFI-5595:
--

GitHub user thenatog opened a pull request:

https://github.com/apache/nifi/pull/3024

NIFI-5595 - Added the CORS filter to the templates/upload endpoint us…

…ing a URL matcher.

NIFI-5595 - Explicitly allow methods GET, HEAD. These are the Spring 
defaults when the allowedMethods is empty but now it is explicit. This will 
require other methods like POST etc to be from the same origin (for the 
template/upload URL).

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/thenatog/nifi NIFI-5595-rebased

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3024.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3024


commit dca24d4f2d25f583dd3177a8c376dca32651d04d
Author: thenatog 
Date:   2018-09-14T01:45:00Z

NIFI-5595 - Added the CORS filter to the templates/upload endpoint using a 
URL matcher.

NIFI-5595 - Explicitly allow methods GET, HEAD. These are the Spring 
defaults when the allowedMethods is empty but now it is explicit. This will 
require other methods like POST etc to be from the same origin (for the 
template/upload URL).




> Add filter to template endpoint
> ---
>
> Key: NIFI-5595
> URL: https://issues.apache.org/jira/browse/NIFI-5595
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Major
>
> The template endpoint needs a CORS filter applied.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3024: NIFI-5595 - Added the CORS filter to the templates/...

2018-09-24 Thread thenatog
GitHub user thenatog opened a pull request:

https://github.com/apache/nifi/pull/3024

NIFI-5595 - Added the CORS filter to the templates/upload endpoint us…

…ing a URL matcher.

NIFI-5595 - Explicitly allow methods GET, HEAD. These are the Spring 
defaults when the allowedMethods is empty but now it is explicit. This will 
require other methods like POST etc to be from the same origin (for the 
template/upload URL).

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/thenatog/nifi NIFI-5595-rebased

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3024.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3024


commit dca24d4f2d25f583dd3177a8c376dca32651d04d
Author: thenatog 
Date:   2018-09-14T01:45:00Z

NIFI-5595 - Added the CORS filter to the templates/upload endpoint using a 
URL matcher.

NIFI-5595 - Explicitly allow methods GET, HEAD. These are the Spring 
defaults when the allowedMethods is empty but now it is explicit. This will 
require other methods like POST etc to be from the same origin (for the 
template/upload URL).




---


[jira] [Commented] (NIFI-5514) MergeRecord does not create a merged FlowFile until a maximum threshold is reached

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16625830#comment-16625830
 ] 

ASF GitHub Bot commented on NIFI-5514:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2954#discussion_r219837666
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/MergeRecord.java ---
@@ -304,13 +336,25 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
            session.commit();
        }

+       // If there is no more data queued up, complete any bin that meets our minimum threshold
+       int completedBins = 0;
+       final QueueSize queueSize = session.getQueueSize();
--- End diff --

@bdesert that's a great catch! I have pushed a new commit that updates the 
code like you suggested, to just check the size of the flowFiles list. I also 
created NIFI-5626 to address the inconsistency between MockProcessSession and 
StandardProcessSession.
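The fix described in the comment above — completing minimum-threshold bins based on how many FlowFiles the current invocation actually pulled, rather than on session.getQueueSize() — can be sketched as follows. This is a toy illustration with invented names, not the actual MergeRecord code:

```java
import java.util.Collections;
import java.util.List;

class BinCompletionSketch {
    // Hypothetical stand-in for the check in MergeRecord.onTrigger: if this
    // invocation pulled no FlowFiles, the processor has drained everything it
    // can currently see, so bins that already meet the minimum threshold may
    // be completed without waiting for the maximum threshold.
    static boolean completeMinimumThresholdBins(List<String> flowFilesPulled) {
        return flowFilesPulled.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(completeMinimumThresholdBins(Collections.emptyList())); // true
        System.out.println(completeMinimumThresholdBins(List.of("flowfile-1")));   // false
    }
}
```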


> MergeRecord does not create a merged FlowFile until a maximum threshold is 
> reached
> --
>
> Key: NIFI-5514
> URL: https://issues.apache.org/jira/browse/NIFI-5514
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> MergeRecord allows the user to specify a minimum number of records. However, 
> even when the minimum number of records is reached, the merged FlowFile is not 
> created unless the maximum number of records or the maximum number of bytes is 
> also reached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5626) MockProcessSession's getQueueSize() inconsistent with StandardProcessSession's

2018-09-24 Thread Mark Payne (JIRA)
Mark Payne created NIFI-5626:


 Summary: MockProcessSession's getQueueSize() inconsistent with 
StandardProcessSession's
 Key: NIFI-5626
 URL: https://issues.apache.org/jira/browse/NIFI-5626
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Mark Payne


When calling StandardProcessSession.getQueueSize(), it returns the size of the 
incoming queues, including any FlowFiles that are held by the Processor. 
However, MockProcessSession does not count FlowFiles held by the Processor. As 
a result, a processor can pass a unit test but behave differently in 
production. For example, if a processor calls:
{code:java}
FlowFile flowFile = session.get();
if (flowFile != null) {
// Process FlowFile
}

QueueSize queueSize = session.getQueueSize();
if (queueSize.getObjectCount() == 0) {
// Perform some logic now that the queue is empty
}{code}
In a unit test, if a single FlowFile is enqueued and the Processor is then 
triggered, QueueSize.getObjectCount() will be 0. 
However, in production, QueueSize.getObjectCount() will be 1, because the 
Processor is still holding the FlowFile.
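The inconsistency described above can be modeled in isolation. This is a toy sketch with invented names — the real StandardProcessSession and MockProcessSession are far richer — but it reproduces the counting difference:

```java
import java.util.ArrayDeque;
import java.util.Deque;

class QueueSizeSemanticsDemo {
    private final Deque<String> queued = new ArrayDeque<>();
    private final Deque<String> held = new ArrayDeque<>();

    void enqueue(String flowFile) { queued.add(flowFile); }

    // Models session.get(): the FlowFile moves from the incoming queue into
    // the processor's hands until the session is committed.
    String get() {
        String ff = queued.poll();
        if (ff != null) held.push(ff);
        return ff;
    }

    // StandardProcessSession semantics: FlowFiles held by the processor count.
    int standardQueueSize() { return queued.size() + held.size(); }

    // MockProcessSession semantics (the bug reported here): held FlowFiles excluded.
    int mockQueueSize() { return queued.size(); }

    public static void main(String[] args) {
        QueueSizeSemanticsDemo session = new QueueSizeSemanticsDemo();
        session.enqueue("flowfile-1");
        session.get(); // processor is now holding the FlowFile
        System.out.println(session.standardQueueSize()); // 1 -> production behavior
        System.out.println(session.mockQueueSize());     // 0 -> unit-test behavior
    }
}
```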






[jira] [Updated] (NIFI-5588) Unable to set indefinite max wait time on DBCPConnectionPool

2018-09-24 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-5588:
-
Component/s: (was: Core Framework)
 Extensions

> Unable to set indefinite max wait time on DBCPConnectionPool
> 
>
> Key: NIFI-5588
> URL: https://issues.apache.org/jira/browse/NIFI-5588
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: macOS, Java 8
>Reporter: Colin Dean
>Assignee: Pierre Villard
>Priority: Major
>
> The DBCPConnectionPool controller service accepts a "Max Wait Time" that 
> configures 
> bq. The maximum amount of time that the pool will wait (when there are no 
> available connections) for a connection to be returned before failing, or -1 
> to wait indefinitely. 
> This value must validate as a time period. *There is no valid way to set 
> {{-1}}* with the current validator.
> The validator [in 
> use|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java#L110]
>  is {{StandardValidators.TIME_PERIOD_VALIDATOR}}. The 
> [TIME_PERIOD_VALIDATOR|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/processor/util/StandardValidators.java#L443]
>  uses [a regex built in 
> FormatUtils|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/FormatUtils.java#L44]
>  that must have a time unit:
> {code:java}
> public static final String TIME_DURATION_REGEX = "(\\d+)\\s*(" + 
> VALID_TIME_UNITS + ")";
> {code}
> The regex does not allow for a value such as {{-1}} or {{-1 secs}}, etc.
> The obvious workaround is to set that _very_ high.
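A minimal sketch confirming the rejection, using the regex shape quoted above (the unit list is abbreviated here; the real VALID_TIME_UNITS in FormatUtils is much longer):

```java
import java.util.regex.Pattern;

class TimeDurationRegexDemo {
    // Abbreviated stand-in for FormatUtils.VALID_TIME_UNITS
    static final String VALID_TIME_UNITS =
            "ms|milli|millis|s|sec|secs|second|seconds|min|mins|hour|hours|day|days";
    // Same shape as the quoted TIME_DURATION_REGEX: \d+ admits no sign character,
    // so a leading "-" can never match.
    static final String TIME_DURATION_REGEX = "(\\d+)\\s*(" + VALID_TIME_UNITS + ")";

    public static void main(String[] args) {
        Pattern p = Pattern.compile(TIME_DURATION_REGEX);
        System.out.println(p.matcher("500 millis").matches()); // true
        System.out.println(p.matcher("-1").matches());         // false
        System.out.println(p.matcher("-1 secs").matches());    // false
    }
}
```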





[jira] [Updated] (NIFI-5591) Enable compression of Avro in ExecuteSQL

2018-09-24 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-5591:
-
Component/s: (was: Core Framework)
 Extensions

> Enable compression of Avro in ExecuteSQL
> 
>
> Key: NIFI-5591
> URL: https://issues.apache.org/jira/browse/NIFI-5591
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: macOS, Java 8
>Reporter: Colin Dean
>Assignee: Pierre Villard
>Priority: Major
>  Labels: ExecuteSQL, avro, compression
>
> The Avro stream that comes out of the ExecuteSQL processor is uncompressed. 
> It's possible to rewrite it compressed using a combination of ConvertRecord 
> processor with AvroReader and AvroRecordSetWriter, but that's a lot of extra 
> I/O that could be handled transparently at the moment that the Avro data is 
> created.
> For implementation, it looks like ExecuteSQL builds a set of 
> {{JdbcCommon.AvroConvertionOptions}}[here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ExecuteSQL.java#L246].
>  That options object would need to gain a compression flag. Then, within 
> {{JdbcCommon#convertToAvroStream}} 
> [here|https://github.com/apache/nifi/blob/0dd4a91a6741eec04965a260c8aff38b72b3828d/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/JdbcCommon.java#L281],
>  the {{dataFileWriter}} would get a codec set by {{setCodec}}, with the codec 
> having been created shortly before.
> For an example of creating the codec, I looked at how the AvroRecordSetWriter 
> does it. The {{setCodec()}} is performed 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/WriteAvroResultWithSchema.java#L44]
>  after the codec is created by factory option 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroRecordSetWriter.java#L104]
>  using a factory method 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroRecordSetWriter.java#L137].





[jira] [Updated] (NIFI-5591) Enable compression of Avro in ExecuteSQL

2018-09-24 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-5591:
-
Status: Patch Available  (was: Open)

> Enable compression of Avro in ExecuteSQL
> 
>
> Key: NIFI-5591
> URL: https://issues.apache.org/jira/browse/NIFI-5591
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: macOS, Java 8
>Reporter: Colin Dean
>Assignee: Pierre Villard
>Priority: Major
>  Labels: ExecuteSQL, avro, compression
>
> The Avro stream that comes out of the ExecuteSQL processor is uncompressed. 
> It's possible to rewrite it compressed using a combination of ConvertRecord 
> processor with AvroReader and AvroRecordSetWriter, but that's a lot of extra 
> I/O that could be handled transparently at the moment that the Avro data is 
> created.
> For implementation, it looks like ExecuteSQL builds a set of 
> {{JdbcCommon.AvroConvertionOptions}}[here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ExecuteSQL.java#L246].
>  That options object would need to gain a compression flag. Then, within 
> {{JdbcCommon#convertToAvroStream}} 
> [here|https://github.com/apache/nifi/blob/0dd4a91a6741eec04965a260c8aff38b72b3828d/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/JdbcCommon.java#L281],
>  the {{dataFileWriter}} would get a codec set by {{setCodec}}, with the codec 
> having been created shortly before.
> For an example of creating the codec, I looked at how the AvroRecordSetWriter 
> does it. The {{setCodec()}} is performed 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/WriteAvroResultWithSchema.java#L44]
>  after the codec is created by factory option 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroRecordSetWriter.java#L104]
>  using a factory method 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroRecordSetWriter.java#L137].





[jira] [Commented] (NIFI-5591) Enable compression of Avro in ExecuteSQL

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16625513#comment-16625513
 ] 

ASF GitHub Bot commented on NIFI-5591:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/3023

NIFI-5591 - Added avro compression format to ExecuteSQL

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-5591

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3023.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3023


commit 30ad5d306123b9f99a9327aeac77727ca6d6490d
Author: Pierre Villard 
Date:   2018-09-23T19:42:26Z

NIFI-5591 - Added avro compression format to ExecuteSQL




> Enable compression of Avro in ExecuteSQL
> 
>
> Key: NIFI-5591
> URL: https://issues.apache.org/jira/browse/NIFI-5591
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.1
> Environment: macOS, Java 8
>Reporter: Colin Dean
>Assignee: Pierre Villard
>Priority: Major
>  Labels: ExecuteSQL, avro, compression
>
> The Avro stream that comes out of the ExecuteSQL processor is uncompressed. 
> It's possible to rewrite it compressed using a combination of ConvertRecord 
> processor with AvroReader and AvroRecordSetWriter, but that's a lot of extra 
> I/O that could be handled transparently at the moment that the Avro data is 
> created.
> For implementation, it looks like ExecuteSQL builds a set of 
> {{JdbcCommon.AvroConvertionOptions}}[here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ExecuteSQL.java#L246].
>  That options object would need to gain a compression flag. Then, within 
> {{JdbcCommon#convertToAvroStream}} 
> [here|https://github.com/apache/nifi/blob/0dd4a91a6741eec04965a260c8aff38b72b3828d/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/JdbcCommon.java#L281],
>  the {{dataFileWriter}} would get a codec set by {{setCodec}}, with the codec 
> having been created shortly before.
> For an example of creating the codec, I looked at how the AvroRecordSetWriter 
> does it. The {{setCodec()}} is performed 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/WriteAvroResultWithSchema.java#L44]
>  after the codec is created by factory option 
> 


[jira] [Updated] (NIFI-5588) Unable to set indefinite max wait time on DBCPConnectionPool

2018-09-24 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-5588:
-
Status: Patch Available  (was: Open)

> Unable to set indefinite max wait time on DBCPConnectionPool
> 
>
> Key: NIFI-5588
> URL: https://issues.apache.org/jira/browse/NIFI-5588
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.7.1
> Environment: macOS, Java 8
>Reporter: Colin Dean
>Assignee: Pierre Villard
>Priority: Major
>
> The DBCPConnectionPool controller service accepts a "Max Wait Time" that 
> configures 
> bq. The maximum amount of time that the pool will wait (when there are no 
> available connections) for a connection to be returned before failing, or -1 
> to wait indefinitely. 
> This value must validate as a time period. *There is no valid way to set 
> {{-1}}* with the current validator.
> The validator [in 
> use|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java#L110]
>  is {{StandardValidators.TIME_PERIOD_VALIDATOR}}. The 
> [TIME_PERIOD_VALIDATOR|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/processor/util/StandardValidators.java#L443]
>  uses [a regex built in 
> FormatUtils|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/FormatUtils.java#L44]
>  that must have a time unit:
> {code:java}
> public static final String TIME_DURATION_REGEX = "(\\d+)\\s*(" + 
> VALID_TIME_UNITS + ")";
> {code}
> The regex does not allow for a value such as {{-1}} or {{-1 secs}}, etc.
> The obvious workaround is to set that _very_ high.





[jira] [Commented] (NIFI-5588) Unable to set indefinite max wait time on DBCPConnectionPool

2018-09-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16625509#comment-16625509
 ] 

ASF GitHub Bot commented on NIFI-5588:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/3022

NIFI-5588 - Fix max wait time in DBCP Connection Pool

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-5588

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3022.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3022


commit b8978480c64a6e4d6242caba51a562defd290fd7
Author: Pierre Villard 
Date:   2018-09-23T20:00:36Z

NIFI-5588 - Fix max wait time in DBCP Connection Pool




> Unable to set indefinite max wait time on DBCPConnectionPool
> 
>
> Key: NIFI-5588
> URL: https://issues.apache.org/jira/browse/NIFI-5588
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.7.1
> Environment: macOS, Java 8
>Reporter: Colin Dean
>Assignee: Pierre Villard
>Priority: Major
>
> The DBCPConnectionPool controller service accepts a "Max Wait Time" that 
> configures 
> bq. The maximum amount of time that the pool will wait (when there are no 
> available connections) for a connection to be returned before failing, or -1 
> to wait indefinitely. 
> This value must validate as a time period. *There is no valid way to set 
> {{-1}}* with the current validator.
> The validator [in 
> use|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java#L110]
>  is {{StandardValidators.TIME_PERIOD_VALIDATOR}}. The 
> [TIME_PERIOD_VALIDATOR|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/processor/util/StandardValidators.java#L443]
>  uses [a regex built in 
> FormatUtils|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/FormatUtils.java#L44]
>  that must have a time unit:
> {code:java}
> public static final String TIME_DURATION_REGEX = "(\\d+)\\s*(" + 
> VALID_TIME_UNITS + ")";
> {code}
> The regex does not allow for a value such as {{-1}} or {{-1 secs}}, etc.
> The obvious workaround is to set that _very_ high.




