[jira] [Issue Comment Deleted] (NIFI-5018) basic snap-to-grid feature for UI

2018-09-27 Thread Ryan Bower (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Bower updated NIFI-5018:
-
Comment: was deleted

(was: [~joewitt] is this an ok place to inquire about some of the canvas 
component details?

My grid snap works, but the edges aren't consistent with canvas components of 
differing dimensions.  I may be able to normalize the rounding and account for 
the width of the component, but I'm not sure.  Is there documentation of the 
default dimensions for each canvas component? )

> basic snap-to-grid feature for UI
> -
>
> Key: NIFI-5018
> URL: https://issues.apache.org/jira/browse/NIFI-5018
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.5.0, 1.6.0
> Environment: Tested on Windows
>Reporter: Ryan Bower
>Priority: Minor
>  Labels: web-ui
>   Original Estimate: 0.25h
>  Remaining Estimate: 0.25h
>
> NiFi 1.2.0 introduced the flow alignment feature, described as follows:
> *NiFi 1.2.0 has a nice, new little feature that will surely please those who 
> may spend a bit of time – for some, perhaps A LOT of time – getting their 
> flow to line up perfectly. The background grid lines can help with this, but 
> the process can still be quite tedious with many components. Now there is a 
> quick, easy way.*
> I've made a slight modification to the UI (roughly 5 lines) that results in 
> a "snap-to-grid" for selected components.  See [this 
> video|https://www.youtube.com/watch?v=S7lnBMMO6KE&feature=youtu.be] for an 
> example of it in action.
> Target file: 
> nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js
> The processor alignment is based on rounding the component's X and Y 
> coordinates during the drag event.  The result is a consistent "snap" 
> alignment.  I modified the following code to achieve this:
> {code:javascript}
> // previous code...
>
> (this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon,
>                  nfDialog, nfClient, nfErrorHandler) {
>     'use strict';
>
>     var nfCanvas;
>     var drag;
>
>     // added for the snap-to-grid feature
>     var snapTo = 16;
>
>     // code...
>
>     var nfDraggable = {
>
>         // more code...
>
>         if (dragSelection.empty()) {
>
>             // more code...
>
>         } else {
>             // update the position of the drag selection
>             dragSelection.attr('x', function (d) {
>                 d.x += d3.event.dx;
>                 // rounding the result achieves the "snap" alignment
>                 return (Math.round(d.x / snapTo) * snapTo);
>             })
>             .attr('y', function (d) {
>                 d.y += d3.event.dy;
>                 return (Math.round(d.y / snapTo) * snapTo);
>             });
>         }
>
>         // more code...
>
>         updateComponentPosition: function (d, delta) {
>             // perform the same rounding when updating the component's position
>             var newPosition = {
>                 'x': (Math.round((d.position.x + delta.x) / snapTo) * snapTo),
>                 'y': (Math.round((d.position.y + delta.y) / snapTo) * snapTo)
>             };
>             // more code...
>         }
> {code}
>  
> The downside of this is that components must start aligned in order to snap 
> to the same alignment on the canvas.  To remedy this, just use the 1.2.0 flow 
> alignment feature.  Note: this is only an issue for old, unaligned flows.  
> New flows and aligned flows don't have this problem.
>  
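The rounding trick in the excerpt above is plain nearest-multiple rounding. A standalone sketch of the same arithmetic (the class and method names here are illustrative, not part of NiFi):

```java
// Standalone illustration of the snap-to-grid rounding used above:
// a coordinate is rounded to the nearest multiple of the grid spacing.
public class SnapToGrid {
    // Grid spacing in pixels, matching snapTo = 16 in the excerpt.
    private static final int SNAP_TO = 16;

    // Round a coordinate to the nearest multiple of SNAP_TO.
    public static long snap(double coordinate) {
        return Math.round(coordinate / SNAP_TO) * SNAP_TO;
    }

    public static void main(String[] args) {
        System.out.println(snap(23.0)); // 23/16 = 1.44, rounds to 1 -> prints 16
        System.out.println(snap(25.0)); // 25/16 = 1.56, rounds to 2 -> prints 32
    }
}
```

A component dragged to x = 23 therefore lands on the 16-pixel grid line, which is the "snap" behavior shown in the video.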



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5585) Prepare Nodes to be Decommissioned from Cluster

2018-09-27 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5585:
--
Description: 
Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the 
node to be decommissioned to the other active nodes.  This work depends on 
NIFI-5516.

Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state.
OFFLOADING nodes will transition to the OFFLOADED state once all flowfiles have 
been rebalanced to other connected nodes in the cluster.
OFFLOADING nodes that remain in the OFFLOADING state (due to errors encountered 
while offloading) can be reconnected to the cluster by restarting NiFi on the 
node.
OFFLOADED nodes can be reconnected to the cluster by issuing a connection 
request via the UI/CLI, or restarting NiFi on the node.
OFFLOADED nodes can be deleted from the cluster.

OFFLOADING a node:
* stops all processors
* terminates all processors
* stops transmitting on all remote process groups
* rebalances flowfiles to other connected nodes in the cluster (via the work 
done in NIFI-5516)

The steps to decommission a node and remove it from the cluster are:
 # Send a request to disconnect the node.
 # Once the disconnect completes, send a request to offload the node.
 # Once the offload completes, send a request to delete the node.
 # Once the delete request has finished, the NiFi service on the host can be 
stopped/removed.

When an error occurs and the node cannot complete offloading, the user can:
 # Send request to delete the node from the cluster
 # Diagnose why the node had issues with the offload (out of memory, no network 
connection, etc) and address the issue
 # Restart NiFi on the node so that it will reconnect to the cluster
 # Go through the steps to decommission a node

The OFFLOADING request is idempotent.

Toolkit CLI commands have been added for retrieving a single node or the list of 
nodes, and for connecting, disconnecting, offloading, and deleting nodes.

The cluster table UI has an icon to initiate the OFFLOADING of a DISCONNECTED 
node.

Similar to the client sending a PUT request with a DISCONNECTING message to 
cluster/nodes/\{id}, an OFFLOADING message can be sent as a PUT request to the 
same URI to initiate an OFFLOAD for a DISCONNECTED node.
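The state rules in this description can be summarized as a small transition table. The sketch below only models the rules as written here; the class and method names are hypothetical, not NiFi's actual implementation:

```java
// Illustrative model of the node-state rules described above.
public class NodeStateModel {
    public enum State { CONNECTED, DISCONNECTED, OFFLOADING, OFFLOADED }

    // Returns true if the transition is allowed by the rules above.
    public static boolean canTransition(State from, State to) {
        switch (to) {
            case OFFLOADING:
                // only DISCONNECTED nodes can start offloading; the request
                // is idempotent, so repeating it while OFFLOADING is allowed
                return from == State.DISCONNECTED || from == State.OFFLOADING;
            case OFFLOADED:
                // reached once all flowfiles are rebalanced to other nodes
                return from == State.OFFLOADING;
            case CONNECTED:
                // reconnect via a UI/CLI request or a NiFi restart, both for
                // OFFLOADED nodes and for nodes stuck in OFFLOADING
                return from == State.OFFLOADED || from == State.OFFLOADING;
            case DISCONNECTED:
                return from == State.CONNECTED;
            default:
                return false;
        }
    }
}
```

For example, canTransition(CONNECTED, OFFLOADING) is false, mirroring the requirement that a node be disconnected before it can be offloaded.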




> Prepare Nodes to be Decommissioned from Cluster
> ---
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing flowfiles on 
> the node to be decommissioned to the other active nodes. This work depends on 
> NIFI-5516.

[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631177#comment-16631177
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user mans2singh commented on the issue:

https://github.com/apache/nifi/pull/2956
  
@MikeThomsen - Regarding your graph output questions - can you please let 
me know which commands/files were ingested to produce the above output?


> Create Neo4J cypher execution processor
> ---
>
> Key: NIFI-5537
> URL: https://issues.apache.org/jira/browse/NIFI-5537
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: graph, neo4j, node, relationship
> Fix For: 1.8.0
>
>
> Create NiFi Neo4J cypher queries processor





[GitHub] nifi issue #2956: NIFI-5537 Create Neo4J cypher execution processor

2018-09-27 Thread mans2singh
Github user mans2singh commented on the issue:

https://github.com/apache/nifi/pull/2956
  
@MikeThomsen - Regarding your graph output questions - can you please let 
me know which commands/files were ingested to produce the above output?


---


[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631172#comment-16631172
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r221104284
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/test/java/org/apache/nifi/processors/neo4j/TestNeo4JCyperExecutor.java
 ---
@@ -0,0 +1,272 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.StatementResult;
+import org.neo4j.driver.v1.Record;
+import org.neo4j.driver.v1.summary.ResultSummary;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.mockito.Answers;
+import org.mockito.Mock;
+import org.mockito.Mockito;
+import org.mockito.junit.MockitoJUnit;
+import org.mockito.junit.MockitoRule;
+
+import java.io.File;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Neo4J Cypher unit tests.
+ */
+public class TestNeo4JCyperExecutor {
--- End diff --

Correcting.


> Create Neo4J cypher execution processor
> ---
>
> Key: NIFI-5537
> URL: https://issues.apache.org/jira/browse/NIFI-5537
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: graph, neo4j, node, relationship
> Fix For: 1.8.0
>
>
> Create NiFi Neo4J cypher queries processor





[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631170#comment-16631170
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r221104258
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+protected static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("neo4J-query")
+.displayName("Neo4J Query")
+.description("Specifies the Neo4j Query.")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CONNECTION_URL = new 
PropertyDescriptor.Builder()
+.name("neo4j-connection-url")
+.displayName("Neo4j Connection URL")
+.description("Neo4J endpoint to connect to.")
+.required(true)
+.defaultValue("bolt://localhost:7687")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("neo4j-username")
+.displayName("Username")
+.description("Username for accessing Neo4J")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
+.name("neo4j-password")
+.displayName("Password")
+.description("Password for Neo4J user")
+.required(true)
+.sensitive(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_ROUND_ROBIN = new 
AllowableValue(LoadBalancingStrategy.ROUND_ROBIN.name(), "Round Robin", "Round 
Robin Strategy");
+
+public static AllowableValue LOAD_BALANCING_STRATEG

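The diffs quoted in this thread declare processor properties through NiFi's fluent PropertyDescriptor.Builder. Below is a minimal, self-contained sketch of that builder idiom, with a hypothetical SimpleProperty class standing in for NiFi's actual PropertyDescriptor:

```java
// Hypothetical, simplified re-creation of the fluent builder idiom used by
// NiFi's PropertyDescriptor in the diffs above (not the real NiFi class).
public class SimpleProperty {
    private final String name;
    private final String displayName;
    private final boolean required;
    private final boolean sensitive;

    private SimpleProperty(Builder b) {
        this.name = b.name;
        this.displayName = b.displayName;
        this.required = b.required;
        this.sensitive = b.sensitive;
    }

    public String getName() { return name; }
    public String getDisplayName() { return displayName; }
    public boolean isRequired() { return required; }
    public boolean isSensitive() { return sensitive; }

    public static class Builder {
        private String name;
        private String displayName;
        private boolean required;
        private boolean sensitive;

        // Each setter returns the builder, allowing the chained style
        // seen in the quoted diffs.
        public Builder name(String name) { this.name = name; return this; }
        public Builder displayName(String d) { this.displayName = d; return this; }
        public Builder required(boolean r) { this.required = r; return this; }
        public Builder sensitive(boolean s) { this.sensitive = s; return this; }

        public SimpleProperty build() { return new SimpleProperty(this); }
    }

    public static void main(String[] args) {
        SimpleProperty password = new SimpleProperty.Builder()
                .name("neo4j-password")
                .displayName("Password")
                .required(true)
                .sensitive(true)   // sensitive values are masked in the UI
                .build();
        System.out.println(password.getName()); // prints neo4j-password
    }
}
```

The chained setters make each property declaration read as a single expression, which is why the quoted diffs stack one call per line.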
[GitHub] nifi pull request #2956: NIFI-5537 Create Neo4J cypher execution processor

2018-09-27 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r221104284
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/test/java/org/apache/nifi/processors/neo4j/TestNeo4JCyperExecutor.java
 ---
@@ -0,0 +1,272 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.StatementResult;
+import org.neo4j.driver.v1.Record;
+import org.neo4j.driver.v1.summary.ResultSummary;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.mockito.Answers;
+import org.mockito.Mock;
+import org.mockito.Mockito;
+import org.mockito.junit.MockitoJUnit;
+import org.mockito.junit.MockitoRule;
+
+import java.io.File;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Neo4J Cypher unit tests.
+ */
+public class TestNeo4JCyperExecutor {
--- End diff --

Correcting.


---


[GitHub] nifi pull request #2956: NIFI-5537 Create Neo4J cypher execution processor

2018-09-27 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r221104258
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+protected static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("neo4J-query")
+.displayName("Neo4J Query")
+.description("Specifies the Neo4j Query.")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CONNECTION_URL = new 
PropertyDescriptor.Builder()
+.name("neo4j-connection-url")
+.displayName("Neo4j Connection URL")
+.description("Neo4J endpoint to connect to.")
+.required(true)
+.defaultValue("bolt://localhost:7687")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("neo4j-username")
+.displayName("Username")
+.description("Username for accessing Neo4J")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
+.name("neo4j-password")
+.displayName("Password")
+.description("Password for Neo4J user")
+.required(true)
+.sensitive(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_ROUND_ROBIN = new 
AllowableValue(LoadBalancingStrategy.ROUND_ROBIN.name(), "Round Robin", "Round 
Robin Strategy");
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_LEAST_CONNECTED = 
new AllowableValue(LoadBalancingStrategy.LEAST_CONNECTED.name(), "Least 
Connected", "Least Connected Strategy");
+
+protected static final PropertyDescriptor LOAD_BALANCING_STRATEGY = 
new PropertyDescriptor.Builder()
  

[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631164#comment-16631164
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r221103863
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+protected static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("neo4J-query")
+.displayName("Neo4J Query")
+.description("Specifies the Neo4j Query.")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CONNECTION_URL = new 
PropertyDescriptor.Builder()
+.name("neo4j-connection-url")
+.displayName("Neo4j Connection URL")
+.description("Neo4J endpoint to connect to.")
+.required(true)
+.defaultValue("bolt://localhost:7687")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("neo4j-username")
+.displayName("Username")
+.description("Username for accessing Neo4J")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
+.name("neo4j-password")
+.displayName("Password")
+.description("Password for Neo4J user")
+.required(true)
+.sensitive(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_ROUND_ROBIN = new 
AllowableValue(LoadBalancingStrategy.ROUND_ROBIN.name(), "Round Robin", "Round 
Robin Strategy");
+
+public static AllowableValue LOAD_BALANCING_STRATEG

[GitHub] nifi pull request #2956: NIFI-5537 Create Neo4J cypher execution processor

2018-09-27 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r221103863
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+protected static final PropertyDescriptor QUERY = new 
PropertyDescriptor.Builder()
+.name("neo4J-query")
+.displayName("Neo4J Query")
+.description("Specifies the Neo4j Query.")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CONNECTION_URL = new 
PropertyDescriptor.Builder()
+.name("neo4j-connection-url")
+.displayName("Neo4j Connection URL")
+.description("Neo4J endpoint to connect to.")
+.required(true)
+.defaultValue("bolt://localhost:7687")
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("neo4j-username")
+.displayName("Username")
+.description("Username for accessing Neo4J")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
+.name("neo4j-password")
+.displayName("Password")
+.description("Password for Neo4J user")
+.required(true)
+.sensitive(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_ROUND_ROBIN = new 
AllowableValue(LoadBalancingStrategy.ROUND_ROBIN.name(), "Round Robin", "Round 
Robin Strategy");
+
+public static AllowableValue LOAD_BALANCING_STRATEGY_LEAST_CONNECTED = 
new AllowableValue(LoadBalancingStrategy.LEAST_CONNECTED.name(), "Least 
Connected", "Least Connected Strategy");
+
+protected static final PropertyDescriptor LOAD_BALANCING_STRATEGY = 
new PropertyDescriptor.Builder()
  

[GitHub] nifi pull request #2956: NIFI-5537 Create Neo4J cypher execution processor

2018-09-27 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r221103455
  
--- Diff: nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+    protected static final PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+            .name("neo4J-query")
+            .displayName("Neo4J Query")
+            .description("Specifies the Neo4j Query.")
+            .required(true)
+            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor CONNECTION_URL = new PropertyDescriptor.Builder()
+            .name("neo4j-connection-url")
+            .displayName("Neo4j Connection URL")
+            .description("Neo4J endpoint to connect to.")
+            .required(true)
+            .defaultValue("bolt://localhost:7687")
+            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor USERNAME = new PropertyDescriptor.Builder()
+            .name("neo4j-username")
+            .displayName("Username")
+            .description("Username for accessing Neo4J")
+            .required(true)
+            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
+            .name("neo4j-password")
+            .displayName("Password")
+            .description("Password for Neo4J user")
+            .required(true)
+            .sensitive(true)
+            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+
+    public static AllowableValue LOAD_BALANCING_STRATEGY_ROUND_ROBIN = new AllowableValue(LoadBalancingStrategy.ROUND_ROBIN.name(), "Round Robin", "Round Robin Strategy");
+
+    public static AllowableValue LOAD_BALANCING_STRATEGY_LEAST_CONNECTED = new AllowableValue(LoadBalancingStrategy.LEAST_CONNECTED.name(), "Least Connected", "Least Connected Strategy");
+
+    protected static final PropertyDescriptor LOAD_BALANCING_STRATEGY = new PropertyDescriptor.Builder()

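The descriptors in the diff above all follow the same fluent builder pattern: required metadata is accumulated through chained setters and validated once at `build()`. A minimal, self-contained sketch of that pattern follows; `SimpleDescriptor` and its `Builder` are illustrative stand-ins, not NiFi's actual `PropertyDescriptor` API.

```java
// Simplified stand-in for the PropertyDescriptor.Builder pattern used in the diff.
// Class and field names here are hypothetical, chosen only to mirror the shape.
final class SimpleDescriptor {
    final String name;
    final String displayName;
    final String description;
    final boolean required;
    final boolean sensitive;

    private SimpleDescriptor(Builder b) {
        this.name = b.name;
        this.displayName = b.displayName;
        this.description = b.description;
        this.required = b.required;
        this.sensitive = b.sensitive;
    }

    static final class Builder {
        private String name;
        private String displayName;
        private String description;
        private boolean required;
        private boolean sensitive;

        Builder name(String name) { this.name = name; return this; }
        Builder displayName(String displayName) { this.displayName = displayName; return this; }
        Builder description(String description) { this.description = description; return this; }
        Builder required(boolean required) { this.required = required; return this; }
        Builder sensitive(boolean sensitive) { this.sensitive = sensitive; return this; }

        // Validation happens once, at build time, so partially-configured
        // builders can be passed around freely beforehand.
        SimpleDescriptor build() {
            if (name == null || name.trim().isEmpty()) {
                throw new IllegalStateException("name is required");
            }
            return new SimpleDescriptor(this);
        }
    }
}

public class DescriptorDemo {
    public static void main(String[] args) {
        SimpleDescriptor query = new SimpleDescriptor.Builder()
                .name("neo4j-query")
                .displayName("Neo4j Query")
                .description("Specifies the Neo4j query.")
                .required(true)
                .build();
        System.out.println(query.name + " required=" + query.required);
        // prints: neo4j-query required=true
    }
}
```

One consequence of this design, visible in the review comments: each chained call is independent, so a single setting (such as the expression-language scope) can be dropped without touching the rest of the chain.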


[GitHub] nifi pull request #2956: NIFI-5537 Create Neo4J cypher execution processor

2018-09-27 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r221101649
  
--- Diff: nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+    protected static final PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+            .name("neo4J-query")
+            .displayName("Neo4J Query")
+            .description("Specifies the Neo4j Query.")
+            .required(true)
+            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor CONNECTION_URL = new PropertyDescriptor.Builder()
+            .name("neo4j-connection-url")
+            .displayName("Neo4j Connection URL")
+            .description("Neo4J endpoint to connect to.")
+            .required(true)
+            .defaultValue("bolt://localhost:7687")
+            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor USERNAME = new PropertyDescriptor.Builder()
+            .name("neo4j-username")
+            .displayName("Username")
+            .description("Username for accessing Neo4J")
+            .required(true)
+            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
+            .name("neo4j-password")
+            .displayName("Password")
+            .description("Password for Neo4J user")
+            .required(true)
+            .sensitive(true)
+            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+
+    public static AllowableValue LOAD_BALANCING_STRATEGY_ROUND_ROBIN = new AllowableValue(LoadBalancingStrategy.ROUND_ROBIN.name(), "Round Robin", "Round Robin Strategy");
+
+    public static AllowableValue LOAD_BALANCING_STRATEGY_LEAST_CONNECTED = new AllowableValue(LoadBalancingStrategy.LEAST_CONNECTED.name(), "Least Connected", "Least Connected Strategy");
+
+    protected static final PropertyDescriptor LOAD_BALANCING_STRATEGY = new PropertyDescriptor.Builder()

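The diff maps each `LoadBalancingStrategy` enum constant to an `AllowableValue` triple: the stored value (the enum's `name()`), a UI display name, and a description. Keying on `Enum.name()` means the persisted string can be converted back to the typed constant later. A self-contained sketch of that idea, using a hypothetical `ChoiceValue` class in place of NiFi's `AllowableValue`:

```java
// Illustrative stand-in for NiFi's AllowableValue: a stored value plus UI metadata.
final class ChoiceValue {
    final String value;        // what gets persisted in the flow configuration
    final String displayName;  // what the UI shows
    final String description;

    ChoiceValue(String value, String displayName, String description) {
        this.value = value;
        this.displayName = displayName;
        this.description = description;
    }
}

enum LoadBalancing { ROUND_ROBIN, LEAST_CONNECTED }

public class ChoiceDemo {
    // One ChoiceValue per enum constant, keyed by Enum.name() so the stored
    // string round-trips back to the enum via Enum.valueOf().
    static ChoiceValue[] choices() {
        return new ChoiceValue[] {
            new ChoiceValue(LoadBalancing.ROUND_ROBIN.name(), "Round Robin", "Round Robin Strategy"),
            new ChoiceValue(LoadBalancing.LEAST_CONNECTED.name(), "Least Connected", "Least Connected Strategy"),
        };
    }

    public static void main(String[] args) {
        for (ChoiceValue c : choices()) {
            // valueOf recovers the typed constant from the stored string.
            LoadBalancing strategy = LoadBalancing.valueOf(c.value);
            System.out.println(c.displayName + " -> " + strategy);
        }
        // prints: Round Robin -> ROUND_ROBIN
        //         Least Connected -> LEAST_CONNECTED
    }
}
```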
[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631147#comment-16631147
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r221100711
  
--- Diff: nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+    protected static final PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+            .name("neo4J-query")
+            .displayName("Neo4J Query")
+            .description("Specifies the Neo4j Query.")
+            .required(true)
+            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor CONNECTION_URL = new PropertyDescriptor.Builder()
+            .name("neo4j-connection-url")
+            .displayName("Neo4j Connection URL")
+            .description("Neo4J endpoint to connect to.")
+            .required(true)
+            .defaultValue("bolt://localhost:7687")
+            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor USERNAME = new PropertyDescriptor.Builder()
+            .name("neo4j-username")
+            .displayName("Username")
+            .description("Username for accessing Neo4J")
+            .required(true)
+            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
+            .name("neo4j-password")
+            .displayName("Password")
+            .description("Password for Neo4J user")
+            .required(true)
+            .sensitive(true)
+            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
--- End diff --

@alopresto - I can remove this.


> Create Neo4J cypher execution processor
> ---
>
> Key: NIFI-5537
> URL: https://issues.apache.org/jira/browse/NIFI-5537
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Ext

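The line this review comment anchors on sets an expression-language scope on the sensitive `PASSWORD` descriptor, and the author agrees to remove it. Sensitive properties are treated specially because their values must be masked anywhere they could be echoed back (logs, the UI, exported templates). A minimal, self-contained sketch of that masking idea; `SensitiveDemo` and `render` are hypothetical names, not NiFi code:

```java
// Illustrative sketch of why properties flagged sensitive(true), like the
// PASSWORD descriptor above, get special handling: their values are masked
// wherever the configuration is rendered for display.
public class SensitiveDemo {
    static String render(String name, String value, boolean sensitive) {
        return name + "=" + (sensitive ? "********" : value);
    }

    public static void main(String[] args) {
        System.out.println(render("neo4j-connection-url", "bolt://localhost:7687", false));
        System.out.println(render("neo4j-password", "s3cret", true));
        // prints: neo4j-connection-url=bolt://localhost:7687
        //         neo4j-password=********
    }
}
```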

[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631146#comment-16631146
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r221100628
  
--- Diff: nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+    protected static final PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+            .name("neo4J-query")
+            .displayName("Neo4J Query")
+            .description("Specifies the Neo4j Query.")
+            .required(true)
+            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor CONNECTION_URL = new PropertyDescriptor.Builder()
+            .name("neo4j-connection-url")
+            .displayName("Neo4j Connection URL")
+            .description("Neo4J endpoint to connect to.")
+            .required(true)
+            .defaultValue("bolt://localhost:7687")
+            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor USERNAME = new PropertyDescriptor.Builder()
+            .name("neo4j-username")
+            .displayName("Username")
+            .description("Username for accessing Neo4J")
+            .required(true)
+            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
--- End diff --

@MikeThomsen - Should I add a link to the documentation ?


> Create Neo4J cypher execution processor
> ---
>
> Key: NIFI-5537
> URL: https://issues.apache.org/jira/browse/NIFI-5537
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: graph, neo4j, node, relationship
> Fix For: 1.8.0
>
>
> Create Nifi Neo4J c

[GitHub] nifi pull request #2956: NIFI-5537 Create Neo4J cypher execution processor

2018-09-27 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2956#discussion_r221100628
  
--- Diff: 
nifi-nar-bundles/nifi-neo4j-bundle/nifi-neo4j-processors/src/main/java/org/apache/nifi/processors/neo4j/AbstractNeo4JCypherExecutor.java
 ---
@@ -0,0 +1,281 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.neo4j;
+
+import java.io.File;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.neo4j.driver.v1.AuthTokens;
+import org.neo4j.driver.v1.Config;
+import org.neo4j.driver.v1.Config.ConfigBuilder;
+import org.neo4j.driver.v1.Config.LoadBalancingStrategy;
+import org.neo4j.driver.v1.Config.TrustStrategy;
+import org.neo4j.driver.v1.Driver;
+import org.neo4j.driver.v1.GraphDatabase;
+
+/**
+ * Abstract base class for Neo4JCypherExecutor processors
+ */
+abstract class AbstractNeo4JCypherExecutor extends AbstractProcessor {
+
+    protected static final PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+        .name("neo4J-query")
+        .displayName("Neo4J Query")
+        .description("Specifies the Neo4j Query.")
+        .required(true)
+        .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+        .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+        .build();
+
+    public static final PropertyDescriptor CONNECTION_URL = new PropertyDescriptor.Builder()
+        .name("neo4j-connection-url")
+        .displayName("Neo4j Connection URL")
+        .description("Neo4J endpoint to connect to.")
+        .required(true)
+        .defaultValue("bolt://localhost:7687")
+        .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+        .build();
+
+    public static final PropertyDescriptor USERNAME = new PropertyDescriptor.Builder()
+        .name("neo4j-username")
+        .displayName("Username")
+        .description("Username for accessing Neo4J")
+        .required(true)
+        .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+        .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+        .build();
+
+    public static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
--- End diff --

@MikeThomsen - Should I add a link to the documentation?


---


[jira] [Commented] (NIFI-5644) Fix typo in AbstractDatabaseFetchProcessor.java

2018-09-27 Thread Diego Queiroz (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631098#comment-16631098
 ] 

Diego Queiroz commented on NIFI-5644:
-

I tried to make a PR on GitHub, but I think I did something wrong... 
https://github.com/apache/nifi/commit/949071e92365117acb19ed559b5b7efc54e0c373

> Fix typo in AbstractDatabaseFetchProcessor.java
> ---
>
> Key: NIFI-5644
> URL: https://issues.apache.org/jira/browse/NIFI-5644
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Reporter: Diego Queiroz
>Priority: Trivial
>
> dbAdaper -> dbAdapter



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631093#comment-16631093
 ] 

ASF GitHub Bot commented on NIFI-5612:
--

Github user colindean commented on the issue:

https://github.com/apache/nifi/pull/3032
  
@mattyb149 Thoughts?


> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0, 1.6.0, 1.7.1
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
> I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
> but not on dozens of others in the same database.
> {code:java}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught 
> Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
>   at 
> org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
>   at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.avro.UnresolvedUnionException: Not in union 
> ["null","int"]: 0
>   at 
> org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709)
>   at 
> org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)
>   ... 15 common frames omitted
> {code}
> I don't know if I can share the database schema (still working with my team 
> on that), but looking at it, I think it has something to do with the 
> signedness of int(1) or tinyint(1), because those two are the only numerical 
> types common to all of the tables.
>  
> *Edit 2018-09-24, so that my update doesn't get buried:*
> I am able to reproduce the exception using
>  * Vagrant 2.1.1
>  * Virtualbox 5.2.18 r124319
>  * Ubuntu 18.04
>  * MySQL 5.0.81 (as close as I can get to the 5.0.80 Enterprise Edition in 
> use on the system where I observed this failure first)
>  * MySQL Connector/J 5.1.46
>  * NiFi 1.7.1
> With this table definition and data:
> {code:sql}
> create table fails ( 
>   fails int(1) unsigned NOT NULL default '0' 
> ) ENGINE=InnoDB AUTO_INCREM
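The failure mode described above and addressed in PR #3032 is that the JDBC driver returns a java.lang.Long for an unsigned integer column, while the Avro schema built for the result set declares the field as ["null","int"]; Avro's union resolution then finds no branch matching the runtime type. A minimal, stdlib-only sketch of the mismatch and of a defensive narrowing conversion; all names here are illustrative, not NiFi's or Avro's actual code:

```java
// Illustrative sketch of the NIFI-5612 mismatch. resolveUnionBranch mimics a
// union resolver that matches branches by exact runtime type; narrowForAvro
// shows the kind of defensive conversion that lets a driver-supplied Long
// satisfy a declared "int" branch. Hypothetical names, not NiFi/Avro APIs.
public class UnsignedIntNarrowing {

    // Match a value against union branches by runtime type.
    static String resolveUnionBranch(Object value, Class<?>... branches) {
        for (Class<?> branch : branches) {
            if (branch.isInstance(value)) {
                return branch.getSimpleName();
            }
        }
        throw new IllegalStateException("Not in union: " + value);
    }

    // If the driver returned a Long (e.g. for MySQL UNSIGNED INT) but the
    // declared branch is int, narrow it when the value fits in an int.
    static Object narrowForAvro(Object value) {
        if (value instanceof Long) {
            long l = (Long) value;
            if (l >= Integer.MIN_VALUE && l <= Integer.MAX_VALUE) {
                return (int) l;
            }
        }
        return value;
    }

    public static void main(String[] args) {
        Object fromDriver = 0L; // UNSIGNED INT(1) value 0 arriving as Long

        try {
            resolveUnionBranch(fromDriver, Integer.class); // reproduces the failure
        } catch (IllegalStateException e) {
            System.out.println("raw Long rejected: " + e.getMessage());
        }
        // After narrowing, the value matches the "int" branch.
        System.out.println(resolveUnionBranch(narrowForAvro(fromDriver), Integer.class));
    }
}
```

Values too large for int (legitimately possible for an unsigned 32-bit column) are passed through unchanged, which is why widening the Avro schema to long, as the PR title suggests, is the more general fix.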

[GitHub] nifi issue #3032: NIFI-5612: Support JDBC drivers that return Long for unsig...

2018-09-27 Thread colindean
Github user colindean commented on the issue:

https://github.com/apache/nifi/pull/3032
  
@mattyb149 Thoughts?


---


[jira] [Created] (NIFI-5644) Fix typo in AbstractDatabaseFetchProcessor.java

2018-09-27 Thread Diego Queiroz (JIRA)
Diego Queiroz created NIFI-5644:
---

 Summary: Fix typo in AbstractDatabaseFetchProcessor.java
 Key: NIFI-5644
 URL: https://issues.apache.org/jira/browse/NIFI-5644
 Project: Apache NiFi
  Issue Type: Task
  Components: Extensions
Reporter: Diego Queiroz


dbAdaper -> dbAdapter



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5643) QueryDatabaseTable custom query generates invalid Oracle queries

2018-09-27 Thread Diego Queiroz (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diego Queiroz updated NIFI-5643:

Description: 
When using "Custom Query" to generate a query for Oracle, it wraps the query 
with:
{noformat}
SELECT * FROM ( your_query ) AS TABLE_NAME WHERE 1 = 0{noformat}
 The problem is that the " AS " syntax is invalid in Oracle (see 
[https://www.techonthenet.com/oracle/alias.php]).

The correct syntax for Oracle is:
{noformat}
SELECT * FROM ( your_query ) TABLE_NAME WHERE 1 = 0{noformat}
Apparently, this is already solved by 
*[NIFI-5471|https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=blobdiff;f=nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java;h=3da8a73f184d4622bf969a3986804b531eac7f31;hp=e13f9de8369a6885e8e59b7ca875dd516ba3dc2a;hb=9742dd2fac42d4afb8b1e009535ed50c02373493;hpb=e97ae921f759e51ee1709ef0884fc029bd40d26b]*
 in master, but if there are plans for a new release of 1.7.x, a fix for this 
would be great.
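The dialect-aware wrapping described above can be sketched in plain Java; the method name and boolean flag are hypothetical, not NiFi's actual DatabaseAdapter API:

```java
// Hedged sketch of dialect-aware wrapping for the "determine column types"
// probe query: Oracle rejects the AS keyword before a derived-table alias,
// while most other dialects accept it. Illustrative names only.
public class SubqueryWrapper {

    static String wrapForTypeProbe(String customQuery, boolean oracle) {
        // Oracle: no AS between the subquery and its alias.
        String alias = oracle ? "TABLE_NAME" : "AS TABLE_NAME";
        return "SELECT * FROM (" + customQuery + ") " + alias + " WHERE 1 = 0";
    }

    public static void main(String[] args) {
        String q = "SELECT id, name FROM users";
        System.out.println(wrapForTypeProbe(q, false)); // generic dialects
        System.out.println(wrapForTypeProbe(q, true));  // Oracle-safe form
    }
}
```

The WHERE 1 = 0 clause keeps the probe cheap: it returns metadata (column names and types) without fetching any rows.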

 

I'll drop the log here to help people to find this thread:
{noformat}
2018-09-27 17:41:45,293 ERROR [Timer-Driven Process Thread-82] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=5dfe3f5d-69b8-1df7-1379-140a9e518ac3] Failed to process 
session due to org.apache.nifi.processor.exception.ProcessException: Unable to 
communicate with database in order to determine column types: 
org.apache.nifi.processor.exception.ProcessException: Unable to communicate 
with database in order to determine column types
org.apache.nifi.processor.exception.ProcessException: Unable to communicate 
with database in order to determine column types
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:309)
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:232)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:232)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not 
properly ended

at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:195)
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:876)
at 
oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
at 
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1498)
at 
oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:406)
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:269)
... 12 common frames omitted{noformat}

  was:
When using "Custom Query" to generate a query for Oracle, it wraps the query 
with:
{noformat}
SELECT * FROM ( your_query ) AS TABLE_NAME WHERE 1 = 0{noformat}
 The problem is that the " AS " syntax is invalid in Oracle (see 
[https://www.techonthenet.com/oracle/alias.php]).

The correct syntax for Oracle is:
{noformat}
SELECT * FROM ( your_query ) TABLE_NAME WHERE 1 = 0{noformat}
Apparently, this is already solved in 
*[NIFI-5471|https://github.com/apache/nifi/commit/9742dd2fac42d4afb8b1e009535ed50c02373493#diff-23f5f3a6652c943e1824efb3b9528d0d]*
 (so master seems to be OK), but if there are plans for a new release of 1.7.x, 
a fix for this would be great.

 

I'll drop the log here to help people t

[jira] [Updated] (NIFI-5643) QueryDatabaseTable custom query generates invalid Oracle queries

2018-09-27 Thread Diego Queiroz (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diego Queiroz updated NIFI-5643:

Description: 
When using "Custom Query" to generate a query for Oracle, it wraps the query 
with:
{noformat}
SELECT * FROM ( your_query ) AS TABLE_NAME WHERE 1 = 0{noformat}
 The problem is that the " AS " syntax is invalid in Oracle (see 
[https://www.techonthenet.com/oracle/alias.php]).

The correct syntax for Oracle is:
{noformat}
SELECT * FROM ( your_query ) TABLE_NAME WHERE 1 = 0{noformat}
Apparently, this is already solved in 
*[NIFI-5471|https://github.com/apache/nifi/commit/9742dd2fac42d4afb8b1e009535ed50c02373493#diff-23f5f3a6652c943e1824efb3b9528d0d]*
 (so master seems to be OK), but if there are plans for a new release of 1.7.x, 
a fix for this would be great.

 

I'll drop the log here to help people to find this thread:
{noformat}
2018-09-27 17:41:45,293 ERROR [Timer-Driven Process Thread-82] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=5dfe3f5d-69b8-1df7-1379-140a9e518ac3] Failed to process 
session due to org.apache.nifi.processor.exception.ProcessException: Unable to 
communicate with database in order to determine column types: 
org.apache.nifi.processor.exception.ProcessException: Unable to communicate 
with database in order to determine column types
org.apache.nifi.processor.exception.ProcessException: Unable to communicate 
with database in order to determine column types
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:309)
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:232)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:232)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not 
properly ended

at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:195)
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:876)
at 
oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
at 
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1498)
at 
oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:406)
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:269)
... 12 common frames omitted{noformat}

  was:
When using "Custom Query" to generate a query for Oracle, it wraps the query 
with:
{noformat}
SELECT * FROM ( your_query ) AS TABLE_NAME WHERE 1 = 0{noformat}
 The problem is that the " AS " syntax is invalid in Oracle (see 
[https://www.techonthenet.com/oracle/alias.php]).

The correct syntax for Oracle is:
{noformat}
SELECT * FROM ( your_query ) TABLE_NAME WHERE 1 = 0{noformat}
Apparently, this is already solved in 
*[NIFI-1706|https://github.com/apache/nifi/commit/82ac815536d53c00f848f2eae79474035a9eb126#diff-23f5f3a6652c943e1824efb3b9528d0d]*
 (so master seems to be OK), but if there are plans for a new release of 1.7.x, 
a fix for this would be great.

 

I'll drop the log here to help people to find this thread:
{noformat}
2018-09-27 17:41:45,293 ERROR [Timer-Driven Process Thread-82] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=5dfe3f5d-69b8-1df7-1379-140a9e518ac3] Failed to process 
session due to org.apache.nifi.processor.exce

[jira] [Updated] (NIFI-5643) QueryDatabaseTable custom query generates invalid Oracle queries

2018-09-27 Thread Diego Queiroz (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diego Queiroz updated NIFI-5643:

Description: 
When using "Custom Query" to generate a query for Oracle, it wraps the query 
with:
{noformat}
SELECT * FROM ( your_query ) AS TABLE_NAME WHERE 1 = 0{noformat}
 The problem is that the " AS " syntax is invalid in Oracle (see 
[https://www.techonthenet.com/oracle/alias.php]).

The correct syntax for Oracle is:
{noformat}
SELECT * FROM ( your_query ) TABLE_NAME WHERE 1 = 0{noformat}
Apparently, this is already solved in 
*[NIFI-5471|https://github.com/apache/nifi/commit/9742dd2fac42d4afb8b1e009535ed50c02373493#diff-23f5f3a6652c943e1824efb3b9528d0d]*
 (so master seems to be OK), but if there are plans for a new release of 1.7.x, 
a fix for this would be great.

 

I'll drop the log here to help people to find this thread:
{noformat}
2018-09-27 17:41:45,293 ERROR [Timer-Driven Process Thread-82] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=5dfe3f5d-69b8-1df7-1379-140a9e518ac3] Failed to process 
session due to org.apache.nifi.processor.exception.ProcessException: Unable to 
communicate with database in order to determine column types: 
org.apache.nifi.processor.exception.ProcessException: Unable to communicate 
with database in order to determine column types
org.apache.nifi.processor.exception.ProcessException: Unable to communicate 
with database in order to determine column types
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:309)
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:232)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:232)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not 
properly ended

at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:195)
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:876)
at 
oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
at 
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1498)
at 
oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:406)
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:269)
... 12 common frames omitted{noformat}

  was:
When using "Custom Query" to generate a query for Oracle, it wraps the query 
with:
{noformat}
SELECT * FROM ( your_query ) AS TABLE_NAME WHERE 1 = 0{noformat}
 The problem is that the " AS " syntax is invalid in Oracle (see 
[https://www.techonthenet.com/oracle/alias.php]).

The correct syntax for Oracle is:
{noformat}
SELECT * FROM ( your_query ) TABLE_NAME WHERE 1 = 0{noformat}
Apparently, this is already solved in 
*[NIFI-5471|https://github.com/apache/nifi/commit/9742dd2fac42d4afb8b1e009535ed50c02373493#diff-23f5f3a6652c943e1824efb3b9528d0d]*
 (so master seems to be OK), but if there are plans for a new release of 1.7.x, 
a fix for this would be great.

 

I'll drop the log here to help people to find this thread:
{noformat}
2018-09-27 17:41:45,293 ERROR [Timer-Driven Process Thread-82] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=5dfe3f5d-69b8-1df7-1379-140a9e518ac3] Failed to process 
session due to org.apache.nifi.processor.exce

[jira] [Updated] (NIFI-5643) QueryDatabaseTable custom query generates invalid Oracle queries

2018-09-27 Thread Diego Queiroz (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diego Queiroz updated NIFI-5643:

Description: 
When using "Custom Query" to generate a query for Oracle, it wraps the query 
with:
{noformat}
SELECT * FROM ( your_query ) AS TABLE_NAME WHERE 1 = 0{noformat}
 The problem is that the " AS " syntax is invalid in Oracle (see 
[https://www.techonthenet.com/oracle/alias.php]).

The correct syntax for Oracle is:
{noformat}
SELECT * FROM ( your_query ) TABLE_NAME WHERE 1 = 0{noformat}
Apparently, this is already solved in 
*[NIFI-1706|https://github.com/apache/nifi/commit/82ac815536d53c00f848f2eae79474035a9eb126#diff-23f5f3a6652c943e1824efb3b9528d0d]*
 (so master seems to be OK), but if there are plans for a new release of 1.7.x, 
a fix for this would be great.

I'll drop the log here to help people to find this thread:
{noformat}
2018-09-27 17:41:45,293 ERROR [Timer-Driven Process Thread-82] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=5dfe3f5d-69b8-1df7-1379-140a9e518ac3] Failed to process 
session due to org.apache.nifi.processor.exception.ProcessException: Unable to 
communicate with database in order to determine column types: 
org.apache.nifi.processor.exception.ProcessException: Unable to communicate 
with database in order to determine column types
org.apache.nifi.processor.exception.ProcessException: Unable to communicate 
with database in order to determine column types
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:309)
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:232)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:232)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not 
properly ended

at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:195)
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:876)
at 
oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
at 
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1498)
at 
oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:406)
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:269)
... 12 common frames omitted{noformat}

  was:
When using "Custom Query" to generate a query for Oracle, it wraps the query 
with:

 
{noformat}
SELECT * FROM ( your_query ) AS TABLE_NAME WHERE 1 = 0{noformat}
 

The problem is that the " AS " syntax is invalid in Oracle (see 
[https://www.techonthenet.com/oracle/alias.php]).

The correct syntax for Oracle is:
{noformat}
SELECT * FROM ( your_query ) TABLE_NAME WHERE 1 = 0{noformat}
Apparently, this is already solved in 
*[NIFI-1706|https://github.com/apache/nifi/commit/82ac815536d53c00f848f2eae79474035a9eb126#diff-23f5f3a6652c943e1824efb3b9528d0d]*
 (so master seems to be OK), but if there are plans for a new release of 1.7.x, 
a fix for this would be great.

 

I'll drop the log here to help people to find this thread:
{noformat}
2018-09-27 17:41:45,293 ERROR [Timer-Driven Process Thread-82] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=5dfe3f5d-69b8-1df7-1379-140a9e518ac3] Failed to process 
session due to org.apache.nifi.processor.e

[jira] [Updated] (NIFI-5643) QueryDatabaseTable custom query generates invalid Oracle queries

2018-09-27 Thread Diego Queiroz (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diego Queiroz updated NIFI-5643:

Description: 
When using "Custom Query" to generate a query for Oracle, it wraps the query 
with:
{noformat}
SELECT * FROM ( your_query ) AS TABLE_NAME WHERE 1 = 0{noformat}
 The problem is that the " AS " syntax is invalid in Oracle (see 
[https://www.techonthenet.com/oracle/alias.php]).

The correct syntax for Oracle is:
{noformat}
SELECT * FROM ( your_query ) TABLE_NAME WHERE 1 = 0{noformat}
Apparently, this is already solved in 
*[NIFI-1706|https://github.com/apache/nifi/commit/82ac815536d53c00f848f2eae79474035a9eb126#diff-23f5f3a6652c943e1824efb3b9528d0d]*
 (so master seems to be OK), but if there are plans for a new release of 1.7.x, 
a fix for this would be great.

 

I'll drop the log here to help people to find this thread:
{noformat}
2018-09-27 17:41:45,293 ERROR [Timer-Driven Process Thread-82] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=5dfe3f5d-69b8-1df7-1379-140a9e518ac3] Failed to process 
session due to org.apache.nifi.processor.exception.ProcessException: Unable to 
communicate with database in order to determine column types: 
org.apache.nifi.processor.exception.ProcessException: Unable to communicate 
with database in order to determine column types
org.apache.nifi.processor.exception.ProcessException: Unable to communicate 
with database in order to determine column types
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:309)
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:232)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:232)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not 
properly ended

at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:195)
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:876)
at 
oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
at 
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1498)
at 
oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:406)
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:269)
... 12 common frames omitted{noformat}

  was:
When using "Custom Query" to generate a query for Oracle, it wraps the query 
with:
{noformat}
SELECT * FROM ( your_query ) AS TABLE_NAME WHERE 1 = 0{noformat}
The problem is that the " AS " syntax is invalid in Oracle (see 
[https://www.techonthenet.com/oracle/alias.php]).

The correct syntax for Oracle is:
{noformat}
SELECT * FROM ( your_query ) TABLE_NAME WHERE 1 = 0{noformat}
Apparently, this is already solved in 
*[NIFI-1706|https://github.com/apache/nifi/commit/82ac815536d53c00f848f2eae79474035a9eb126#diff-23f5f3a6652c943e1824efb3b9528d0d]*
 (so master seems to be OK), but if there are plans for a new release of 1.7.x, 
a fix for this would be great.

I'll drop the log here to help people find this thread:
{noformat}
2018-09-27 17:41:45,293 ERROR [Timer-Driven Process Thread-82] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=5dfe3f5d-69b8-1df7-1379-140a9e518ac3] Failed to process 
session due to org.apache.nifi.processor.except

[jira] [Created] (NIFI-5643) QueryDatabaseTable custom query generates invalid Oracle queries

2018-09-27 Thread Diego Queiroz (JIRA)
Diego Queiroz created NIFI-5643:
---

 Summary: QueryDatabaseTable custom query generates invalid Oracle 
queries
 Key: NIFI-5643
 URL: https://issues.apache.org/jira/browse/NIFI-5643
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.7.1
Reporter: Diego Queiroz


When using "Custom Query" to generate a query for Oracle, it wraps the query 
with:

 
{noformat}
SELECT * FROM ( your_query ) AS TABLE_NAME WHERE 1 = 0{noformat}
 

The problem is that the " AS " syntax is invalid in Oracle (see 
[https://www.techonthenet.com/oracle/alias.php]).

The correct syntax for Oracle is:
{noformat}
SELECT * FROM ( your_query ) TABLE_NAME WHERE 1 = 0{noformat}
Apparently, this is already solved in 
*[NIFI-1706|https://github.com/apache/nifi/commit/82ac815536d53c00f848f2eae79474035a9eb126#diff-23f5f3a6652c943e1824efb3b9528d0d]*
 (so master seems to be OK), but if there are plans for a new release of 1.7.x, 
a fix for this would be great.
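
A dialect-aware wrapper illustrates the shape of the fix. This is a minimal sketch, not the actual NIFI-1706 patch; the class and method names (`ValidationQueryBuilder`, `wrapForColumnTypes`) and the `SAMPLE_TBL` alias are hypothetical:
{code:java}
public class ValidationQueryBuilder {
    /**
     * Hypothetical sketch of dialect-aware query wrapping (not the actual
     * NIFI-1706 patch). Oracle rejects "AS" before a table alias with
     * ORA-00933, so the keyword is omitted for Oracle connections.
     */
    static String wrapForColumnTypes(String customQuery, String dbType) {
        String alias = "oracle".equalsIgnoreCase(dbType)
                ? " SAMPLE_TBL"        // Oracle: bare alias only
                : " AS SAMPLE_TBL";    // most other dialects accept AS
        // WHERE 1 = 0 returns no rows but still exposes column metadata
        return "SELECT * FROM (" + customQuery + ")" + alias + " WHERE 1 = 0";
    }

    public static void main(String[] args) {
        System.out.println(wrapForColumnTypes("SELECT id FROM emp", "oracle"));
    }
}
{code}
Running {{main}} prints {{SELECT * FROM (SELECT id FROM emp) SAMPLE_TBL WHERE 1 = 0}}, which Oracle accepts.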

 

I'll drop the log here to help people find this thread:
{noformat}
2018-09-27 17:41:45,293 ERROR [Timer-Driven Process Thread-82] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=5dfe3f5d-69b8-1df7-1379-140a9e518ac3] Failed to process 
session due to org.apache.nifi.processor.exception.ProcessException: Unable to 
communicate with database in order to determine column types: 
org.apache.nifi.processor.exception.ProcessException: Unable to communicate 
with database in order to determine column types
org.apache.nifi.processor.exception.ProcessException: Unable to communicate 
with database in order to determine column types
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:309)
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:232)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:232)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not 
properly ended

at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:195)
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:876)
at 
oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
at 
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1498)
at 
oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:406)
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at 
org.apache.nifi.processors.standard.AbstractDatabaseFetchProcessor.setup(AbstractDatabaseFetchProcessor.java:269)
... 12 common frames omitted{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5585) Prepare Nodes to be Decommissioned from Cluster

2018-09-27 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5585:
--
Description: 
Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the 
node to be decommissioned to the other active nodes.  This work depends on 
NIFI-5516.

Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state.
OFFLOADING nodes will transition to the OFFLOADED state once all flowfiles have 
been rebalanced to other connected nodes in the cluster.
OFFLOADING nodes that remain in the OFFLOADING state (due to errors encountered 
while offloading) can be reconnected to the cluster by restarting NiFi on the 
node.
OFFLOADED nodes can be reconnected to the cluster by issuing a connection 
request via the UI/CLI, or restarting NiFi on the node.
OFFLOADED nodes can be deleted from the cluster.

OFFLOADING a node:
* stops all processors
* terminates all processors
* stops transmitting on all remote process groups
* rebalances flowfiles to other connected nodes in the cluster (via the work 
done in NIFI-5516)

The steps to decommission a node and remove it from the cluster are:
 # Send request to disconnect the node
 # Once disconnect completes, send request to offload the node.
 # Once offload completes, send request to delete node.
 # Once the delete request has finished, the NiFi service on the host can be 
stopped/removed.

When an error occurs and the node cannot complete offloading, the user can:
 # Send request to delete the node from the cluster
 # Diagnose why the node had issues with the offload (out of memory, no network 
connection, etc.) and address the issue
 # Restart NiFi on the node so that it will reconnect to the cluster
 # Go through the steps to decommission a node

The OFFLOADING request is idempotent.

Toolkit CLI commands for retrieving a single node, list of nodes, and 
connecting/disconnecting/offloading/deleting nodes have been added.

The cluster table UI has an icon to initiate the OFFLOADING of a DISCONNECTED 
node.

Similar to the client sending a PUT request with a DISCONNECTING message to 
cluster/nodes/\{id}, an OFFLOADING message can be sent as a PUT request to the 
same URI to initiate an OFFLOAD for a DISCONNECTED node.
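
As a sketch of that request, the body below follows the nodeId/status pattern implied by the description; the exact JSON field names are assumptions here, not verified against the final API:
{code:java}
public class OffloadRequest {
    /**
     * Hedged sketch of the body for a PUT to cluster/nodes/{id} that asks a
     * DISCONNECTED node to offload. The "node"/"nodeId"/"status" field names
     * are assumptions modeled on the description above, not the verified API.
     */
    static String offloadBody(String nodeId) {
        return "{\"node\":{\"nodeId\":\"" + nodeId + "\",\"status\":\"OFFLOADING\"}}";
    }

    public static void main(String[] args) {
        // Because the OFFLOADING request is idempotent, re-sending the
        // same body for the same node is safe.
        System.out.println(offloadBody("5dfe3f5d-69b8-1df7-1379-140a9e518ac3"));
    }
}
{code}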



  was:
Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the 
node to be decommissioned to the other active nodes.  This work depends on 
NIFI-5516.

Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state.
OFFLOADING nodes will transition to the OFFLOADED state once all flowfiles have 
been rebalanced to other connected nodes in the cluster.
OFFLOADING nodes that remain in the OFFLOADING state (due to errors encountered 
while offloading) can be reconnected to the cluster by restarting NiFi on the 
node.
OFFLOADED nodes can be reconnected to the cluster by issuing a connection 
request via the UI/CLI, or restarting NiFi on the node.
OFFLOADED nodes can be deleted from the cluster.

OFFLOADING a node:
* stops all processors
* terminates all processors
* stops transmitting on all remote process groups
* rebalances flowfiles to other connected nodes in the cluster (via the work 
done in NIFI-5516)

The steps to decommission a node and remove it from the cluster are:
 # Send request to disconnect the node
 # Once disconnect completes, send request to offload the node.
 # Once offload completes, send request to delete node.
 # Once the delete request has finished, the NiFi service on the host can be 
stopped/removed.

When an error occurs and the node cannot complete offloading, the user can:
 # Send request to delete the node from the cluster
 # Diagnose why the node had issues with the offload (out of memory, no network 
connection, etc.) and address the issue
 # Restart NiFi on the node so that it will reconnect to the cluster
 # Go through the steps to decommission a node

Toolkit CLI commands for retrieving a single node, list of nodes, and 
connecting/disconnecting/offloading/deleting nodes have been added.


> Prepare Nodes to be Decommissioned from Cluster
> ---
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing flowfiles on 
> the node to be decommissioned to the other active nodes.  This work depends 
> on NIFI-5516.
> Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state.
> OFFLOADING nodes will transition to the OFFLOADED state once all flowfiles 
> have been rebalanced to other connected nodes in the cluster.
> OFFLOADING nodes that remain in the OFF

[jira] [Updated] (NIFI-5585) Prepare Nodes to be Decommissioned from Cluster

2018-09-27 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5585:
--
Description: 
Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the 
node to be decommissioned to the other active nodes.  This work depends on 
NIFI-5516.

Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state.
OFFLOADING nodes will transition to the OFFLOADED state once all flowfiles have 
been rebalanced to other connected nodes in the cluster.
OFFLOADING nodes that remain in the OFFLOADING state (due to errors encountered 
while offloading) can be reconnected to the cluster by restarting NiFi on the 
node.
OFFLOADED nodes can be reconnected to the cluster by issuing a connection 
request via the UI/CLI, or restarting NiFi on the node.
OFFLOADED nodes can be deleted from the cluster.

OFFLOADING a node:
* stops all processors
* terminates all processors
* stops transmitting on all remote process groups
* rebalances flowfiles to other connected nodes in the cluster (via the work 
done in NIFI-5516)

The steps to decommission a node and remove it from the cluster are:
 # Send request to disconnect the node
 # Once disconnect completes, send request to offload the node.
 # Once offload completes, send request to delete node.
 # Once the delete request has finished, the NiFi service on the host can be 
stopped/removed.

When an error occurs and the node cannot complete offloading, the user can:
 # Send request to delete the node from the cluster
 # Diagnose why the node had issues with the offload (out of memory, no network 
connection, etc.) and address the issue
 # Restart NiFi on the node so that it will reconnect to the cluster
 # Go through the steps to decommission a node

Toolkit CLI commands for retrieving a single node, list of nodes, and 
connecting/disconnecting/offloading/deleting nodes have been added.

  was:
Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the 
node to be decommissioned to the other active nodes.  This work depends on 
NIFI-5516.

Similar to the client sending a PUT request with a DISCONNECTING message to 
cluster/nodes/\{id}, an OFFLOADING message can be sent as a PUT request to the 
same URI to initiate an OFFLOAD for a DISCONNECTED node. The OFFLOADING request 
will be idempotent.

Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state.
OFFLOADING nodes will transition to the OFFLOADED state once all flowfiles have 
been rebalanced to other connected nodes in the cluster.
OFFLOADING nodes that remain in the OFFLOADING state (due to errors encountered 
while offloading) can be reconnected to the cluster by restarting NiFi on the 
node.
OFFLOADED nodes can be reconnected to the cluster by issuing a connection 
request via the UI/CLI, or restarting NiFi on the node.
OFFLOADED nodes can be deleted from the cluster.

OFFLOADING a node:
* stops all processors
* terminates all processors
* stops transmitting on all remote process groups
* rebalances flowfiles to other connected nodes in the cluster (via the work 
done in NIFI-5516)

The steps to decommission a node and remove it from the cluster are:
 # Send request to disconnect the node
 # Once disconnect completes, send request to offload the node.
 # Once offload completes, send request to delete node.
 # Once the delete request has finished, the NiFi service on the host can be 
stopped/removed.

When an error occurs and the node cannot complete offloading, the user can:
 # Send request to delete the node from the cluster
 # Diagnose why the node had issues with the offload (out of memory, no network 
connection, etc.) and address the issue
 # Restart NiFi on the node so that it will reconnect to the cluster
 # Go through the steps to decommission a node

Toolkit CLI commands for retrieving a single node, list of nodes, and 
connecting/disconnecting/offloading/deleting nodes have been added.


> Prepare Nodes to be Decommissioned from Cluster
> ---
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing flowfiles on 
> the node to be decommissioned to the other active nodes.  This work depends 
> on NIFI-5516.
> Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state.
> OFFLOADING nodes will transition to the OFFLOADED state once all flowfiles 
> have been rebalanced to other connected nodes in the cluster.
> OFFLOADING nodes that remain in the OFFLOADING state (due to errors 
> encountered while offloading) can be reconnected to the

[jira] [Updated] (NIFI-5585) Prepare Nodes to be Decommissioned from Cluster

2018-09-27 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5585:
--
Description: 
Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the 
node to be decommissioned to the other active nodes.  This work depends on 
NIFI-5516.

Similar to the client sending a PUT request with a DISCONNECTING message to 
cluster/nodes/\{id}, an OFFLOADING message can be sent as a PUT request to the 
same URI to initiate an OFFLOAD for a DISCONNECTED node. The OFFLOADING request 
will be idempotent.

Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state.
OFFLOADING nodes will transition to the OFFLOADED state once all flowfiles have 
been rebalanced to other connected nodes in the cluster.
OFFLOADING nodes that remain in the OFFLOADING state (due to errors encountered 
while offloading) can be reconnected to the cluster by restarting NiFi on the 
node.
OFFLOADED nodes can be reconnected to the cluster by issuing a connection 
request via the UI/CLI, or restarting NiFi on the node.
OFFLOADED nodes can be deleted from the cluster.

OFFLOADING a node:
* stops all processors
* terminates all processors
* stops transmitting on all remote process groups
* rebalances flowfiles to other connected nodes in the cluster (via the work 
done in NIFI-5516)

The steps to decommission a node and remove it from the cluster are:
 # Send request to disconnect the node
 # Once disconnect completes, send request to offload the node.
 # Once offload completes, send request to delete node.
 # Once the delete request has finished, the NiFi service on the host can be 
stopped/removed.

When an error occurs and the node cannot complete offloading, the user can:
 # Send request to delete the node from the cluster
 # Diagnose why the node had issues with the offload (out of memory, no network 
connection, etc.) and address the issue
 # Restart NiFi on the node so that it will reconnect to the cluster
 # Go through the steps to decommission a node

Toolkit CLI commands for retrieving a single node, list of nodes, and 
connecting/disconnecting/offloading/deleting nodes have been added.

  was:
Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the 
node to be decommissioned to the other active nodes.  This work depends on 
NIFI-5516.

Similar to the client sending a PUT request with a DISCONNECTING message to 
cluster/nodes/\{id}, an OFFLOADING message can be sent as a PUT request to the 
same URI to initiate an OFFLOAD for a DISCONNECTED node. The OFFLOADING request 
will be idempotent.

Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state.  
After the node completes offloading, it will transition to the OFFLOADED state.
OFFLOADED nodes can be reconnected to the cluster by issuing a connection 
request via the UI/CLI, or restarting NiFi on the node.

The steps to decommission a node and remove it from the cluster are:
 # Send request to disconnect the node
 # Once disconnect completes, send request to offload the node.
 # Once offload completes, send request to delete node.
 # Once the delete request has finished, the NiFi service on the host can be 
stopped/removed.

When an error occurs and the node cannot complete offloading, the user can:
 # Send request to delete the node from the cluster
 # Diagnose why the node had issues with the offload (out of memory, no network 
connection, etc.) and address the issue
 # Restart NiFi on the node so that it will reconnect to the cluster
 # Go through the steps to decommission a node

Toolkit CLI commands for retrieving a list of nodes and 
connecting/disconnecting/offloading/deleting nodes have been added.


> Prepare Nodes to be Decommissioned from Cluster
> ---
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing flowfiles on 
> the node to be decommissioned to the other active nodes.  This work depends 
> on NIFI-5516.
> Similar to the client sending a PUT request with a DISCONNECTING message to 
> cluster/nodes/\{id}, an OFFLOADING message can be sent as a PUT request to 
> the same URI to initiate an OFFLOAD for a DISCONNECTED node. The OFFLOADING 
> request will be idempotent.
> Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state.
> OFFLOADING nodes will transition to the OFFLOADED state once all flowfiles 
> have been rebalanced to other connected nodes in the cluster.
> OFFLOADING nodes that remain in the OFFLOADING state (due to errors 
> encountered while offloading) can be reconnecte

[jira] [Commented] (MINIFICPP-618) Add C2 triggers for local updates

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631027#comment-16631027
 ] 

ASF GitHub Bot commented on MINIFICPP-618:
--

Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/405#discussion_r221072929
  
--- Diff: libminifi/include/properties/Properties.h ---
@@ -60,6 +60,9 @@ class Properties {
   // Get the config value
   bool get(std::string key, std::string &value);
 
+  // Get the config value
--- End diff --

No apology, please. It's my fault for opening early! I truly appreciate 
the input and hope that when I re-open it you have an opportunity to take a 
look!


> Add C2 triggers for local updates
> -
>
> Key: MINIFICPP-618
> URL: https://issues.apache.org/jira/browse/MINIFICPP-618
> Project: NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>






[GitHub] nifi-minifi-cpp pull request #405: MINIFICPP-618: Add C2 triggers, first of ...

2018-09-27 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/405#discussion_r221072929
  
--- Diff: libminifi/include/properties/Properties.h ---
@@ -60,6 +60,9 @@ class Properties {
   // Get the config value
   bool get(std::string key, std::string &value);
 
+  // Get the config value
--- End diff --

No apology, please. It's my fault for opening early! I truly appreciate 
the input and hope that when I re-open it you have an opportunity to take a 
look!


---


[GitHub] nifi-minifi-cpp pull request #405: MINIFICPP-618: Add C2 triggers, first of ...

2018-09-27 Thread ottobackwards
Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/405#discussion_r221070316
  
--- Diff: libminifi/include/properties/Properties.h ---
@@ -60,6 +60,9 @@ class Properties {
   // Get the config value
   bool get(std::string key, std::string &value);
 
+  // Get the config value
--- End diff --

sorry


---


[jira] [Commented] (MINIFICPP-618) Add C2 triggers for local updates

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631011#comment-16631011
 ] 

ASF GitHub Bot commented on MINIFICPP-618:
--

Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/405#discussion_r221070316
  
--- Diff: libminifi/include/properties/Properties.h ---
@@ -60,6 +60,9 @@ class Properties {
   // Get the config value
   bool get(std::string key, std::string &value);
 
+  // Get the config value
--- End diff --

sorry


> Add C2 triggers for local updates
> -
>
> Key: MINIFICPP-618
> URL: https://issues.apache.org/jira/browse/MINIFICPP-618
> Project: NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>






[jira] [Commented] (MINIFICPP-618) Add C2 triggers for local updates

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631008#comment-16631008
 ] 

ASF GitHub Bot commented on MINIFICPP-618:
--

Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/405#discussion_r221069962
  
--- Diff: libminifi/include/properties/Properties.h ---
@@ -60,6 +60,9 @@ class Properties {
   // Get the config value
   bool get(std::string key, std::string &value);
 
+  // Get the config value
--- End diff --

I think you're getting intermediate commits as I work through issues with 
another Apache person -- this one isn't quite ready for review. I'll close the 
PR in the meantime. 


> Add C2 triggers for local updates
> -
>
> Key: MINIFICPP-618
> URL: https://issues.apache.org/jira/browse/MINIFICPP-618
> Project: NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>






[jira] [Commented] (MINIFICPP-618) Add C2 triggers for local updates

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631009#comment-16631009
 ] 

ASF GitHub Bot commented on MINIFICPP-618:
--

Github user phrocker closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/405


> Add C2 triggers for local updates
> -
>
> Key: MINIFICPP-618
> URL: https://issues.apache.org/jira/browse/MINIFICPP-618
> Project: NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>






[GitHub] nifi-minifi-cpp pull request #405: MINIFICPP-618: Add C2 triggers, first of ...

2018-09-27 Thread phrocker
Github user phrocker closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/405


---


[GitHub] nifi-minifi-cpp pull request #405: MINIFICPP-618: Add C2 triggers, first of ...

2018-09-27 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/405#discussion_r221069962
  
--- Diff: libminifi/include/properties/Properties.h ---
@@ -60,6 +60,9 @@ class Properties {
   // Get the config value
   bool get(std::string key, std::string &value);
 
+  // Get the config value
--- End diff --

I think you're getting intermediate commits as I work through issues with 
another Apache person -- this one isn't quite ready for review. I'll close the 
PR in the meantime. 


---


[jira] [Commented] (MINIFICPP-618) Add C2 triggers for local updates

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631004#comment-16631004
 ] 

ASF GitHub Bot commented on MINIFICPP-618:
--

Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/405#discussion_r221068265
  
--- Diff: libminifi/include/properties/Properties.h ---
@@ -60,6 +60,9 @@ class Properties {
   // Get the config value
   bool get(std::string key, std::string &value);
 
+  // Get the config value
--- End diff --

This should document the alternate keys and why it is different from the 
one above


> Add C2 triggers for local updates
> -
>
> Key: MINIFICPP-618
> URL: https://issues.apache.org/jira/browse/MINIFICPP-618
> Project: NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>






[GitHub] nifi-minifi-cpp pull request #405: MINIFICPP-618: Add C2 triggers, first of ...

2018-09-27 Thread ottobackwards
Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/405#discussion_r221068265
  
--- Diff: libminifi/include/properties/Properties.h ---
@@ -60,6 +60,9 @@ class Properties {
   // Get the config value
   bool get(std::string key, std::string &value);
 
+  // Get the config value
--- End diff --

This should document the alternate keys and why it is different from the 
one above


---


[jira] [Updated] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires

2018-09-27 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5557:
--
Fix Version/s: 1.8.0

> PutHDFS "GSSException: No valid credentials provided" when krb ticket expires
> -
>
> Key: NIFI-5557
> URL: https://issues.apache.org/jira/browse/NIFI-5557
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Endre Kovacs
>Assignee: Endre Kovacs
>Priority: Major
> Fix For: 1.8.0
>
>
> When using the *PutHDFS* processor in a kerberized environment, with flow 
> "traffic" that approximately matches or is less frequent than the lifetime of 
> the ticket of the principal, we see this in the log:
> {code:java}
> INFO [Timer-Driven Process Thread-4] o.a.h.io.retry.RetryInvocationHandler 
> Exception while invoking getFileInfo of class 
> ClientNamenodeProtocolTranslatorPB over host2/ip2:8020 after 13 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: Couldn't 
> setup connection for princi...@example.com to host2.example.com/ip2:8020; 
> Host Details : local host is: "host1.example.com/ip1"; destination host is: 
> "host2.example.com":8020; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
> at org.apache.hadoop.ipc.Client.call(Client.java:1479)
> at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy134.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> at sun.reflect.GeneratedMethodAccessor344.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy135.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
> at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:254)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:360)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
> at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:222)
> {code}
> and the flowfile is routed to failure relationship.
> *To reproduce:*
> Create a principal in your KDC with two minutes ticket lifetime,
> and set up a similar flow:
> {code:java}
> GetFile => putHDFS --success--> logAttributes
>        \
>         --fail--> logAttributes
> {code}
>  Copy a file to the input directory of the GetFile processor. If flowfiles 
> arrive much more frequently than the ticket expires:
> {code:java}
> watch -n 5 "cp book.txt /path/to/input"
> {code}
> then the flow runs without issue.
> If we adjust this to:
> {code:java}
> watch -n 121 "cp book.txt /path/to/input"
> {code}
> then we will observe this issue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires

2018-09-27 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck resolved NIFI-5557.
---
Resolution: Fixed

> PutHDFS "GSSException: No valid credentials provided" when krb ticket expires
> -
>
> Key: NIFI-5557
> URL: https://issues.apache.org/jira/browse/NIFI-5557
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Endre Kovacs
>Assignee: Endre Kovacs
>Priority: Major
>
> When using the *PutHDFS* processor in a kerberized environment, with flow 
> traffic that approximately matches or is less frequent than the ticket 
> lifetime of the principal, we see this in the log:
> {code:java}
> INFO [Timer-Driven Process Thread-4] o.a.h.io.retry.RetryInvocationHandler 
> Exception while invoking getFileInfo of class 
> ClientNamenodeProtocolTranslatorPB over host2/ip2:8020 after 13 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: Couldn't 
> setup connection for princi...@example.com to host2.example.com/ip2:8020; 
> Host Details : local host is: "host1.example.com/ip1"; destination host is: 
> "host2.example.com":8020; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
> at org.apache.hadoop.ipc.Client.call(Client.java:1479)
> at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy134.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> at sun.reflect.GeneratedMethodAccessor344.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy135.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
> at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:254)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:360)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
> at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:222)
> {code}
> and the flowfile is routed to the failure relationship.
> *To reproduce:*
> Create a principal in your KDC with two minutes ticket lifetime,
> and set up a similar flow:
> {code:java}
> GetFile => putHDFS --success--> logAttributes
>                   \
>                    --fail--> logAttributes
> {code}
> Copy a file to the input directory of the getFile processor. If flowfiles 
> arrive much more frequently than the ticket's expiration time:
> {code:java}
> watch -n 5 "cp book.txt /path/to/input"
> {code}
> then the flow runs without issue.
> If we adjust this to:
> {code:java}
> watch -n 121 "cp book.txt /path/to/input"
> {code}
> then we will observe this issue.
>  





[jira] [Commented] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630995#comment-16630995
 ] 

ASF GitHub Bot commented on NIFI-5557:
--

Github user jtstorck commented on the issue:

https://github.com/apache/nifi/pull/2971
  
+1, merged to master.  Had to resolve some conflicts in my commit for 
PutHDFSTest after rebasing to master.

Thanks for your contribution, @ekovacs!


> PutHDFS "GSSException: No valid credentials provided" when krb ticket expires
> -
>
> Key: NIFI-5557
> URL: https://issues.apache.org/jira/browse/NIFI-5557
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Endre Kovacs
>Assignee: Endre Kovacs
>Priority: Major
> Fix For: 1.8.0
>
>





[GitHub] nifi issue #2971: NIFI-5557: handling expired ticket by rollback and penaliz...

2018-09-27 Thread jtstorck
Github user jtstorck commented on the issue:

https://github.com/apache/nifi/pull/2971
  
+1, merged to master.  Had to resolve some conflicts in my commit for 
PutHDFSTest after rebasing to master.

Thanks for your contribution, @ekovacs!


---


[jira] [Commented] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630992#comment-16630992
 ] 

ASF GitHub Bot commented on NIFI-5557:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2971







[jira] [Commented] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630990#comment-16630990
 ] 

ASF subversion and git services commented on NIFI-5557:
---

Commit e24388aa7f23f056ec93b3205dd501514806672e in nifi's branch 
refs/heads/master from [~jtstorck]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=e24388a ]

NIFI-5557 Added test in PutHDFSTest for IOException with a nested GSSException
Resolved most of the code warnings in PutHDFSTest

This closes #2971.







[GitHub] nifi pull request #2971: NIFI-5557: handling expired ticket by rollback and ...

2018-09-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2971


---


[jira] [Commented] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630989#comment-16630989
 ] 

ASF subversion and git services commented on NIFI-5557:
---

Commit 0f55cbfb9f49087492a333c59b63e146a1444d55 in nifi's branch 
refs/heads/master from Endre Zoltan Kovacs
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=0f55cbf ]

NIFI-5557: handling expired ticket by rollback and penalization
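The commit message names the fix at a high level: when the IOException thrown during the HDFS write is caused by an expired Kerberos ticket (a nested GSSException, as in the stack traces above), the session is rolled back and the flowfile penalized for a later retry, instead of being routed to failure. A minimal sketch of that control flow, with a stub `Session` type standing in for NiFi's ProcessSession API (the real processor code and signatures are not reproduced here):

```java
import org.ietf.jgss.GSSException;
import java.io.IOException;

// Sketch only: Session is a stand-in for NiFi's ProcessSession; it exists
// here just to show the rollback-and-penalize decision the commit describes.
class ExpiredTicketSketch {

    interface Session {
        void rollback(boolean penalize); // stub: roll back, optionally penalize
        void transferToFailure();        // stub: route the flowfile to failure
    }

    /** Walk the cause chain looking for a Kerberos GSSException. */
    static boolean causedByGss(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof GSSException) {
                return true;
            }
        }
        return false;
    }

    /** Expired ticket: roll back and penalize; anything else: fail. */
    static String handle(IOException e, Session session) {
        if (causedByGss(e)) {
            session.rollback(true); // keep the flowfile for a later retry
            return "rollback";
        }
        session.transferToFailure(); // genuine write failure
        return "failure";
    }

    public static void main(String[] args) {
        Session noop = new Session() {
            public void rollback(boolean penalize) { }
            public void transferToFailure() { }
        };
        // Fabricate the shape of the failure seen in the stack trace above.
        IOException expired = new IOException("Failed on local exception",
                new IOException("Couldn't setup connection",
                        new GSSException(GSSException.NO_CRED)));
        System.out.println(handle(expired, noop));                        // rollback
        System.out.println(handle(new IOException("disk full"), noop));   // failure
    }
}
```

With this shape, a flowfile that fails only because the ticket expired stays in the incoming queue and is retried after re-login, which matches the reproduce scenario (the 121-second `watch` interval no longer routes files to failure).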







[jira] [Updated] (MINIFICPP-624) Internal C2 configuration options are not consistent in naming

2018-09-27 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault updated MINIFICPP-624:
-
Fix Version/s: 0.6.0

> Internal C2 configuration options are not consistent in naming
> --
>
> Key: MINIFICPP-624
> URL: https://issues.apache.org/jira/browse/MINIFICPP-624
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> Make consistent by supporting backwards compatibility
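The description is terse, but the usual pattern for renaming configuration keys while "supporting backwards compatibility" is to resolve a canonical key through a table of legacy aliases. A hedged sketch (written in Java for consistency with the other examples here; the key names are invented placeholders, not MiNiFi C++'s actual option names):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of legacy-alias resolution for renamed config options. The key
// names below are invented for illustration only.
class ConfigAliasSketch {

    // legacy spelling -> canonical spelling
    static final Map<String, String> LEGACY_TO_CANONICAL = new HashMap<>();
    static {
        LEGACY_TO_CANONICAL.put("c2.agent.heartbeat.period",
                                "nifi.c2.agent.heartbeat.period");
    }

    /** Look up the canonical key first, then fall back to legacy spellings. */
    static String lookup(Map<String, String> props, String canonicalKey) {
        if (props.containsKey(canonicalKey)) {
            return props.get(canonicalKey);
        }
        for (Map.Entry<String, String> e : LEGACY_TO_CANONICAL.entrySet()) {
            if (e.getValue().equals(canonicalKey) && props.containsKey(e.getKey())) {
                return props.get(e.getKey()); // old config files keep working
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, String> oldStyle = new HashMap<>();
        oldStyle.put("c2.agent.heartbeat.period", "30 sec");
        // Resolves through the alias table even though only the legacy key is set.
        System.out.println(lookup(oldStyle, "nifi.c2.agent.heartbeat.period"));
    }
}
```

The canonical-name-wins ordering means a config file that sets both spellings behaves predictably, while files written for older releases still resolve.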





[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630972#comment-16630972
 ] 

ASF GitHub Bot commented on NIFI-5612:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/3032
  
Reviewing...


> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0, 1.6.0, 1.7.1
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
> I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
> but not on dozens of others in the same database.
> {code:java}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught 
> Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
>   at 
> org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
>   at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.avro.UnresolvedUnionException: Not in union 
> ["null","int"]: 0
>   at 
> org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709)
>   at 
> org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)
>   ... 15 common frames omitted
> {code}
> I don't know if I can share the database schema – still working with my team 
> on that – but looking at it, I think it has something to do with the 
> signedness of int(1) or tinyint(1), because those two are the only numerical 
> types common to all of the tables.
>  
> *Edit 2018-09-24, so that my update doesn't get buried:*
> I am able to reproduce the exception using
>  * Vagrant 2.1.1
>  * Virtualbox 5.2.18 r124319
>  * Ubuntu 18.04
>  * MySQL 5.0.81 (as close as I can get to the 5.0.80 Enterprise Edition in 
> use on the system where I observed this failure first)
>  * MySQL Connector/J 5.1.46
>  * NiFi 1.7.1
> With this table definition and data:
> {code:sql}
> create table fails ( 
>   fails int(1) unsigned NOT NULL default '0' 
> ) ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
> insert into fails values ();
> {code}
> and an ExecuteSQL processor set up t
[GitHub] nifi issue #3032: NIFI-5612: Support JDBC drivers that return Long for unsig...

2018-09-27 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/3032
  
Reviewing...


---


[jira] [Created] (MINIFICPP-624) Internal C2 configuration options are not consistent in naming

2018-09-27 Thread Mr TheSegfault (JIRA)
Mr TheSegfault created MINIFICPP-624:


 Summary: Internal C2 configuration options are not consistent in 
naming
 Key: MINIFICPP-624
 URL: https://issues.apache.org/jira/browse/MINIFICPP-624
 Project: NiFi MiNiFi C++
  Issue Type: Bug
Reporter: Mr TheSegfault
Assignee: Mr TheSegfault


Make consistent by supporting backwards compatibility





[jira] [Updated] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-27 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5612:
---
Component/s: (was: Core Framework)
 Extensions


[jira] [Updated] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-27 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5612:
---
Status: Patch Available  (was: Open)

> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.7.1, 1.6.0, 1.5.0
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
> I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
> but not on dozens of others in the same database.
> {code:java}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught 
> Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
>   at 
> org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
>   at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.avro.UnresolvedUnionException: Not in union 
> ["null","int"]: 0
>   at 
> org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709)
>   at 
> org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)
>   ... 15 common frames omitted
> {code}
> I don't know if I can share the database schema (still working with my team 
> on that), but looking at it, I think it has something to do with the 
> signedness of int(1) or tinyint(1), because those two are the only numerical 
> types common to all of the tables.
>  
> *Edit 2018-09-24, so that my update doesn't get buried:*
> I am able to reproduce the exception using
>  * Vagrant 2.1.1
>  * Virtualbox 5.2.18 r124319
>  * Ubuntu 18.04
>  * MySQL 5.0.81 (as close as I can get to the 5.0.80 Enterprise Edition in 
> use on the system where I observed this failure first)
>  * MySQL Connector/J 5.1.46
>  * NiFi 1.7.1
> With this table definition and data:
> {code:sql}
> create table fails ( 
>   fails int(1) unsigned NOT NULL default '0' 
> ) ENGINE=InnoDB AUTO_INCREMENT=16527 DEFAULT CHARSET=latin1;
> insert into fails values ();
> {code}
> and an ExecuteSQL processor set up to access that table.
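The likely mechanism behind this reproduction can be illustrated with a minimal sketch, under the assumption that MySQL Connector/J returns an INT UNSIGNED column as java.lang.Long (its maximum value, 4294967295, exceeds Java's int range), while Avro's union resolution matches a datum by Java type. The class and method below are hypothetical stand-ins, not NiFi or Avro code.

```java
// Sketch only (assumption): models Avro's resolveUnion type dispatch for the
// union ["null","int"]. A Long value of 0, as JDBC yields for INT UNSIGNED,
// matches neither the "null" branch nor the "int" (Integer) branch.
public class UnionMismatchSketch {
    static boolean resolves(Object datum) {
        // Simplified stand-in for GenericData.resolveUnion's branch matching.
        return datum == null || datum instanceof Integer;
    }

    public static void main(String[] args) {
        Object fromSignedInt = 0;     // autoboxed to Integer -> resolves
        Object fromUnsignedInt = 0L;  // Long for INT UNSIGNED -> no matching branch
        System.out.println(resolves(fromSignedInt));   // true
        System.out.println(resolves(fromUnsignedInt)); // false -> UnresolvedUnionException
    }
}
```

Under this reading, the fix is to widen the Avro schema (e.g. a ["null","long"] union) for unsigned integer columns rather than to change the data.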



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Updated] (NIFI-5640) Improve efficiency of Avro Record Reader

2018-09-27 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5640:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Improve efficiency of Avro Record Reader
> 
>
> Key: NIFI-5640
> URL: https://issues.apache.org/jira/browse/NIFI-5640
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> There are a few things that we are doing in the Avro Reader that cause subpar 
> performance. Firstly, in the AvroTypeUtil, when converting an Avro 
> GenericRecord to our Record, the building of the RecordSchema is slow because 
> we call toString() (which is quite expensive) on the Avro schema in order to 
> provide a textual version to RecordSchema. However, the text is typically not 
> used and it is optional to provide the schema text, so we should avoid 
> calling Schema#toString() whenever possible.
> The AvroTypeUtil class also calls #getNonNullSubSchemas() a lot. In some 
> cases we don't really need to do this and can avoid creating the sublist. In 
> other cases, we do need to call it. However, the method uses the stream() 
> method on an existing List just to filter out 0 or 1 elements. While use of 
> the stream() method makes the code very readable, it is quite a bit more 
> expensive than just iterating over the existing list and adding to an 
> ArrayList. We should avoid use of the {{stream()}} method for trivial pieces 
> of code in time-critical parts of the codebase.
> Additionally, I've found that Avro's GenericDatumReader is extremely 
> inefficient, at least in some cases, when reading Strings, because it uses an 
> IdentityHashMap to cache details about the schema. IdentityHashMap is far 
> slower than a plain HashMap here, so we could subclass the reader to avoid 
> the slow caching.
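The stream()-versus-loop trade described above can be sketched as follows. The names are hypothetical illustrations, not NiFi's actual AvroTypeUtil code: both methods drop the "null" branch from a union's sub-schemas, but the plain loop avoids the stream pipeline's extra allocation and indirection in hot paths.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch only: filtering out at most one element ("null") from a small list.
public class NonNullSubSchemas {
    static List<String> withStream(List<String> subSchemas) {
        // Readable, but builds a stream pipeline per call.
        return subSchemas.stream()
                .filter(s -> !"null".equals(s))
                .collect(Collectors.toList());
    }

    static List<String> withLoop(List<String> subSchemas) {
        // Same result with a single ArrayList and a plain loop.
        final List<String> nonNull = new ArrayList<>(subSchemas.size());
        for (String s : subSchemas) {
            if (!"null".equals(s)) {
                nonNull.add(s);
            }
        }
        return nonNull;
    }

    public static void main(String[] args) {
        List<String> union = Arrays.asList("null", "int");
        System.out.println(withStream(union)); // [int]
        System.out.println(withLoop(union));   // [int]
    }
}
```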



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5640) Improve efficiency of Avro Record Reader

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630959#comment-16630959
 ] 

ASF GitHub Bot commented on NIFI-5640:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3036








[jira] [Commented] (NIFI-5640) Improve efficiency of Avro Record Reader

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630956#comment-16630956
 ] 

ASF GitHub Bot commented on NIFI-5640:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/3036
  
+1 LGTM, ran full build and tried various Avro conversions, 
reading/writing. Thanks for the improvement! Merging to master







[jira] [Commented] (NIFI-5640) Improve efficiency of Avro Record Reader

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630957#comment-16630957
 ] 

ASF subversion and git services commented on NIFI-5640:
---

Commit 2e1005e884cef70ea9c2eb1152d70e546ad2b5c3 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=2e1005e ]

NIFI-5640: Improved efficiency of Avro Reader and some methods of AvroTypeUtil. 
Also switched ServiceStateTransition to using read/write locks instead of 
synchronized blocks because profiling showed that significant time was spent in 
determining state of a Controller Service when attempting to use it. Switching 
to a ReadLock should provide better performance there.

Signed-off-by: Matthew Burgess 

This closes #3036
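
The read/write-lock change the commit message describes can be sketched as below. The class and fields are hypothetical, not ServiceStateTransition's actual code: frequent state checks take the shared read lock so they no longer serialize behind one monitor, while the rare transitions take the exclusive write lock.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: synchronized blocks replaced by a ReadWriteLock so many
// threads can query the state concurrently; writes remain exclusive.
public class StateTransitionSketch {
    public enum State { DISABLED, ENABLING, ENABLED }

    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private State state = State.DISABLED;

    public State getState() {              // hot path: shared read lock
        lock.readLock().lock();
        try {
            return state;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void transitionTo(State next) { // rare path: exclusive write lock
        lock.writeLock().lock();
        try {
            state = next;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        StateTransitionSketch s = new StateTransitionSketch();
        s.transitionTo(State.ENABLED);
        System.out.println(s.getState()); // ENABLED
    }
}
```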









[jira] [Commented] (NIFI-5641) GetMongo should be able to copy attributes from input flowfile to result set flowfiles

2018-09-27 Thread Mike Thomsen (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630907#comment-16630907
 ] 

Mike Thomsen commented on NIFI-5641:


[~markap14] so create(FlowFile) copies all of the attributes?

> GetMongo should be able to copy attributes from input flowfile to result set 
> flowfiles
> --
>
> Key: NIFI-5641
> URL: https://issues.apache.org/jira/browse/NIFI-5641
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Minor
>
> User requested that GetMongo should be able to copy the attributes that come 
> from the input flowfile to the result set flowfiles. Should be implemented as 
> an optional property, off by default.





[jira] [Commented] (NIFI-5640) Improve efficiency of Avro Record Reader

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630900#comment-16630900
 ] 

ASF GitHub Bot commented on NIFI-5640:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/3036
  
Reviewing...









[jira] [Created] (MINIFICPP-623) Add thread backtrace to describe command

2018-09-27 Thread Mr TheSegfault (JIRA)
Mr TheSegfault created MINIFICPP-623:


 Summary: Add thread backtrace to describe command
 Key: MINIFICPP-623
 URL: https://issues.apache.org/jira/browse/MINIFICPP-623
 Project: NiFi MiNiFi C++
  Issue Type: New Feature
Reporter: Mr TheSegfault


We should be able to perform a remote jstack (as it were) to get debug 
information remotely through the predefined C2 capabilities.





[jira] [Created] (NIFI-5642) QueryCassandra processor : output FlowFiles as soon as fetch_size is reached

2018-09-27 Thread JIRA
André Gomes Lamas Otero created NIFI-5642:
-

 Summary: QueryCassandra processor : output FlowFiles as soon as 
fetch_size is reached
 Key: NIFI-5642
 URL: https://issues.apache.org/jira/browse/NIFI-5642
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.7.1
Reporter: André Gomes Lamas Otero


When I'm using QueryCassandra alongside the fetch_size parameter, I expected 
that as soon as the reader reaches fetch_size, the processor would output some 
data to be processed by the next processor, but QueryCassandra reads all the 
data and only then outputs the flow files.

I'll start working on a patch for this situation; I'd appreciate any 
suggestions.
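
The requested behavior can be sketched generically as below, under the assumption that the patch would emit one output batch per fetched page instead of buffering the whole result set. The class, the Integer rows, and the batch lists are hypothetical stand-ins for Cassandra rows and FlowFile output, not NiFi's QueryCassandra code.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch only: emit a batch every time fetchSize rows have been read,
// rather than collecting the entire result set first.
public class PagedEmitSketch {
    static List<List<Integer>> emitInPages(Iterator<Integer> rows, int fetchSize) {
        List<List<Integer>> emitted = new ArrayList<>();
        List<Integer> page = new ArrayList<>(fetchSize);
        while (rows.hasNext()) {
            page.add(rows.next());
            if (page.size() == fetchSize) {   // page full: emit now, don't wait
                emitted.add(page);
                page = new ArrayList<>(fetchSize);
            }
        }
        if (!page.isEmpty()) {
            emitted.add(page);                // final partial page
        }
        return emitted;
    }

    public static void main(String[] args) {
        List<Integer> rows = List.of(1, 2, 3, 4, 5);
        System.out.println(emitInPages(rows.iterator(), 2)); // [[1, 2], [3, 4], [5]]
    }
}
```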





[jira] [Commented] (NIFI-5516) Allow data in a Connection to be Load-Balanced across cluster

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630785#comment-16630785
 ] 

ASF GitHub Bot commented on NIFI-5516:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r221013395
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/partition/RemoteQueuePartition.java
 ---
@@ -0,0 +1,337 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.queue.clustered.partition;
+
+import org.apache.nifi.cluster.protocol.NodeIdentifier;
+import org.apache.nifi.controller.queue.DropFlowFileRequest;
+import org.apache.nifi.controller.queue.FlowFileQueueContents;
+import org.apache.nifi.controller.queue.LoadBalancedFlowFileQueue;
+import org.apache.nifi.controller.queue.QueueSize;
+import org.apache.nifi.controller.queue.RemoteQueuePartitionDiagnostics;
+import 
org.apache.nifi.controller.queue.StandardRemoteQueuePartitionDiagnostics;
+import org.apache.nifi.controller.queue.SwappablePriorityQueue;
+import 
org.apache.nifi.controller.queue.clustered.TransferFailureDestination;
+import 
org.apache.nifi.controller.queue.clustered.client.async.AsyncLoadBalanceClientRegistry;
+import 
org.apache.nifi.controller.queue.clustered.client.async.TransactionCompleteCallback;
+import 
org.apache.nifi.controller.queue.clustered.client.async.TransactionFailureCallback;
+import org.apache.nifi.controller.repository.ContentNotFoundException;
+import org.apache.nifi.controller.repository.ContentRepository;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.FlowFileRepository;
+import org.apache.nifi.controller.repository.RepositoryRecord;
+import org.apache.nifi.controller.repository.StandardRepositoryRecord;
+import org.apache.nifi.controller.repository.SwapSummary;
+import org.apache.nifi.controller.repository.claim.ContentClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaim;
+import org.apache.nifi.flowfile.FlowFilePrioritizer;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.provenance.ProvenanceEventBuilder;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventRepository;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.provenance.StandardProvenanceEventRecord;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+/**
+ * A Queue Partition that is responsible for transferring FlowFiles to 
another node in the cluster
+ */
+public class RemoteQueuePartition implements QueuePartition {
+private static final Logger logger = 
LoggerFactory.getLogger(RemoteQueuePartition.class);
+
+private final NodeIdentifier nodeIdentifier;
+private final SwappablePriorityQueue priorityQueue;
+private final LoadBalancedFlowFileQueue flowFileQueue;
+private final TransferFailureDestination failureDestination;
+
+private final FlowFileRepository flowFileRepo;
+private final ProvenanceEventRepository provRepo;
+private final ContentRepository contentRepo;
+private final AsyncLoadBalanceClientRegistry clientRegistry;
+
+private boolean running = false;
+private final String description;
+
+public RemoteQueuePartition(final NodeIdentifier nodeId, final 
SwappablePriorityQueue priorityQueue, final TransferFailureDestination 
failureDestination


[GitHub] nifi pull request #2947: [WIP] NIFI-5516: Implement Load-Balanced Connection...

2018-09-27 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220982191
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/server/StandardLoadBalanceProtocol.java
 ---
@@ -0,0 +1,578 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with

[jira] [Commented] (NIFI-5516) Allow data in a Connection to be Load-Balanced across cluster

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630661#comment-16630661
 ] 

ASF GitHub Bot commented on NIFI-5516:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220982191
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/server/StandardLoadBalanceProtocol.java
 ---
@@ -0,0 +1,578 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.queue.clustered.server;
+
+import org.apache.nifi.connectable.Connection;
+import org.apache.nifi.controller.FlowController;
+import org.apache.nifi.controller.queue.FlowFileQueue;
+import org.apache.nifi.controller.queue.LoadBalanceCompression;
+import org.apache.nifi.controller.queue.LoadBalancedFlowFileQueue;
+import org.apache.nifi.controller.repository.ContentRepository;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.FlowFileRepository;
+import org.apache.nifi.controller.repository.RepositoryRecord;
+import org.apache.nifi.controller.repository.StandardFlowFileRecord;
+import org.apache.nifi.controller.repository.StandardRepositoryRecord;
+import org.apache.nifi.controller.repository.claim.ContentClaim;
+import org.apache.nifi.controller.repository.io.LimitedInputStream;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.provenance.ProvenanceRepository;
+import org.apache.nifi.provenance.StandardProvenanceEventRecord;
+import org.apache.nifi.remote.StandardVersionNegotiator;
+import org.apache.nifi.remote.VersionNegotiator;
+import org.apache.nifi.security.util.CertificateUtils;
+import org.apache.nifi.stream.io.ByteCountingInputStream;
+import org.apache.nifi.stream.io.LimitingInputStream;
+import org.apache.nifi.stream.io.StreamUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.net.ssl.SSLPeerUnverifiedException;
+import javax.net.ssl.SSLSession;
+import javax.net.ssl.SSLSocket;
+import java.io.BufferedInputStream;
+import java.io.BufferedOutputStream;
+import java.io.DataInputStream;
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.Socket;
+import java.net.SocketTimeoutException;
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+import java.security.cert.Certificate;
+import java.security.cert.CertificateException;
+import java.security.cert.X509Certificate;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.stream.Collectors;
+import java.util.zip.CRC32;
+import java.util.zip.CheckedInputStream;
+import java.util.zip.Checksum;
+import java.util.zip.GZIPInputStream;
+
+import static org.apache.nifi.controller.queue.clustered.protocol.LoadBalanceProtocolConstants.ABORT_PROTOCOL_NEGOTIATION;
+import static org.apache.nifi.controller.queue.clustered.protocol.LoadBalanceProtocolConstants.ABORT_TRANSACTION;
+import static org.apache.nifi.controller.queue.clustered.protocol.LoadBalanceProtocolConstants.COMPLETE_TRANSACTION;
+import static org.apache.nifi.controller.queue.clustered.protocol.LoadBalanceProtocolConstants.CONFIRM_CHECKSUM;
+import static org.apache.nifi.controller.queue.clustered.protocol.LoadBalanceProtocolConstants.CONFIRM_COMPLETE_TRANSACTION;
+import static org.apache.nifi.controller.queue.clustered.protocol.LoadBalanceProtocolConstants.DATA_FRAME_FOLLOWS;
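
Judging from the imports quoted in the diff above (CRC32, CheckedInputStream, DataInputStream, and the CONFIRM_CHECKSUM constant), the receiving side of the load-balance protocol verifies each transaction against a checksum computed as bytes arrive. The following is a minimal, hypothetical sketch of that idea only; the class and method names are invented for illustration and this is not the actual StandardLoadBalanceProtocol code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.CRC32;
import java.util.zip.CheckedInputStream;

public class ChecksumFrameDemo {

    // Sender side: length-prefix the payload and append a CRC32 of it.
    public static byte[] frame(byte[] payload) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            CRC32 crc = new CRC32();
            crc.update(payload, 0, payload.length);
            out.writeInt(payload.length);
            out.write(payload);
            out.writeLong(crc.getValue());
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Receiver side: read the payload through a CheckedInputStream so the
    // checksum accumulates as bytes arrive, then compare with the trailer.
    public static byte[] readFrame(byte[] wire) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire));
            int length = in.readInt();
            CheckedInputStream checked = new CheckedInputStream(in, new CRC32());
            byte[] payload = new byte[length];
            new DataInputStream(checked).readFully(payload);
            long expected = in.readLong();
            if (checked.getChecksum().getValue() != expected) {
                throw new IOException("checksum mismatch; transaction would be aborted");
            }
            return payload;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The same CheckedInputStream technique works over a socket stream, which is presumably why the real protocol imports it alongside the framing constants.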

[jira] [Commented] (NIFI-5516) Allow data in a Connection to be Load-Balanced across cluster

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630653#comment-16630653
 ] 

ASF GitHub Bot commented on NIFI-5516:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220979881
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/server/StandardLoadBalanceProtocol.java
 ---
@@ -0,0 +1,578 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.queue.clustered.server;
+
+import org.apache.nifi.connectable.Connection;
+import org.apache.nifi.controller.FlowController;
+import org.apache.nifi.controller.queue.FlowFileQueue;
+import org.apache.nifi.controller.queue.LoadBalanceCompression;
+import org.apache.nifi.controller.queue.LoadBalancedFlowFileQueue;
+import org.apache.nifi.controller.repository.ContentRepository;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.FlowFileRepository;
+import org.apache.nifi.controller.repository.RepositoryRecord;
+import org.apache.nifi.controller.repository.StandardFlowFileRecord;
+import org.apache.nifi.controller.repository.StandardRepositoryRecord;
+import org.apache.nifi.controller.repository.claim.ContentClaim;
+import org.apache.nifi.controller.repository.io.LimitedInputStream;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.provenance.ProvenanceRepository;
+import org.apache.nifi.provenance.StandardProvenanceEventRecord;
+import org.apache.nifi.remote.StandardVersionNegotiator;
+import org.apache.nifi.remote.VersionNegotiator;
+import org.apache.nifi.security.util.CertificateUtils;
+import org.apache.nifi.stream.io.ByteCountingInputStream;
+import org.apache.nifi.stream.io.LimitingInputStream;
+import org.apache.nifi.stream.io.StreamUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.net.ssl.SSLPeerUnverifiedException;
+import javax.net.ssl.SSLSession;
+import javax.net.ssl.SSLSocket;
+import java.io.BufferedInputStream;
+import java.io.BufferedOutputStream;
+import java.io.DataInputStream;
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.Socket;
+import java.net.SocketTimeoutException;
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+import java.security.cert.Certificate;
+import java.security.cert.CertificateException;
+import java.security.cert.X509Certificate;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.stream.Collectors;
+import java.util.zip.CRC32;
+import java.util.zip.CheckedInputStream;
+import java.util.zip.Checksum;
+import java.util.zip.GZIPInputStream;
+
+import static org.apache.nifi.controller.queue.clustered.protocol.LoadBalanceProtocolConstants.ABORT_PROTOCOL_NEGOTIATION;
+import static org.apache.nifi.controller.queue.clustered.protocol.LoadBalanceProtocolConstants.ABORT_TRANSACTION;
+import static org.apache.nifi.controller.queue.clustered.protocol.LoadBalanceProtocolConstants.COMPLETE_TRANSACTION;
+import static org.apache.nifi.controller.queue.clustered.protocol.LoadBalanceProtocolConstants.CONFIRM_CHECKSUM;
+import static org.apache.nifi.controller.queue.clustered.protocol.LoadBalanceProtocolConstants.CONFIRM_COMPLETE_TRANSACTION;
+import static org.apache.nifi.controller.queue.clustered.protocol.LoadBalanceProtocolConstants.DATA_FRAME_FOLLOWS;

[jira] [Commented] (NIFI-5516) Allow data in a Connection to be Load-Balanced across cluster

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630648#comment-16630648
 ] 

ASF GitHub Bot commented on NIFI-5516:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220979317
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-client-dto/src/main/java/org/apache/nifi/web/api/dto/diagnostics/RemoteQueuePartitionDTO.java
 ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.web.api.dto.diagnostics;
+
+import io.swagger.annotations.ApiModelProperty;
+
+import javax.xml.bind.annotation.XmlType;
+
+@XmlType(name = "remoteQueuePartition")
+public class RemoteQueuePartitionDTO {
+private String nodeId;
+private int totalFlowFileCount;
+private long totalByteCount;
--- End diff --

Wow good catch, I completely overlooked that. Will address.


> Allow data in a Connection to be Load-Balanced across cluster
> -
>
> Key: NIFI-5516
> URL: https://issues.apache.org/jira/browse/NIFI-5516
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> Allow user to configure a Connection to be load balanced across the cluster. 
> For more information, see Feature Proposal at 
> https://cwiki.apache.org/confluence/display/NIFI/Load-Balanced+Connections



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2947: [WIP] NIFI-5516: Implement Load-Balanced Connection...

2018-09-27 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220979549
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-client-dto/src/main/java/org/apache/nifi/web/api/dto/diagnostics/LocalQueuePartitionDTO.java
 ---
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.web.api.dto.diagnostics;
+
+import io.swagger.annotations.ApiModelProperty;
+
+import javax.xml.bind.annotation.XmlType;
+
+@XmlType(name = "localQueuePartition")
+public class LocalQueuePartitionDTO {
+private int totalFlowFileCount;
+private long totalByteCount;
--- End diff --

Yup :)


---
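
For readers without the PR checked out: the class under review is a plain annotated bean. A stripped-down stand-in for just the fields quoted in the diff above — Swagger/JAXB annotations omitted so it compiles standalone, and with comments that are my gloss rather than the PR's — looks like:

```java
// Hypothetical sketch of the counter fields in LocalQueuePartitionDTO.
// Note the type split the reviewers are discussing: the FlowFile count
// fits an int, but a byte total overflows int quickly, so it is a long.
public class LocalQueuePartitionSketch {
    private int totalFlowFileCount;
    private long totalByteCount;

    public int getTotalFlowFileCount() { return totalFlowFileCount; }
    public void setTotalFlowFileCount(int count) { this.totalFlowFileCount = count; }
    public long getTotalByteCount() { return totalByteCount; }
    public void setTotalByteCount(long count) { this.totalByteCount = count; }
}
```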


[GitHub] nifi pull request #2947: [WIP] NIFI-5516: Implement Load-Balanced Connection...

2018-09-27 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220978179
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/server/ConnectionLoadBalanceServer.java
 ---
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.queue.clustered.server;
+
+import org.apache.nifi.engine.FlowEngine;
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.reporting.Severity;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.SSLServerSocket;
+import java.io.IOException;
+import java.net.InetAddress;
+import java.net.ServerSocket;
+import java.net.Socket;
+import java.net.SocketTimeoutException;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+
+public class ConnectionLoadBalanceServer {
+private static final Logger logger = LoggerFactory.getLogger(ConnectionLoadBalanceServer.class);
+
+private final String hostname;
+private final int port;
+private final SSLContext sslContext;
+private final ExecutorService threadPool;
+private final LoadBalanceProtocol loadBalanceProtocol;
+private final int connectionTimeoutMillis;
+private final int numThreads;
+private final EventReporter eventReporter;
+
+private volatile Set<CommunicateAction> communicationActions = Collections.emptySet();
+private final BlockingQueue<Socket> connectionQueue = new LinkedBlockingQueue<>();
+
+private volatile AcceptConnection acceptConnection;
+private volatile ServerSocket serverSocket;
+private volatile boolean stopped = true;
+
+public ConnectionLoadBalanceServer(final String hostname, final int port, final SSLContext sslContext, final int numThreads, final LoadBalanceProtocol loadBalanceProtocol,
+   final EventReporter eventReporter, final int connectionTimeoutMillis) {
+this.hostname = hostname;
+this.port = port;
+this.sslContext = sslContext;
+this.loadBalanceProtocol = loadBalanceProtocol;
+this.connectionTimeoutMillis = connectionTimeoutMillis;
+this.numThreads = numThreads;
+this.eventReporter = eventReporter;
+
+threadPool = new FlowEngine(numThreads, "Load Balance Server");
+}
+
+public void start() throws IOException {
+if (!stopped) {
+return;
+}
+
+stopped = false;
+if (serverSocket != null) {
+return;
+}
+
+try {
+serverSocket = createServerSocket();
+} catch (final Exception e) {
+throw new IOException("Could not begin listening for incoming connections in order to load balance data across the cluster. Please verify the values of the " +
+"'nifi.cluster.load.balance.port' and 'nifi.cluster.load.balance.host' properties as well as the 'nifi.security.*' properties", e);
+}
+
+final Set<CommunicateAction> actions = new HashSet<>(numThreads);
+for (int i=0; i < numThreads; i++) {
+final CommunicateAction action = new CommunicateAction(loadBalanceProtocol);
+actions.add(action);
+threadPool.submit(action);
+}
+
+this.communicationActions = actions;
+
+acceptConnection = new AcceptConnection(serverSocket);
+final Thread receiveConnectionThread = new Thread(acceptConnection);
+receiveConnectionThread.setName("Receive Queue Load-Balan
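
The server quoted above hands accepted sockets to a fixed pool of worker threads and signals shutdown with a volatile `stopped` flag. That dispatch pattern, reduced to plain strings instead of sockets (a sketch under those assumptions, not NiFi code; names are invented), is roughly:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class WorkerPoolDemo {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final ConcurrentLinkedQueue<String> handled = new ConcurrentLinkedQueue<>();
    private final ExecutorService pool;
    private volatile boolean stopped = false;  // workers poll this flag, as the server does

    public WorkerPoolDemo(int numThreads) {
        pool = Executors.newFixedThreadPool(numThreads);
        for (int i = 0; i < numThreads; i++) {
            pool.submit(() -> {
                while (!stopped) {
                    try {
                        // poll with a timeout so workers can observe 'stopped'
                        String item = queue.poll(10, TimeUnit.MILLISECONDS);
                        if (item != null) {
                            handled.add(item);  // stand-in for "run the protocol on this socket"
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });
        }
    }

    // Stand-in for the accept loop enqueueing a freshly accepted socket.
    public void submit(String item) {
        queue.add(item);
    }

    // Drain the queue, flip the flag, and wait for workers to exit.
    public List<String> stopAndDrain() {
        try {
            while (!queue.isEmpty()) {
                Thread.sleep(5);
            }
            stopped = true;
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return new ArrayList<>(handled);
    }
}
```

The real class swaps strings for `Socket` instances and the workers for `CommunicateAction` runnables, but the queue-plus-flag lifecycle is the same shape.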

[jira] [Commented] (NIFI-5516) Allow data in a Connection to be Load-Balanced across cluster

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630643#comment-16630643
 ] 

ASF GitHub Bot commented on NIFI-5516:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220978179
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/server/ConnectionLoadBalanceServer.java
 ---
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.queue.clustered.server;
+
+import org.apache.nifi.engine.FlowEngine;
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.reporting.Severity;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.SSLServerSocket;
+import java.io.IOException;
+import java.net.InetAddress;
+import java.net.ServerSocket;
+import java.net.Socket;
+import java.net.SocketTimeoutException;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+
+public class ConnectionLoadBalanceServer {
+private static final Logger logger = 
LoggerFactory.getLogger(ConnectionLoadBalanceServer.class);
+
+private final String hostname;
+private final int port;
+private final SSLContext sslContext;
+private final ExecutorService threadPool;
+private final LoadBalanceProtocol loadBalanceProtocol;
+private final int connectionTimeoutMillis;
+private final int numThreads;
+private final EventReporter eventReporter;
+
+private volatile Set communicationActions = 
Collections.emptySet();
+private final BlockingQueue connectionQueue = new 
LinkedBlockingQueue<>();
+
+private volatile AcceptConnection acceptConnection;
+private volatile ServerSocket serverSocket;
+private volatile boolean stopped = true;
+
+public ConnectionLoadBalanceServer(final String hostname, final int 
port, final SSLContext sslContext, final int numThreads, final 
LoadBalanceProtocol loadBalanceProtocol,
+   final EventReporter eventReporter, 
final int connectionTimeoutMillis) {
+this.hostname = hostname;
+this.port = port;
+this.sslContext = sslContext;
+this.loadBalanceProtocol = loadBalanceProtocol;
+this.connectionTimeoutMillis = connectionTimeoutMillis;
+this.numThreads = numThreads;
+this.eventReporter = eventReporter;
+
+threadPool = new FlowEngine(numThreads, "Load Balance Server");
+}
+
+public void start() throws IOException {
+if (!stopped) {
+return;
+}
+
+stopped = false;
+if (serverSocket != null) {
+return;
+}
+
+try {
+serverSocket = createServerSocket();
+} catch (final Exception e) {
+throw new IOException("Could not begin listening for incoming connections in order to load balance data across the cluster. Please verify the values of the " +
+"'nifi.cluster.load.balance.port' and 'nifi.cluster.load.balance.host' properties as well as the 'nifi.security.*' properties", e);
+}
+
+final Set<CommunicateAction> actions = new HashSet<>(numThreads);
+for (int i = 0; i < numThreads; i++) {
+final CommunicateAction action = new CommunicateAction(loadBalanceProtocol);
+actions.add(action);
+threadPool.submit(action);
+}
+
+this.commu
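The ConnectionLoadBalanceServer diff above pairs a ServerSocket accept loop with a pool of CommunicateAction workers that drain a shared connection queue. A minimal, self-contained sketch of that accept-loop/worker-queue pattern (all names here are illustrative stand-ins, not the NiFi API):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: one thread accepts sockets onto a queue, and worker
// threads (playing the role of CommunicateAction) would drain it to run the protocol.
public class AcceptLoopSketch {
    final ServerSocket serverSocket;
    final BlockingQueue<Socket> connectionQueue = new LinkedBlockingQueue<>();
    private final ExecutorService pool;
    private volatile boolean stopped = false;

    AcceptLoopSketch(int numWorkers) throws IOException {
        serverSocket = new ServerSocket(0);          // port 0 = any free port
        pool = Executors.newFixedThreadPool(numWorkers + 1);
    }

    void start() {
        pool.submit(() -> {                          // the accept loop
            while (!stopped) {
                try {
                    connectionQueue.add(serverSocket.accept());
                } catch (IOException e) {
                    return;                          // socket closed -> stop accepting
                }
            }
        });
        // real workers would loop: connectionQueue.take(); then handle the socket
    }

    Socket awaitConnection(long seconds) throws InterruptedException {
        return connectionQueue.poll(seconds, TimeUnit.SECONDS);
    }

    void stop() throws IOException {
        stopped = true;
        serverSocket.close();
        pool.shutdownNow();
    }

    int getPort() {
        return serverSocket.getLocalPort();
    }
}
```

Separating the accept thread from the worker pool keeps a slow protocol exchange from blocking new incoming connections, which is the same design choice the real class makes with its queue of accepted sockets.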

[jira] [Commented] (NIFI-5516) Allow data in a Connection to be Load-Balanced across cluster

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630636#comment-16630636
 ] 

ASF GitHub Bot commented on NIFI-5516:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220976418
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/partition/FlowFilePartitioner.java
 ---
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.queue.clustered.partition;
+
+import org.apache.nifi.controller.repository.FlowFileRecord;
+
+public interface FlowFilePartitioner {
+
+/**
+ * Determines which partition the given FlowFile should go to
+ *
+ * @param flowFile the FlowFile to partition
+ * @param partitions the partitions to choose from
+ * @param localPartition the local partition, which is also included in the given array of partitions
+ * @return the partition for the FlowFile
+ */
+QueuePartition getPartition(FlowFileRecord flowFile, QueuePartition[] partitions, QueuePartition localPartition);
+
+/**
+ * @return true if a change in the size of a cluster should result in re-balancing all FlowFiles in queue,
+ * false if a change in the size of a cluster does not require re-balancing.
+ */
+boolean isRebalanceOnClusterResize();
--- End diff --

I disagree - I think we should stick with the standard Java convention of `isXYZ()` being a simple method that returns a boolean.
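For context, one of the simplest strategies this interface admits is round-robin partitioning. A standalone sketch of the idea (the types below are minimal stand-ins, not NiFi's QueuePartition/FlowFileRecord):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only: minimal stand-ins so the partitioning idea runs
// standalone. Names here are NOT the real NiFi API.
public class RoundRobinSketch {
    interface Partition {
        String name();
    }

    // Mirrors the shape of FlowFilePartitioner.getPartition: pick one of the
    // given partitions for each incoming item, cycling through them in order.
    static class RoundRobinPartitioner {
        private final AtomicLong counter = new AtomicLong(0);

        Partition getPartition(Partition[] partitions) {
            int index = (int) (counter.getAndIncrement() % partitions.length);
            return partitions[index];
        }

        // Corresponds to isRebalanceOnClusterResize(): round-robin does not
        // need to redistribute already-queued items when the cluster resizes.
        boolean isRebalanceOnClusterResize() {
            return false;
        }
    }

    public static void main(String[] args) {
        Partition[] parts = { () -> "node-1", () -> "node-2", () -> "node-3" };
        RoundRobinPartitioner partitioner = new RoundRobinPartitioner();
        for (int i = 0; i < 4; i++) {
            System.out.println(partitioner.getPartition(parts).name());
        }
    }
}
```

A partition-by-attribute strategy, by contrast, would typically answer `true` from the rebalance method, since the attribute-to-node mapping shifts when nodes join or leave.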


> Allow data in a Connection to be Load-Balanced across cluster
> -
>
> Key: NIFI-5516
> URL: https://issues.apache.org/jira/browse/NIFI-5516
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> Allow user to configure a Connection to be load balanced across the cluster. 
> For more information, see Feature Proposal at 
> https://cwiki.apache.org/confluence/display/NIFI/Load-Balanced+Connections



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (NIFI-5516) Allow data in a Connection to be Load-Balanced across cluster

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630635#comment-16630635
 ] 

ASF GitHub Bot commented on NIFI-5516:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220975941
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/client/async/nio/NioAsyncLoadBalanceClientTask.java
 ---
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.queue.clustered.client.async.nio;
+
+import org.apache.nifi.cluster.coordination.ClusterCoordinator;
+import org.apache.nifi.cluster.coordination.node.NodeConnectionState;
+import org.apache.nifi.cluster.coordination.node.NodeConnectionStatus;
+import org.apache.nifi.cluster.protocol.NodeIdentifier;
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.reporting.Severity;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class NioAsyncLoadBalanceClientTask implements Runnable {
+private static final Logger logger = LoggerFactory.getLogger(NioAsyncLoadBalanceClientTask.class);
+private static final String EVENT_CATEGORY = "Load-Balanced Connection";
+
+private final NioAsyncLoadBalanceClientRegistry clientRegistry;
+private final ClusterCoordinator clusterCoordinator;
+private final EventReporter eventReporter;
+private volatile boolean running = true;
+
+public NioAsyncLoadBalanceClientTask(final NioAsyncLoadBalanceClientRegistry clientRegistry, final ClusterCoordinator clusterCoordinator, final EventReporter eventReporter) {
+this.clientRegistry = clientRegistry;
+this.clusterCoordinator = clusterCoordinator;
+this.eventReporter = eventReporter;
+}
+
+@Override
+public void run() {
+while (running) {
+try {
+boolean success = false;
+for (final NioAsyncLoadBalanceClient client : clientRegistry.getAllClients()) {
+if (!client.isRunning()) {
+logger.trace("Client {} is not running so will not communicate with it", client);
+continue;
+}
+
+if (client.isPenalized()) {
+logger.trace("Client {} is penalized so will not communicate with it", client);
+continue;
+}
+
+final NodeIdentifier clientNodeId = client.getNodeIdentifier();
+final NodeConnectionStatus connectionStatus = clusterCoordinator.getConnectionStatus(clientNodeId);
+final NodeConnectionState connectionState = connectionStatus.getState();
+if (connectionState != NodeConnectionState.CONNECTED) {
--- End diff --

That's a good catch.
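The run loop in the diff above screens each client through a series of guards (not running, penalized, node not connected) before attempting communication. A standalone sketch of that continue-based filtering pattern (all types here are hypothetical stand-ins, not the NiFi API):

```java
import java.util.ArrayList;
import java.util.List;

public class ClientLoopSketch {
    // Hypothetical stand-in for the state checks on NioAsyncLoadBalanceClient.
    static class Client {
        final String id;
        final boolean running, penalized, connected;

        Client(String id, boolean running, boolean penalized, boolean connected) {
            this.id = id;
            this.running = running;
            this.penalized = penalized;
            this.connected = connected;
        }
    }

    // Returns the ids of clients that pass every guard, mirroring the
    // continue-based filtering in NioAsyncLoadBalanceClientTask.run().
    static List<String> eligible(List<Client> clients) {
        List<String> result = new ArrayList<>();
        for (Client c : clients) {
            if (!c.running) continue;   // skip clients that are not running
            if (c.penalized) continue;  // skip penalized clients
            if (!c.connected) continue; // skip nodes not in a CONNECTED state
            result.add(c.id);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Client> clients = List.of(
            new Client("a", true, false, true),
            new Client("b", false, false, true),
            new Client("c", true, true, true));
        System.out.println(eligible(clients)); // prints [a]
    }
}
```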


> Allow data in a Connection to be Load-Balanced across cluster
> -
>
> Key: NIFI-5516
> URL: https://issues.apache.org/jira/browse/NIFI-5516
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> Allow user to configure a Connection to be load balanced across the cluster. 
> For more information, see Feature Proposal at 
> https://cwiki.apache.org/confluence/display/NIFI/Load-Balanced+Connections



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[GitHub] nifi pull request #2947: [WIP] NIFI-5516: Implement Load-Balanced Connection...

2018-09-27 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220975792
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/partition/RemoteQueuePartition.java
 ---
@@ -0,0 +1,337 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.queue.clustered.partition;
+
+import org.apache.nifi.cluster.protocol.NodeIdentifier;
+import org.apache.nifi.controller.queue.DropFlowFileRequest;
+import org.apache.nifi.controller.queue.FlowFileQueueContents;
+import org.apache.nifi.controller.queue.LoadBalancedFlowFileQueue;
+import org.apache.nifi.controller.queue.QueueSize;
+import org.apache.nifi.controller.queue.RemoteQueuePartitionDiagnostics;
+import org.apache.nifi.controller.queue.StandardRemoteQueuePartitionDiagnostics;
+import org.apache.nifi.controller.queue.SwappablePriorityQueue;
+import org.apache.nifi.controller.queue.clustered.TransferFailureDestination;
+import org.apache.nifi.controller.queue.clustered.client.async.AsyncLoadBalanceClientRegistry;
+import org.apache.nifi.controller.queue.clustered.client.async.TransactionCompleteCallback;
+import org.apache.nifi.controller.queue.clustered.client.async.TransactionFailureCallback;
+import org.apache.nifi.controller.repository.ContentNotFoundException;
+import org.apache.nifi.controller.repository.ContentRepository;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.FlowFileRepository;
+import org.apache.nifi.controller.repository.RepositoryRecord;
+import org.apache.nifi.controller.repository.StandardRepositoryRecord;
+import org.apache.nifi.controller.repository.SwapSummary;
+import org.apache.nifi.controller.repository.claim.ContentClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaim;
+import org.apache.nifi.flowfile.FlowFilePrioritizer;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.provenance.ProvenanceEventBuilder;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventRepository;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.provenance.StandardProvenanceEventRecord;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+/**
+ * A Queue Partition that is responsible for transferring FlowFiles to another node in the cluster
+ */
+public class RemoteQueuePartition implements QueuePartition {
+private static final Logger logger = LoggerFactory.getLogger(RemoteQueuePartition.class);
+
+private final NodeIdentifier nodeIdentifier;
+private final SwappablePriorityQueue priorityQueue;
+private final LoadBalancedFlowFileQueue flowFileQueue;
+private final TransferFailureDestination failureDestination;
+
+private final FlowFileRepository flowFileRepo;
+private final ProvenanceEventRepository provRepo;
+private final ContentRepository contentRepo;
+private final AsyncLoadBalanceClientRegistry clientRegistry;
+
+private boolean running = false;
+private final String description;
+
+public RemoteQueuePartition(final NodeIdentifier nodeId, final SwappablePriorityQueue priorityQueue, final TransferFailureDestination failureDestination,
+final FlowFileRepository flowFileRepo, final ProvenanceEventRepository provRepo, final ContentRepository contentRepository,
+final AsyncLoadBalanceClientRegistry clientRegistry

[jira] [Commented] (NIFI-5516) Allow data in a Connection to be Load-Balanced across cluster

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630631#comment-16630631
 ] 

ASF GitHub Bot commented on NIFI-5516:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220975312
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/client/async/nio/RegisteredPartition.java
 ---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.queue.clustered.client.async.nio;
+
+import org.apache.nifi.controller.queue.LoadBalanceCompression;
+import org.apache.nifi.controller.queue.clustered.client.async.TransactionCompleteCallback;
+import org.apache.nifi.controller.queue.clustered.client.async.TransactionFailureCallback;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+
+import java.util.function.BooleanSupplier;
+import java.util.function.Supplier;
+
+public class RegisteredPartition {
+private final String connectionId;
+private final Supplier<FlowFileRecord> flowFileRecordSupplier;
+private final TransactionFailureCallback failureCallback;
+private final BooleanSupplier emptySupplier;
+private final TransactionCompleteCallback successCallback;
+private final LoadBalanceCompression compression;
+
+public RegisteredPartition(final String connectionId, final BooleanSupplier emptySupplier, final Supplier<FlowFileRecord> flowFileSupplier, final TransactionFailureCallback failureCallback,
+   final TransactionCompleteCallback successCallback, final LoadBalanceCompression compression) {
+this.connectionId = connectionId;
+this.emptySupplier = emptySupplier;
+this.flowFileRecordSupplier = flowFileSupplier;
+this.failureCallback = failureCallback;
+this.successCallback = successCallback;
+this.compression = compression;
--- End diff --

Thanks, I'll check this out. I may have overlooked something.
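RegisteredPartition above captures the queue's behavior as functional handles (an emptiness check, a FlowFile supplier, success/failure callbacks) rather than holding a reference to the queue itself. A standalone sketch of that decoupling (names hypothetical, not the NiFi API):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.BooleanSupplier;
import java.util.function.Supplier;

public class PartitionHandleSketch {
    // The registration holds only functional handles, so the transport layer
    // never needs a direct reference to the queue implementation.
    static class Registration {
        final BooleanSupplier isEmpty;
        final Supplier<String> next;

        Registration(BooleanSupplier isEmpty, Supplier<String> next) {
            this.isEmpty = isEmpty;
            this.next = next;
        }
    }

    public static void main(String[] args) {
        Deque<String> queue = new ArrayDeque<>(List.of("f1", "f2"));
        // Method references bind the queue's behavior without exposing the queue.
        Registration reg = new Registration(queue::isEmpty, queue::poll);

        while (!reg.isEmpty.getAsBoolean()) {
            System.out.println(reg.next.get());   // drains: f1, then f2
        }
    }
}
```

This indirection is what lets the client side poll and drain partitions without depending on the concrete queue type that backs them.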


> Allow data in a Connection to be Load-Balanced across cluster
> -
>
> Key: NIFI-5516
> URL: https://issues.apache.org/jira/browse/NIFI-5516
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> Allow user to configure a Connection to be load balanced across the cluster. 
> For more information, see Feature Proposal at 
> https://cwiki.apache.org/confluence/display/NIFI/Load-Balanced+Connections



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2947: [WIP] NIFI-5516: Implement Load-Balanced Connection...

2018-09-27 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220975068
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/AbstractFlowFileQueue.java
 ---
@@ -0,0 +1,460 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.queue;
+
+import org.apache.nifi.controller.ProcessScheduler;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.FlowFileRepository;
+import org.apache.nifi.controller.repository.RepositoryRecord;
+import org.apache.nifi.controller.repository.claim.ContentClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaimManager;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.provenance.ProvenanceEventBuilder;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventRepository;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.util.FormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicReference;
+
+public abstract class AbstractFlowFileQueue implements FlowFileQueue {
+private static final Logger logger = LoggerFactory.getLogger(AbstractFlowFileQueue.class);
+private final String identifier;
+private final FlowFileRepository flowFileRepository;
+private final ProvenanceEventRepository provRepository;
+private final ResourceClaimManager resourceClaimManager;
+private final ProcessScheduler scheduler;
+
+private final AtomicReference<TimePeriod> expirationPeriod = new AtomicReference<>(new TimePeriod("0 mins", 0L));
+private final AtomicReference<MaxQueueSize> maxQueueSize = new AtomicReference<>(new MaxQueueSize("1 GB", 1024 * 1024 * 1024, 1));
+
+private final ConcurrentMap<String, ListFlowFileRequest> listRequestMap = new ConcurrentHashMap<>();
+private final ConcurrentMap<String, DropFlowFileRequest> dropRequestMap = new ConcurrentHashMap<>();
+
+private LoadBalanceStrategy loadBalanceStrategy = LoadBalanceStrategy.DO_NOT_LOAD_BALANCE;
+private String partitioningAttribute = null;
+
+private LoadBalanceCompression compression = LoadBalanceCompression.DO_NOT_COMPRESS;
+
+
+public AbstractFlowFileQueue(final String identifier, final ProcessScheduler scheduler,
+final FlowFileRepository flowFileRepo, final ProvenanceEventRepository provRepo, final ResourceClaimManager resourceClaimManager) {
+this.identifier = identifier;
+this.scheduler = scheduler;
+this.flowFileRepository = flowFileRepo;
+this.provRepository = provRepo;
+this.resourceClaimManager = resourceClaimManager;
+}
+
+@Override
+public String getIdentifier() {
+return identifier;
+}
+
+protected ProcessScheduler getScheduler() {
+return scheduler;
+}
+
+@Override
+public String getFlowFileExpiration() {
+return expirationPeriod.get().getPeriod();
+}
+
+@Override
+public int getFlowFileExpiration(final TimeUnit timeUnit) {
+return (int) timeUnit.convert(expirationPeriod.get().getMillis(), TimeUnit.MILLISECONDS);
+}
+
+@Override
+public void setFlowFileExpiration(final String flowExpirationPeriod) {
+final long millis = 
Form
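The getFlowFileExpiration(TimeUnit) method above relies on TimeUnit.convert to translate the stored millisecond period into whatever unit the caller requests; a quick standalone illustration of that conversion:

```java
import java.util.concurrent.TimeUnit;

public class ExpirationConvertSketch {
    public static void main(String[] args) {
        // A configured period such as "2 mins" would first be parsed down to
        // milliseconds (e.g. by a format utility), then stored.
        long expirationMillis = 120_000L;

        // TimeUnit.convert(sourceDuration, sourceUnit) yields the duration
        // expressed in the receiving unit, truncating any fraction.
        long seconds = TimeUnit.SECONDS.convert(expirationMillis, TimeUnit.MILLISECONDS);
        long minutes = TimeUnit.MINUTES.convert(expirationMillis, TimeUnit.MILLISECONDS);

        System.out.println(seconds + "s = " + minutes + "m"); // prints "120s = 2m"
    }
}
```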

[jira] [Commented] (NIFI-5516) Allow data in a Connection to be Load-Balanced across cluster

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630629#comment-16630629
 ] 

ASF GitHub Bot commented on NIFI-5516:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220975068
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/AbstractFlowFileQueue.java
 ---
@@ -0,0 +1,460 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.queue;
+
+import org.apache.nifi.controller.ProcessScheduler;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.FlowFileRepository;
+import org.apache.nifi.controller.repository.RepositoryRecord;
+import org.apache.nifi.controller.repository.claim.ContentClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaimManager;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.provenance.ProvenanceEventBuilder;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventRepository;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.util.FormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicReference;
+
+public abstract class AbstractFlowFileQueue implements FlowFileQueue {
+private static final Logger logger = 
LoggerFactory.getLogger(AbstractFlowFileQueue.class);
+private final String identifier;
+private final FlowFileRepository flowFileRepository;
+private final ProvenanceEventRepository provRepository;
+private final ResourceClaimManager resourceClaimManager;
+private final ProcessScheduler scheduler;
+
+private final AtomicReference<TimePeriod> expirationPeriod = new AtomicReference<>(new TimePeriod("0 mins", 0L));
+private final AtomicReference<MaxQueueSize> maxQueueSize = new AtomicReference<>(new MaxQueueSize("1 GB", 1024 * 1024 * 1024, 1));
+
+private final ConcurrentMap<String, ListFlowFileRequest> listRequestMap = new ConcurrentHashMap<>();
+private final ConcurrentMap<String, DropFlowFileRequest> dropRequestMap = new ConcurrentHashMap<>();
+
+private LoadBalanceStrategy loadBalanceStrategy = 
LoadBalanceStrategy.DO_NOT_LOAD_BALANCE;
+private String partitioningAttribute = null;
+
+private LoadBalanceCompression compression = 
LoadBalanceCompression.DO_NOT_COMPRESS;
+
+
+public AbstractFlowFileQueue(final String identifier, final 
ProcessScheduler scheduler,
+final FlowFileRepository flowFileRepo, final 
ProvenanceEventRepository provRepo, final ResourceClaimManager 
resourceClaimManager) {
+this.identifier = identifier;
+this.scheduler = scheduler;
+this.flowFileRepository = flowFileRepo;
+this.provRepository = provRepo;
+this.resourceClaimManager = resourceClaimManager;
+}
+
+@Override
+public String getIdentifier() {
+return identifier;
+}
+
+protected ProcessScheduler getScheduler() {
+return scheduler;
+}
+
+@Override
+public String getFlowFileExpiration() {
+return expirationPeriod.get().getPeriod();
+}
+
+@Override
+public int getFlowFileExpiration(final TimeUnit timeUnit) {

[GitHub] nifi pull request #2947: [WIP] NIFI-5516: Implement Load-Balanced Connection...

2018-09-27 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r220975312
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/client/async/nio/RegisteredPartition.java
 ---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.queue.clustered.client.async.nio;
+
+import org.apache.nifi.controller.queue.LoadBalanceCompression;
+import 
org.apache.nifi.controller.queue.clustered.client.async.TransactionCompleteCallback;
+import 
org.apache.nifi.controller.queue.clustered.client.async.TransactionFailureCallback;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+
+import java.util.function.BooleanSupplier;
+import java.util.function.Supplier;
+
+public class RegisteredPartition {
+private final String connectionId;
+private final Supplier<FlowFileRecord> flowFileRecordSupplier;
+private final TransactionFailureCallback failureCallback;
+private final BooleanSupplier emptySupplier;
+private final TransactionCompleteCallback successCallback;
+private final LoadBalanceCompression compression;
+
+public RegisteredPartition(final String connectionId, final 
BooleanSupplier emptySupplier, final Supplier<FlowFileRecord> flowFileSupplier, 
final TransactionFailureCallback failureCallback,
+   final TransactionCompleteCallback 
successCallback, final LoadBalanceCompression compression) {
+this.connectionId = connectionId;
+this.emptySupplier = emptySupplier;
+this.flowFileRecordSupplier = flowFileSupplier;
+this.failureCallback = failureCallback;
+this.successCallback = successCallback;
+this.compression = compression;
--- End diff --

Thanks, I'll check this out. I may have overlooked something.


---


[GitHub] nifi pull request #2979: [WIP] An experiment that I am looking at to elimina...

2018-09-27 Thread markap14
Github user markap14 closed the pull request at:

https://github.com/apache/nifi/pull/2979


---


[jira] [Commented] (NIFI-5641) GetMongo should be able to copy attributes from input flowfile to result set flowfiles

2018-09-27 Thread Mark Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630596#comment-16630596
 ] 

Mark Payne commented on NIFI-5641:
--

[~mike.thomsen] unless I am missing something, this already happens:
{code:java}
outgoingFlowFile = (input == null) ? session.create() : 
session.create(input);{code}
The only time we call {{session.create()}}, we do so by passing in the 
incoming FlowFile, if there is one. This properly copies the attributes of 
the incoming FlowFile to the outgoing FlowFile. It appears that you already 
reviewed & merged this commit for 1.8.0.
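
The attribute-inheritance behavior described above can be mimicked outside NiFi with a tiny stand-in (plain maps, not the real ProcessSession API; names here are invented for illustration): creating a child from a parent copies the parent's attributes, while creating with no parent starts empty.

```java
import java.util.HashMap;
import java.util.Map;

public class CreateSketch {

    // Stand-in for session.create(): no parent, so no inherited attributes.
    static Map<String, String> create() {
        return new HashMap<>();
    }

    // Stand-in for session.create(input): the child inherits the parent's attributes.
    static Map<String, String> create(Map<String, String> parent) {
        return new HashMap<>(parent);
    }

    public static void main(String[] args) {
        Map<String, String> input = new HashMap<>();
        input.put("filename", "query-result.json");

        // Same shape as the line quoted above: create with the input if present.
        Map<String, String> outgoing = (input == null) ? create() : create(input);
        System.out.println(outgoing.get("filename"));
    }
}
```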

> GetMongo should be able to copy attributes from input flowfile to result set 
> flowfiles
> --
>
> Key: NIFI-5641
> URL: https://issues.apache.org/jira/browse/NIFI-5641
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Minor
>
> User requested that GetMongo should be able to copy the attributes that come 
> from the input flowfile to the result set flowfiles. Should be implemented as 
> an optional property, off by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5641) GetMongo should be able to copy attributes from input flowfile to result set flowfiles

2018-09-27 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-5641:
--

 Summary: GetMongo should be able to copy attributes from input 
flowfile to result set flowfiles
 Key: NIFI-5641
 URL: https://issues.apache.org/jira/browse/NIFI-5641
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Mike Thomsen
Assignee: Mike Thomsen


User requested that GetMongo should be able to copy the attributes that come 
from the input flowfile to the result set flowfiles. Should be implemented as 
an optional property, off by default.





[jira] [Commented] (NIFI-5640) Improve efficiency of Avro Record Reader

2018-09-27 Thread Mark Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630503#comment-16630503
 ] 

Mark Payne commented on NIFI-5640:
--

re: Performance. I didn't set up any performance tests specifically for the 
AvroRecordReader alone. However, I did test ConsumeKafkaRecord_1_0 using the 
AvroReader. Before the PR, I was able to pull 60,000 msgs/sec (49 MB/sec) from 
Kafka in my particular setup from a single node. After the PR, I was able to 
pull 130,000 msgs/sec (106 MB/sec) sustained. At that point, my disks were 
fully utilized, so I was unable to pull any faster. Had I used a box with faster 
or more disks, I likely would have achieved even better performance. So, given 
that the entire process of pulling messages from Kafka, parsing the Avro, 
turning it into an intermediate record, and then writing it out as a JSON 
FlowFile more than doubled in throughput, the performance of the Avro Reader 
itself must have improved by far more than 100%.

> Improve efficiency of Avro Record Reader
> 
>
> Key: NIFI-5640
> URL: https://issues.apache.org/jira/browse/NIFI-5640
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> There are a few things that we are doing in the Avro Reader that cause subpar 
> performance. Firstly, in the AvroTypeUtil, when converting an Avro 
> GenericRecord to our Record, the building of the RecordSchema is slow because 
> we call toString() (which is quite expensive) on the Avro schema in order to 
> provide a textual version to RecordSchema. However, the text is typically not 
> used and it is optional to provide the schema text, so we should avoid 
> calling Schema#toString() whenever possible.
> The AvroTypeUtil class also calls #getNonNullSubSchemas() a lot. In some 
> cases we don't really need to do this and can avoid creating the sublist. In 
> other cases, we do need to call it. However, the method uses the stream() 
> method on an existing List just to filter out 0 or 1 elements. While use of 
> the stream() method makes the code very readable, it is quite a bit more 
> expensive than just iterating over the existing list and adding to an 
> ArrayList. We should avoid use of the {{stream()}} method for trivial pieces 
> of code in time-critical parts of the codebase.
> Additionally, I've found that Avro's GenericDatumReader is extremely 
> inefficient, at least in some cases, when reading Strings because it uses an 
> IdentityHashMap to cache details about the schema. But IdentityHashMap is far 
> slower than if it were to just use HashMap so we could subclass the reader in 
> order to avoid the slow caching.
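
The stream-vs-loop point in the description can be illustrated with a small, self-contained sketch (plain Java over strings, not the actual AvroTypeUtil code): both methods below produce the same result, but the explicit loop avoids allocating a Stream pipeline just to drop 0 or 1 elements, which matters on hot paths.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class SubSchemaFilter {

    // Readable, but builds a full Stream pipeline for a trivial filter.
    static List<String> nonNullWithStream(List<String> schemas) {
        return schemas.stream().filter(Objects::nonNull).collect(Collectors.toList());
    }

    // Equivalent result with a plain loop and a pre-sized ArrayList; cheaper per call.
    static List<String> nonNullWithLoop(List<String> schemas) {
        final List<String> result = new ArrayList<>(schemas.size());
        for (final String s : schemas) {
            if (s != null) {
                result.add(s);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // A stand-in for a union's sub-schemas, where at most one entry is null.
        final List<String> union = Arrays.asList("null-type", null, "string-type");
        System.out.println(nonNullWithStream(union).equals(nonNullWithLoop(union)));
    }
}
```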





[jira] [Updated] (NIFI-5640) Improve efficiency of Avro Record Reader

2018-09-27 Thread Mark Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-5640:
-
Fix Version/s: 1.8.0
   Status: Patch Available  (was: Open)

> Improve efficiency of Avro Record Reader
> 
>
> Key: NIFI-5640
> URL: https://issues.apache.org/jira/browse/NIFI-5640
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> There are a few things that we are doing in the Avro Reader that cause subpar 
> performance. Firstly, in the AvroTypeUtil, when converting an Avro 
> GenericRecord to our Record, the building of the RecordSchema is slow because 
> we call toString() (which is quite expensive) on the Avro schema in order to 
> provide a textual version to RecordSchema. However, the text is typically not 
> used and it is optional to provide the schema text, so we should avoid 
> calling Schema#toString() whenever possible.
> The AvroTypeUtil class also calls #getNonNullSubSchemas() a lot. In some 
> cases we don't really need to do this and can avoid creating the sublist. In 
> other cases, we do need to call it. However, the method uses the stream() 
> method on an existing List just to filter out 0 or 1 elements. While use of 
> the stream() method makes the code very readable, it is quite a bit more 
> expensive than just iterating over the existing list and adding to an 
> ArrayList. We should avoid use of the {{stream()}} method for trivial pieces 
> of code in time-critical parts of the codebase.
> Additionally, I've found that Avro's GenericDatumReader is extremely 
> inefficient, at least in some cases, when reading Strings because it uses an 
> IdentityHashMap to cache details about the schema. But IdentityHashMap is far 
> slower than if it were to just use HashMap so we could subclass the reader in 
> order to avoid the slow caching.





[jira] [Updated] (NIFI-5634) When retrieving RPG from REST API in a cluster, ports can be returned that are not available on all nodes

2018-09-27 Thread Matt Gilman (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-5634:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> When retrieving RPG from REST API in a cluster, ports can be returned that 
> are not available on all nodes
> -
>
> Key: NIFI-5634
> URL: https://issues.apache.org/jira/browse/NIFI-5634
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> When retrieving a specific Remote Process Group, it is possible to get back 
> an RPG that shows that a port is available, even when it is not available on 
> all nodes. The merging logic appears to be flawed.





[GitHub] nifi pull request #3036: NIFI-5640: Improved efficiency of Avro Reader and s...

2018-09-27 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/3036

NIFI-5640: Improved efficiency of Avro Reader and some methods of Avr…

…oTypeUtil. Also switched ServiceStateTransition to using read/write 
locks instead of synchronized blocks because profiling showed that significant 
time was spent in determining state of a Controller Service when attempting to 
use it. Switching to a ReadLock should provide better performance there.
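
The locking change described in this PR can be sketched generically (this is not the actual ServiceStateTransition code; the class and state names below are invented): many threads checking a service's state share a read lock concurrently, while the comparatively rare state transition takes the exclusive write lock.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class StateHolder {
    public enum State { DISABLED, ENABLING, ENABLED }

    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private State state = State.DISABLED;

    // Hot path: many threads may hold the read lock at the same time,
    // so state checks no longer serialize behind a single monitor.
    public State getState() {
        lock.readLock().lock();
        try {
            return state;
        } finally {
            lock.readLock().unlock();
        }
    }

    // Rare path: transitions require exclusive access.
    public boolean transition(State expected, State next) {
        lock.writeLock().lock();
        try {
            if (state != expected) {
                return false;
            }
            state = next;
            return true;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        StateHolder holder = new StateHolder();
        System.out.println(holder.transition(State.DISABLED, State.ENABLED));
        System.out.println(holder.getState());
    }
}
```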

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-5640

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3036.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3036


commit 6e1d1cc233282998953b5c8d612df7c80030
Author: Mark Payne 
Date:   2018-09-27T14:10:48Z

NIFI-5640: Improved efficiency of Avro Reader and some methods of 
AvroTypeUtil. Also switched ServiceStateTransition to using read/write locks 
instead of synchronized blocks because profiling showed that significant time 
was spent in determining state of a Controller Service when attempting to use 
it. Switching to a ReadLock should provide better performance there.




---


[jira] [Commented] (NIFI-5634) When retrieving RPG from REST API in a cluster, ports can be returned that are not available on all nodes

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630493#comment-16630493
 ] 

ASF GitHub Bot commented on NIFI-5634:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/3030
  
Thanks @markap14! This has been merged to master.


> When retrieving RPG from REST API in a cluster, ports can be returned that 
> are not available on all nodes
> -
>
> Key: NIFI-5634
> URL: https://issues.apache.org/jira/browse/NIFI-5634
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> When retrieving a specific Remote Process Group, it is possible to get back 
> an RPG that shows that a port is available, even when it is not available on 
> all nodes. The merging logic appears to be flawed.





[jira] [Commented] (NIFI-5640) Improve efficiency of Avro Record Reader

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630497#comment-16630497
 ] 

ASF GitHub Bot commented on NIFI-5640:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/3036

NIFI-5640: Improved efficiency of Avro Reader and some methods of Avr…

…oTypeUtil. Also switched ServiceStateTransition to using read/write locks 
instead of synchronized blocks because profiling showed that significant time 
was spent in determining state of a Controller Service when attempting to use 
it. Switching to a ReadLock should provide better performance there.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-5640

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3036.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3036


commit 6e1d1cc233282998953b5c8d612df7c80030
Author: Mark Payne 
Date:   2018-09-27T14:10:48Z

NIFI-5640: Improved efficiency of Avro Reader and some methods of 
AvroTypeUtil. Also switched ServiceStateTransition to using read/write locks 
instead of synchronized blocks because profiling showed that significant time 
was spent in determining state of a Controller Service when attempting to use 
it. Switching to a ReadLock should provide better performance there.




> Improve efficiency of Avro Record Reader
> 
>
> Key: NIFI-5640
> URL: https://issues.apache.org/jira/browse/NIFI-5640
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> There are a few things that we are doing in the Avro Reader that cause subpar 
> performance. Firstly, in the AvroTypeUtil, when converting an Avro 
> GenericRecord to our Record, the building of the RecordSchema is slow because 
> we call toString() (which is quite expensive) on the Avro schema in order to 
> provide a textual version to RecordSchema. However, the text is typically not 
> used and it is optional to provide the schema text, so we should avoid 
> calling Schema#toString() whenever possible.
> The AvroTypeUtil class also calls #getNonNullSubSchemas() a lot. In some 
> cases we don't really need to do this and can avoid creating the sublist. In 
> other cases, we do need to call it. However, the method uses the stream() 
> method on an existing List just to filter out 0 or 1 elements. While use of 
> the stream() method makes the code very readable, it is quite a bit more 
> expensive than just iterating over the existing list and adding to an 
> ArrayList. We should avoid use of the {{stream()}} method for trivial pieces 
> of code in time-critical parts of the codebase.
> Additionally, I've found that Avro's GenericDatumReader is extremely 
> inefficient, at least in some cases, when reading Strings because it uses an 
> IdentityHashMap to cache details about the schema. But IdentityHashMap is far 
> slower than if it were to just use HashMap so we could subclass the reader in 
> order to avoid the slow caching.

[jira] [Commented] (NIFI-5634) When retrieving RPG from REST API in a cluster, ports can be returned that are not available on all nodes

2018-09-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630495#comment-16630495
 ] 

ASF GitHub Bot commented on NIFI-5634:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3030


> When retrieving RPG from REST API in a cluster, ports can be returned that 
> are not available on all nodes
> -
>
> Key: NIFI-5634
> URL: https://issues.apache.org/jira/browse/NIFI-5634
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> When retrieving a specific Remote Process Group, it is possible to get back 
> an RPG that shows that a port is available, even when it is not available on 
> all nodes. The merging logic appears to be flawed.





[GitHub] nifi pull request #3030: NIFI-5634: When merging RPG entities, ensure that w...

2018-09-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3030


---


[jira] [Commented] (NIFI-5634) When retrieving RPG from REST API in a cluster, ports can be returned that are not available on all nodes

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630490#comment-16630490
 ] 

ASF subversion and git services commented on NIFI-5634:
---

Commit ad4c886fbf2af2bc98ebe12200c4b119df67b90f in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=ad4c886 ]

NIFI-5634: When merging RPG entities, ensure that we only send back the ports 
that are common to all nodes - even if that means sending back no ports

This closes #3030
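
The merging rule in this commit, reporting only the ports that every node returned, is essentially a set intersection across node responses. A minimal sketch with invented data (not the actual NiFi merging code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class PortMerge {

    // Keep only the port names present in every node's response.
    static Set<String> commonPorts(List<Set<String>> portsPerNode) {
        if (portsPerNode.isEmpty()) {
            return new TreeSet<>();
        }
        Set<String> common = new TreeSet<>(portsPerNode.get(0));
        for (Set<String> nodePorts : portsPerNode.subList(1, portsPerNode.size())) {
            common.retainAll(nodePorts); // set intersection; may leave no ports at all
        }
        return common;
    }

    public static void main(String[] args) {
        // One entry per cluster node; the second node is missing the "out" port.
        List<Set<String>> responses = Arrays.asList(
                new HashSet<>(Arrays.asList("in", "out")),
                new HashSet<>(Arrays.asList("in")),
                new HashSet<>(Arrays.asList("in", "out")));
        System.out.println(commonPorts(responses));
    }
}
```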


> When retrieving RPG from REST API in a cluster, ports can be returned that 
> are not available on all nodes
> -
>
> Key: NIFI-5634
> URL: https://issues.apache.org/jira/browse/NIFI-5634
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> When retrieving a specific Remote Process Group, it is possible to get back 
> an RPG that shows that a port is available, even when it is not available on 
> all nodes. The merging logic appears to be flawed.




