[jira] [Commented] (NIFI-1705) AttributesToCSV

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16490262#comment-16490262
 ] 

ASF GitHub Bot commented on NIFI-1705:
--

Github user bdesert commented on the issue:

https://github.com/apache/nifi/pull/2711
  
@joetrite , @MikeThomsen ,
So, I tested the processor and it works OK, but I have a question.
With JSON it's easy: the JSON structure includes both the attribute name and
the attribute value. With CSV, the output will contain only the values. Don't
you think it would be useful to add a header row, or an attribute carrying an
Avro-like generated schema? I agree that explicitly specified attributes will
appear in the provided order, but if a regex is used, the user won't be able
to relate a value to a name. The same applies to the core attributes, since we
add only those that exist and are non-empty.
If we decide to add an Avro-like schema, we will run into a problem: attribute
names could be non-Avro-safe.
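For illustration, the header row suggested above could be derived from the same ordered attribute map used for the value row, so a consumer can relate each value back to its name. A minimal sketch; the class and helper names here are hypothetical, not part of the PR:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CsvHeaderSketch {

    // Join the attribute names in map order to form an optional header row.
    static String headerLine(Map<String, String> attrs) {
        return String.join(",", attrs.keySet());
    }

    // Join the attribute values in the same order, so column i of the value
    // row lines up with column i of the header row.
    static String valueLine(Map<String, String> attrs) {
        return String.join(",", attrs.values());
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new LinkedHashMap<>(); // preserves insertion order
        attrs.put("filename", "data.txt");
        attrs.put("uuid", "1234");
        System.out.println(headerLine(attrs)); // filename,uuid
        System.out.println(valueLine(attrs));  // data.txt,1234
    }
}
```

Because both rows iterate the same LinkedHashMap, the ordering problem raised for regex-matched and core attributes goes away: whatever order the map yields, names and values stay aligned.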


> AttributesToCSV
> ---
>
> Key: NIFI-1705
> URL: https://issues.apache.org/jira/browse/NIFI-1705
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Extensions
>Reporter: Randy Gelhausen
>Priority: Major
>
> Create a new processor which converts a Flowfile's attributes into CSV 
> content.
> Should support the same configuration options as the AttributesToJSON 
> processor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-1705) AttributesToCSV

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16489935#comment-16489935
 ] 

ASF GitHub Bot commented on NIFI-1705:
--

Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2711#discussion_r190750155
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AttributesToCSV.java
 ---
@@ -0,0 +1,300 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.util.Map;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.List;
+import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+import java.util.Collections;
+import java.util.Arrays;
+import java.util.ArrayList;
+
+
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@Tags({"csv", "attributes", "flowfile"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Generates a CSV representation of the input FlowFile Attributes. The resulting CSV " +
+        "can be written to either a newly generated attribute named 'CSVAttributes' or written to the FlowFile as content.  " +
+        "If the attribute value contains a comma, newline or double quote, then the attribute value will be " +
+        "escaped with double quotes.  Any double quote characters in the attribute value are escaped with " +
+        "another double quote.")
+@WritesAttribute(attribute = "CSVAttributes", description = "CSV representation of Attributes")
+public class AttributesToCSV extends AbstractProcessor {
+    private static final String OUTPUT_ATTRIBUTE_NAME = "CSVAttributes";
+    private static final String OUTPUT_SEPARATOR = ",";
+    private static final String OUTPUT_MIME_TYPE = "text/csv";
+    private static final String SPLIT_REGEX = OUTPUT_SEPARATOR + "(?=(?:[^\"]*\"[^\"]*\")*[^\"]*$)";
+
+    static final AllowableValue OUTPUT_OVERWRITE_CONTENT = new AllowableValue("flowfile-content", "flowfile-content",
+            "The resulting CSV string will be placed into the content of the flowfile. " +
+            "Existing flowfile content will be overwritten. 'CSVAttributes' will not be written to at all (neither null nor empty string).");
+    static final AllowableValue OUTPUT_NEW_ATTRIBUTE = new AllowableValue("flowfile-attribute", "flowfile-attribute",
+            "The resulting CSV string will be placed into a new flowfile attribute named 'CSVAttributes'.  The content of the flowfile will not be changed.");
+
+    public static final PropertyDescriptor ATTRIBUTES_LIST = new PropertyDescriptor.Builder()
+            .name("attribute-list")
+            .displayName("Attribute List")
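The SPLIT_REGEX constant in the diff above splits a CSV line on commas only when they fall outside double-quoted fields: a comma qualifies when the remainder of the line contains an even number of double quotes. A minimal standalone sketch of that behavior (the demo class and input line are illustrative, not from the PR):

```java
public class SplitRegexDemo {

    // Same pattern as SPLIT_REGEX in the diff: split on a comma only if it is
    // followed by an even number of double quotes, i.e. it sits outside a
    // quoted field.
    static final String SPLIT_REGEX = ",(?=(?:[^\"]*\"[^\"]*\")*[^\"]*$)";

    public static void main(String[] args) {
        String line = "plain,\"a,b\",last";
        // splits into: plain | "a,b" | last -- the comma inside the quoted
        // field is preserved rather than treated as a separator
        for (String field : line.split(SPLIT_REGEX)) {
            System.out.println(field);
        }
    }
}
```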

[jira] [Commented] (NIFI-1705) AttributesToCSV

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16489815#comment-16489815
 ] 

ASF GitHub Bot commented on NIFI-1705:
--

Github user joetrite commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2711#discussion_r190733355
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AttributesToCSV.java
 ---
@@ -0,0 +1,272 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Collections;
+import java.util.stream.Collectors;
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@Tags({"csv", "attributes", "flowfile"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Generates a CSV representation of the input FlowFile Attributes. The resulting CSV " +
+        "can be written to either a newly generated attribute named 'CSVAttributes' or written to the FlowFile as content.  " +
+        "If the attribute value contains a comma, newline or double quote, then the attribute value will be " +
+        "escaped with double quotes.  Any double quote characters in the attribute value are escaped with " +
+        "another double quote.  If the attribute value does not contain a comma, newline or double quote, then the " +
+        "attribute value is returned unchanged.")
+@WritesAttribute(attribute = "CSVAttributes", description = "CSV representation of Attributes")
+public class AttributesToCSV extends AbstractProcessor {
+
+    private static final String OUTPUT_NEW_ATTRIBUTE = "flowfile-attribute";
+    private static final String OUTPUT_OVERWRITE_CONTENT = "flowfile-content";
+    private static final String OUTPUT_ATTRIBUTE_NAME = "CSVAttributes";
+    private static final String OUTPUT_SEPARATOR = ",";
+    private static final String OUTPUT_MIME_TYPE = "text/csv";
+
+    public static final PropertyDescriptor ATTRIBUTES_LIST = new PropertyDescriptor.Builder()
+            .name("attribute-list")
+            .displayName("Attribute List")
+            .description("Comma separated list of attributes to be included in the resulting CSV. If this value " +
+                    "is left empty then all existing Attributes will be included. This list of attributes is " +
+                    "case sensitive and does not support attribute names that contain commas. If an
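The escaping rule described in the @CapabilityDescription above (quote a value containing a comma, newline or double quote, doubling any embedded quotes; leave other values unchanged) can be sketched as a standalone helper. This is illustrative, not the PR's actual code; the PR imports commons-lang3's StringEscapeUtils, whose escapeCsv method applies essentially the same rule:

```java
public class CsvEscapeSketch {

    // Quote a value only when it contains a comma, newline or double quote;
    // embedded double quotes are doubled. Other values pass through unchanged.
    static String escapeCsv(String value) {
        if (value.contains(",") || value.contains("\n") || value.contains("\"")) {
            return "\"" + value.replace("\"", "\"\"") + "\"";
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(escapeCsv("plain"));      // plain
        System.out.println(escapeCsv("a,b"));        // "a,b"
        System.out.println(escapeCsv("say \"hi\"")); // "say ""hi"""
    }
}
```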

[jira] [Commented] (NIFI-1705) AttributesToCSV

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16489816#comment-16489816
 ] 

ASF GitHub Bot commented on NIFI-1705:
--

Github user joetrite commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2711#discussion_r190733381
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AttributesToCSV.java
 ---
@@ -0,0 +1,272 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Collections;
+import java.util.stream.Collectors;
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@Tags({"csv", "attributes", "flowfile"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Generates a CSV representation of the input FlowFile Attributes. The resulting CSV " +
+        "can be written to either a newly generated attribute named 'CSVAttributes' or written to the FlowFile as content.  " +
+        "If the attribute value contains a comma, newline or double quote, then the attribute value will be " +
+        "escaped with double quotes.  Any double quote characters in the attribute value are escaped with " +
+        "another double quote.  If the attribute value does not contain a comma, newline or double quote, then the " +
+        "attribute value is returned unchanged.")
+@WritesAttribute(attribute = "CSVAttributes", description = "CSV representation of Attributes")
+public class AttributesToCSV extends AbstractProcessor {
+
+    private static final String OUTPUT_NEW_ATTRIBUTE = "flowfile-attribute";
+    private static final String OUTPUT_OVERWRITE_CONTENT = "flowfile-content";
+    private static final String OUTPUT_ATTRIBUTE_NAME = "CSVAttributes";
+    private static final String OUTPUT_SEPARATOR = ",";
+    private static final String OUTPUT_MIME_TYPE = "text/csv";
+
+    public static final PropertyDescriptor ATTRIBUTES_LIST = new PropertyDescriptor.Builder()
+            .name("attribute-list")
+            .displayName("Attribute List")
+            .description("Comma separated list of attributes to be included in the resulting CSV. If this value " +
+                    "is left empty then all existing Attributes will be included. This list of attributes is " +
+                    "case sensitive and does not support attribute names that contain commas. If an

[jira] [Commented] (NIFI-5066) Support enable and disable component action when multiple components selected or when selecting a process group.

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16489783#comment-16489783
 ] 

ASF GitHub Bot commented on NIFI-5066:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2739
  
@mcgilman I tried testing this. Created a Process Group with 2 processors. 
One was STOPPED, the other DISABLED. Clicked on canvas to clear the selection. 
Then clicked "Disable" in the Operate palette. Got back an error: 
"92e58351-0163-1000-ab38-7947aeead66e is not stopped".


> Support enable and disable component action when multiple components selected 
> or when selecting a process group.
> 
>
> Key: NIFI-5066
> URL: https://issues.apache.org/jira/browse/NIFI-5066
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.5.0
>Reporter: Matthew Clarke
>Assignee: Matt Gilman
>Priority: Major
> Fix For: 1.7.0
>
>
> Currently NiFi validates all processors that are in a STOPPED state.  To
> reduce impact when flows contain very large numbers of STOPPED processors,
> users should be disabling these STOPPED processors.  NiFi's "Enable" and
> "Disable" buttons do not support being used when more than one processor is
> selected.  When needing to enable or disable large numbers of processors,
> this is less than ideal. The Enable and Disable buttons should work similarly
> to how the Start and Stop buttons work.
> With multiple components selected or a process group selected, select the
> "Enable" or "Disable" button.  Any eligible component (those that are not
> running) should be either enabled or disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-5231) Record stats processor

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489768#comment-16489768
 ] 

ASF GitHub Bot commented on NIFI-5231:
--

Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r190714515
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/RecordStats.java ---
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.RecordPathResult;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+public class RecordStats extends AbstractProcessor {
+    static final PropertyDescriptor RECORD_READER = new PropertyDescriptor.Builder()
+        .name("record-stats-reader")
+        .displayName("Record Reader")
+        .description("A record reader to use for reading the records.")
+        .addValidator(Validator.VALID)
+        .identifiesControllerService(RecordReaderFactory.class)
+        .build();
+
+    static final Relationship REL_SUCCESS = new Relationship.Builder()
+        .name("success")
+        .description("If a flowfile is successfully processed, it goes here.")
+        .build();
+    static final Relationship REL_FAILURE = new Relationship.Builder()
+        .name("failure")
+        .description("If a flowfile fails to be processed, it goes here.")
+        .build();
+
+    protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+        return new PropertyDescriptor.Builder()
+            .name(propertyDescriptorName)
+            .displayName(propertyDescriptorName)
+            .dynamic(true)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+    }
+
+    private RecordPathCache cache;
+
+    @OnScheduled
+    public void onEnabled(ProcessContext context) {
+        cache = new RecordPathCache(25);
+    }
+
+    @Override
+    public Set<Relationship> getRelationships() {
+        return new HashSet<Relationship>() {{
+            add(REL_SUCCESS);
+            add(REL_FAILURE);
+        }};
+    }
+
+    @Override
+    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
+        FlowFile input = session.get();
+        if (input == null) {
+            return;
+        }
+
+        try {
+            Map<String, RecordPath> paths = getRecordPaths(context);
+            Map<String, String> stats = getStats(input, paths, context, session);
+
+            input = session.putAllAttributes(input, stats);

[jira] [Commented] (NIFI-5231) Record stats processor

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489769#comment-16489769
 ] 

ASF GitHub Bot commented on NIFI-5231:
--

Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r190723484
  

[jira] [Commented] (NIFI-5231) Record stats processor

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489770#comment-16489770
 ] 

ASF GitHub Bot commented on NIFI-5231:
--

Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r190721904
  

[GitHub] nifi pull request #2737: NIFI-5231 Added RecordStats processor.

2018-05-24 Thread ottobackwards
Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r190723484
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/RecordStats.java ---
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.RecordPathResult;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+public class RecordStats extends AbstractProcessor {
+    static final PropertyDescriptor RECORD_READER = new PropertyDescriptor.Builder()
+        .name("record-stats-reader")
+        .displayName("Record Reader")
+        .description("A record reader to use for reading the records.")
+        .addValidator(Validator.VALID)
+        .identifiesControllerService(RecordReaderFactory.class)
+        .build();
+
+    static final Relationship REL_SUCCESS = new Relationship.Builder()
+        .name("success")
+        .description("If a flowfile is successfully processed, it goes here.")
+        .build();
+    static final Relationship REL_FAILURE = new Relationship.Builder()
+        .name("failure")
+        .description("If a flowfile fails to be processed, it goes here.")
+        .build();
+
+    protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+        return new PropertyDescriptor.Builder()
+            .name(propertyDescriptorName)
+            .displayName(propertyDescriptorName)
+            .dynamic(true)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .build();
+    }
+
+    private RecordPathCache cache;
+
+    @OnScheduled
+    public void onEnabled(ProcessContext context) {
+        cache = new RecordPathCache(25);
+    }
+
+    @Override
+    public Set<Relationship> getRelationships() {
+        return new HashSet<Relationship>() {{
+            add(REL_SUCCESS);
+            add(REL_FAILURE);
+        }};
+    }
+
+    @Override
+    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
+        FlowFile input = session.get();
+        if (input == null) {
+            return;
+        }
+
+        try {
+            Map<String, RecordPath> paths = getRecordPaths(context);
+            Map<String, String> stats = getStats(input, paths, context, session);
+
+            input = session.putAllAttributes(input, stats);
+
+            session.transfer(input, REL_SUCCESS);
+
+        } catch (Exception ex) {
+            getLogger().error("Error processing stats.", ex);
+            session.transfer(input, 
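One detail in the diff above that reviewers often flag: the `getRelationships()` double-brace initializer (`new HashSet() {{ ... }}`) creates a new anonymous HashSet subclass instance on every call. A common alternative in NiFi processors is a static unmodifiable set built once. A sketch of that pattern, using a simplified stand-in `Relationship` type rather than NiFi's actual class:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch; Relationship here is a stand-in for NiFi's class.
public class RelationshipsExample {
    static final class Relationship {
        final String name;
        Relationship(String name) { this.name = name; }
    }

    static final Relationship REL_SUCCESS = new Relationship("success");
    static final Relationship REL_FAILURE = new Relationship("failure");

    // Built once at class-load time, shared by every call.
    private static final Set<Relationship> RELATIONSHIPS;
    static {
        Set<Relationship> rels = new HashSet<>();
        rels.add(REL_SUCCESS);
        rels.add(REL_FAILURE);
        RELATIONSHIPS = Collections.unmodifiableSet(rels);
    }

    public Set<Relationship> getRelationships() {
        return RELATIONSHIPS; // same immutable instance on every call
    }

    public static void main(String[] args) {
        Set<Relationship> rels = new RelationshipsExample().getRelationships();
        System.out.println(rels.size()); // 2
    }
}
```

Beyond avoiding the per-call allocation, the unmodifiable set prevents callers from accidentally mutating the processor's relationship set.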

[GitHub] nifi pull request #2737: NIFI-5231 Added RecordStats processor.

2018-05-24 Thread ottobackwards
Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r190721904
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/RecordStats.java
 ---
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.RecordPathResult;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+public class RecordStats extends AbstractProcessor {
+static final PropertyDescriptor RECORD_READER = new 
PropertyDescriptor.Builder()
+.name("record-stats-reader")
+.displayName("Record Reader")
+.description("A record reader to use for reading the records.")
+.addValidator(Validator.VALID)
+.identifiesControllerService(RecordReaderFactory.class)
+.build();
+
+static final Relationship REL_SUCCESS = new Relationship.Builder()
+.name("success")
+.description("If a flowfile is successfully processed, it goes 
here.")
+.build();
+static final Relationship REL_FAILURE = new Relationship.Builder()
+.name("failure")
+.description("If a flowfile fails to be processed, it goes here.")
+.build();
+
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.displayName(propertyDescriptorName)
+.dynamic(true)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+}
+
+private RecordPathCache cache;
+
+@OnScheduled
+public void onEnabled(ProcessContext context) {
+cache = new RecordPathCache(25);
+}
+
+@Override
+public Set getRelationships() {
+return new HashSet() {{
+add(REL_SUCCESS);
+add(REL_FAILURE);
+}};
+}
+
+@Override
+public void onTrigger(ProcessContext context, ProcessSession session) 
throws ProcessException {
+FlowFile input = session.get();
+if (input == null) {
+return;
+}
+
+try {
+Map paths = getRecordPaths(context);
+Map stats = getStats(input, paths, context, 
session);
+
+input = session.putAllAttributes(input, stats);
+
+session.transfer(input, REL_SUCCESS);
+
+} catch (Exception ex) {
+getLogger().error("Error processing stats.", ex);
+session.transfer(input, 

[GitHub] nifi pull request #2737: NIFI-5231 Added RecordStats processor.

2018-05-24 Thread ottobackwards
Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r190714515
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/RecordStats.java
 ---
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.RecordPathResult;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+public class RecordStats extends AbstractProcessor {
+static final PropertyDescriptor RECORD_READER = new 
PropertyDescriptor.Builder()
+.name("record-stats-reader")
+.displayName("Record Reader")
+.description("A record reader to use for reading the records.")
+.addValidator(Validator.VALID)
+.identifiesControllerService(RecordReaderFactory.class)
+.build();
+
+static final Relationship REL_SUCCESS = new Relationship.Builder()
+.name("success")
+.description("If a flowfile is successfully processed, it goes 
here.")
+.build();
+static final Relationship REL_FAILURE = new Relationship.Builder()
+.name("failure")
+.description("If a flowfile fails to be processed, it goes here.")
+.build();
+
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.displayName(propertyDescriptorName)
+.dynamic(true)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+}
+
+private RecordPathCache cache;
+
+@OnScheduled
+public void onEnabled(ProcessContext context) {
+cache = new RecordPathCache(25);
+}
+
+@Override
+public Set getRelationships() {
+return new HashSet() {{
+add(REL_SUCCESS);
+add(REL_FAILURE);
+}};
+}
+
+@Override
+public void onTrigger(ProcessContext context, ProcessSession session) 
throws ProcessException {
+FlowFile input = session.get();
+if (input == null) {
+return;
+}
+
+try {
+Map paths = getRecordPaths(context);
+Map stats = getStats(input, paths, context, 
session);
+
+input = session.putAllAttributes(input, stats);
+
+session.transfer(input, REL_SUCCESS);
+
+} catch (Exception ex) {
+getLogger().error("Error processing stats.", ex);
+session.transfer(input, 

[jira] [Updated] (NIFI-5235) SQLExecute with MS SQL Server fails with org.apache.avro.SchemaParseException: Empty name

2018-05-24 Thread Ravi Pratap Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Pratap Singh updated NIFI-5235:

Description: 
This issue started happening when we upgraded from nifi-1.2.0.3.0.2.0-76 to 
nifi-1.5.0.3.1.1.0-35. While trying to launch SQLExecute Processor against MS 
SQL server, I received the following error:

 
{code:java}
2018-05-24 15:52:06,625 ERROR [Timer-Driven Process Thread-58] 
o.a.nifi.processors.standard.ExecuteSQL 
ExecuteSQL[id=9eda8a6b-21c0-31b8-b665-56464223632e] 
ExecuteSQL[id=9eda8a6b-21c0-31b8-b665-56464223632e] failed to process due to 
org.apache.avro.SchemaParseException: Empty name; rolling back session: {}
org.apache.avro.SchemaParseException: Empty name
at org.apache.avro.Schema.validateName(Schema.java:1144)
at org.apache.avro.Schema.access$200(Schema.java:81)
at org.apache.avro.Schema$Field.<init>(Schema.java:403)
at 
org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2124)
at 
org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2120)
at 
org.apache.avro.SchemaBuilder$FieldBuilder.access$5200(SchemaBuilder.java:2034)
at org.apache.avro.SchemaBuilder$FieldDefault.noDefault(SchemaBuilder.java:2146)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:488)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
at org.apache.nifi.processors.standard.ExecuteSQL$2.process(ExecuteSQL.java:230)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:218)
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748){code}
Here is the processor configuration
{code:java}
Database Driver Class Name : com.microsoft.sqlserver.jdbc.SQLServerDriver
Database Driver Jar Url : file:///path/to/sqljdbc4.jar{code}

  was:
This issue started happening when we upgraded from nifi-1.2.0.3.0.2.0-76 to 
nifi-1.5.0.3.1.1.0-35. While trying to launch SQLExecute Processor against MS 
SQL server, I received the following error:

2018-05-24 15:52:06,625 ERROR [Timer-Driven Process Thread-58] 
o.a.nifi.processors.standard.ExecuteSQL 
ExecuteSQL[id=9eda8a6b-21c0-31b8-b665-56464223632e] 
ExecuteSQL[id=9eda8a6b-21c0-31b8-b665-56464223632e] failed to process due to 
org.apache.avro.SchemaParseException: Empty name; rolling back session: {}
org.apache.avro.SchemaParseException: Empty name
 at org.apache.avro.Schema.validateName(Schema.java:1144)
 at org.apache.avro.Schema.access$200(Schema.java:81)
 at org.apache.avro.Schema$Field.<init>(Schema.java:403)
 at 
org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2124)
 at 
org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2120)
 at 
org.apache.avro.SchemaBuilder$FieldBuilder.access$5200(SchemaBuilder.java:2034)
 at 
org.apache.avro.SchemaBuilder$FieldDefault.noDefault(SchemaBuilder.java:2146)
 at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:488)
 at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
 at 
org.apache.nifi.processors.standard.ExecuteSQL$2.process(ExecuteSQL.java:230)
 at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
 at 
org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:218)
 at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
 at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
 at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
 at 

[jira] [Created] (NIFI-5235) SQLExecute with MS SQL Server fails with org.apache.avro.SchemaParseException: Empty name

2018-05-24 Thread Ravi Pratap Singh (JIRA)
Ravi Pratap Singh created NIFI-5235:
---

 Summary: SQLExecute with MS SQL Server fails with 
org.apache.avro.SchemaParseException: Empty name
 Key: NIFI-5235
 URL: https://issues.apache.org/jira/browse/NIFI-5235
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.5.0
 Environment: Redhat (64 bit), HP
Reporter: Ravi Pratap Singh


This issue started happening when we upgraded from nifi-1.2.0.3.0.2.0-76 to 
nifi-1.5.0.3.1.1.0-35. While trying to launch SQLExecute Processor against MS 
SQL server, I received the following error:

{code:java}
2018-05-24 15:52:06,625 ERROR [Timer-Driven Process Thread-58] 
o.a.nifi.processors.standard.ExecuteSQL 
ExecuteSQL[id=9eda8a6b-21c0-31b8-b665-56464223632e] 
ExecuteSQL[id=9eda8a6b-21c0-31b8-b665-56464223632e] failed to process due to 
org.apache.avro.SchemaParseException: Empty name; rolling back session: {}
org.apache.avro.SchemaParseException: Empty name
 at org.apache.avro.Schema.validateName(Schema.java:1144)
 at org.apache.avro.Schema.access$200(Schema.java:81)
 at org.apache.avro.Schema$Field.<init>(Schema.java:403)
 at 
org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2124)
 at 
org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2120)
 at 
org.apache.avro.SchemaBuilder$FieldBuilder.access$5200(SchemaBuilder.java:2034)
 at 
org.apache.avro.SchemaBuilder$FieldDefault.noDefault(SchemaBuilder.java:2146)
 at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:488)
 at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
 at 
org.apache.nifi.processors.standard.ExecuteSQL$2.process(ExecuteSQL.java:230)
 at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
 at 
org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:218)
 at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
 at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
 at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
 at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:748)
{code}


Here is the processor configuration

Database Driver Class Name : com.microsoft.sqlserver.jdbc.SQLServerDriver
Database Driver Jar Url : file:///path/to/sqljdbc4.jar
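
Avro rejects field names that are empty or start with a non-letter, and unaliased computed columns (for example `COUNT(*)` from MS SQL Server) can come back with an empty label, which is what triggers the "Empty name" above. A minimal sketch of the check and a fallback rename (assumption: this is a simplified illustration of Avro's `Schema.validateName` rules, not NiFi's actual `JdbcCommon` code, and the `column_N` naming is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class ColumnNameSanitizer {
    // Simplified version of Avro's name rule: non-empty, first char a
    // letter or '_', remaining chars letters, digits, or '_'.
    static boolean isAvroSafe(String name) {
        if (name == null || name.isEmpty()) {
            return false;
        }
        char first = name.charAt(0);
        if (!(Character.isLetter(first) || first == '_')) {
            return false;
        }
        for (int i = 1; i < name.length(); i++) {
            char c = name.charAt(i);
            if (!(Character.isLetterOrDigit(c) || c == '_')) {
                return false;
            }
        }
        return true;
    }

    // Replace unsafe column labels with a generated placeholder so an
    // Avro schema can still be built from the result set metadata.
    static List<String> sanitize(List<String> columnLabels) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i < columnLabels.size(); i++) {
            String label = columnLabels.get(i);
            result.add(isAvroSafe(label) ? label : "column_" + (i + 1));
        }
        return result;
    }
}
```

On the flow side, the simplest workaround is to alias every computed column in the query itself, e.g. `SELECT COUNT(*) AS row_count FROM my_table`.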




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5066) Support enable and disable component action when multiple components selected or when selecting a process group.

2018-05-24 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-5066:
--
Status: Patch Available  (was: Reopened)

> Support enable and disable component action when multiple components selected 
> or when selecting a process group.
> 
>
> Key: NIFI-5066
> URL: https://issues.apache.org/jira/browse/NIFI-5066
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.5.0
>Reporter: Matthew Clarke
>Assignee: Matt Gilman
>Priority: Major
> Fix For: 1.7.0
>
>
> Currently NiFi validates all processors that are in a STOPPED state.  To 
> reduce impact when flows contain very large numbers of STOPPED processors, 
> users should be disabling these STOPPED processors.  NiFi's "Enable" and 
> "Disable" buttons do not support being used when more than one processor is 
> selected.  When needing to enable or disable large numbers of processors, 
> this is less than ideal. The Enable and Disable buttons should work 
> similarly to how the Start and Stop buttons work.
> Have multiple components selected or a process group selected.  Select the 
> "Enable" or "Disable" button.  Any eligible component (those that are not 
> running) should be either enabled or disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5066) Support enable and disable component action when multiple components selected or when selecting a process group.

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489709#comment-16489709
 ] 

ASF GitHub Bot commented on NIFI-5066:
--

GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2739

NIFI-5066: Fixing enable/disable verification

NIFI-5066:
- Ensuring we verify we can enable/disable when appropriate.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-5066

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2739.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2739


commit d1f43d0278f43f5570d9fae221717d062a204e4d
Author: Matt Gilman 
Date:   2018-05-24T20:12:02Z

NIFI-5066:
- Ensuring we verify we can enable/disable when appropriate.




> Support enable and disable component action when multiple components selected 
> or when selecting a process group.
> 
>
> Key: NIFI-5066
> URL: https://issues.apache.org/jira/browse/NIFI-5066
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.5.0
>Reporter: Matthew Clarke
>Assignee: Matt Gilman
>Priority: Major
> Fix For: 1.7.0
>
>
> Currently NiFi validates all processors that are in a STOPPED state.  To 
> reduce impact when flows contain very large numbers of STOPPED processors, 
> users should be disabling these STOPPED processors.  NiFi's "Enable" and 
> "Disable" buttons do not support being used when more than one processor is 
> selected.  When needing to enable or disable large numbers of processors, 
> this is less than ideal. The Enable and Disable buttons should work 
> similarly to how the Start and Stop buttons work.
> Have multiple components selected or a process group selected.  Select the 
> "Enable" or "Disable" button.  Any eligible component (those that are not 
> running) should be either enabled or disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2739: NIFI-5066: Fixing enable/disable verification

2018-05-24 Thread mcgilman
GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2739

NIFI-5066: Fixing enable/disable verification

NIFI-5066:
- Ensuring we verify we can enable/disable when appropriate.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-5066

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2739.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2739


commit d1f43d0278f43f5570d9fae221717d062a204e4d
Author: Matt Gilman 
Date:   2018-05-24T20:12:02Z

NIFI-5066:
- Ensuring we verify we can enable/disable when appropriate.




---


[jira] [Commented] (NIFI-4227) Create a ForkRecord processor

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489694#comment-16489694
 ] 

ASF GitHub Bot commented on NIFI-4227:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2037


> Create a ForkRecord processor
> -
>
> Key: NIFI-4227
> URL: https://issues.apache.org/jira/browse/NIFI-4227
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.7.0
>
> Attachments: TestForkRecord.xml
>
>
> I'd like a way to fork a record containing an array of records into multiple 
> records, each one being an element of the array. In addition, if configured 
> to, I'd like the option to add to each new record the parent fields.
> For example, if I have:
> {noformat}
> [{
>   "id": 1,
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "accounts": [{
>   "id": 42,
>   "balance": 4750.89
>   }, {
>   "id": 43,
>   "balance": 48212.38
>   }]
> }, 
> {
>   "id": 2,
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "accounts": [{
>   "id": 45,
>   "balance": 6578.45
>   }, {
>   "id": 46,
>   "balance": 34567.21
>   }]
> }]
> {noformat}
> Then, I want to generate records looking like:
> {noformat}
> [{
>   "id": 42,
>   "balance": 4750.89
> }, {
>   "id": 43,
>   "balance": 48212.38
> }, {
>   "id": 45,
>   "balance": 6578.45
> }, {
>   "id": 46,
>   "balance": 34567.21
> }]
> {noformat}
> Or, if parent fields are included, looking like:
> {noformat}
> [{
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "id": 42,
>   "balance": 4750.89
> }, {
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "id": 43,
>   "balance": 48212.38
> }, {
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "id": 45,
>   "balance": 6578.45
> }, {
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "id": 46,
>   "balance": 34567.21
> }]
> {noformat}
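
The forking described above boils down to flattening an array field, optionally copying the parent's remaining fields into each child. A sketch over plain maps (assumption: the real ForkRecord processor operates on NiFi's Record API with record paths; `ForkSketch` and its method names are illustrative only):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ForkSketch {
    // Fork each parent record on the given array field; when
    // includeParentFields is true, copy the parent's other fields into
    // every child, with child fields winning on name collisions.
    @SuppressWarnings("unchecked")
    static List<Map<String, Object>> fork(List<Map<String, Object>> parents,
                                          String arrayField,
                                          boolean includeParentFields) {
        List<Map<String, Object>> out = new ArrayList<>();
        for (Map<String, Object> parent : parents) {
            Object value = parent.get(arrayField);
            if (!(value instanceof List)) {
                continue; // nothing to fork on for this record
            }
            for (Map<String, Object> child : (List<Map<String, Object>>) value) {
                Map<String, Object> record = new HashMap<>();
                if (includeParentFields) {
                    record.putAll(parent);
                    record.remove(arrayField); // drop the array itself
                }
                record.putAll(child); // child fields override parent fields
                out.add(record);
            }
        }
        return out;
    }
}
```

With the sample data above, forking on "accounts" without parent fields yields the four bare account records; with parent fields, each account also carries its owner's name, address, and so on, and the child "id" overrides the parent "id", matching the expected output shown.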



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2037: NIFI-4227 - add a ForkRecord processor

2018-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2037


---


[jira] [Commented] (NIFI-4227) Create a ForkRecord processor

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489690#comment-16489690
 ] 

ASF GitHub Bot commented on NIFI-4227:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2037
  
@pvillard31 sorry it's taken me so long to get back to this - it seems to 
have slipped through the cracks. I did another review and all looks good 
(except the descriptions of "Extract Mode" and "Split Mode" were reversed, 
and "Additional Details" confirmed that, so I switched them). Did some 
tests and all works as expected. Definitely a nice addition to the palette! 
Thanks! +1 merged to master.


> Create a ForkRecord processor
> -
>
> Key: NIFI-4227
> URL: https://issues.apache.org/jira/browse/NIFI-4227
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.7.0
>
> Attachments: TestForkRecord.xml
>
>
> I'd like a way to fork a record containing an array of records into multiple 
> records, each one being an element of the array. In addition, if configured 
> to, I'd like the option to add to each new record the parent fields.
> For example, if I have:
> {noformat}
> [{
>   "id": 1,
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "accounts": [{
>   "id": 42,
>   "balance": 4750.89
>   }, {
>   "id": 43,
>   "balance": 48212.38
>   }]
> }, 
> {
>   "id": 2,
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "accounts": [{
>   "id": 45,
>   "balance": 6578.45
>   }, {
>   "id": 46,
>   "balance": 34567.21
>   }]
> }]
> {noformat}
> Then, I want to generate records looking like:
> {noformat}
> [{
>   "id": 42,
>   "balance": 4750.89
> }, {
>   "id": 43,
>   "balance": 48212.38
> }, {
>   "id": 45,
>   "balance": 6578.45
> }, {
>   "id": 46,
>   "balance": 34567.21
> }]
> {noformat}
> Or, if parent fields are included, looking like:
> {noformat}
> [{
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "id": 42,
>   "balance": 4750.89
> }, {
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "id": 43,
>   "balance": 48212.38
> }, {
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "id": 45,
>   "balance": 6578.45
> }, {
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "id": 46,
>   "balance": 34567.21
> }]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4227) Create a ForkRecord processor

2018-05-24 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-4227:
-
   Resolution: Fixed
Fix Version/s: 1.7.0
   Status: Resolved  (was: Patch Available)

> Create a ForkRecord processor
> -
>
> Key: NIFI-4227
> URL: https://issues.apache.org/jira/browse/NIFI-4227
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.7.0
>
> Attachments: TestForkRecord.xml
>
>
> I'd like a way to fork a record containing an array of records into multiple 
> records, each one being an element of the array. In addition, if configured 
> to, I'd like the option to add to each new record the parent fields.
> For example, if I have:
> {noformat}
> [{
>   "id": 1,
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "accounts": [{
>   "id": 42,
>   "balance": 4750.89
>   }, {
>   "id": 43,
>   "balance": 48212.38
>   }]
> }, 
> {
>   "id": 2,
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "accounts": [{
>   "id": 45,
>   "balance": 6578.45
>   }, {
>   "id": 46,
>   "balance": 34567.21
>   }]
> }]
> {noformat}
> Then, I want to generate records looking like:
> {noformat}
> [{
>   "id": 42,
>   "balance": 4750.89
> }, {
>   "id": 43,
>   "balance": 48212.38
> }, {
>   "id": 45,
>   "balance": 6578.45
> }, {
>   "id": 46,
>   "balance": 34567.21
> }]
> {noformat}
> Or, if parent fields are included, looking like:
> {noformat}
> [{
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "id": 42,
>   "balance": 4750.89
> }, {
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "id": 43,
>   "balance": 48212.38
> }, {
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "id": 45,
>   "balance": 6578.45
> }, {
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "id": 46,
>   "balance": 34567.21
> }]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2037: NIFI-4227 - add a ForkRecord processor

2018-05-24 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2037
  
@pvillard31 sorry it's taken me so long to get back to this - it seems to 
have slipped through the cracks. I did another review and all looks good 
(except the descriptions of "Extract Mode" and "Split Mode" were reversed, 
and "Additional Details" confirmed that, so I switched them). Did some 
tests and all works as expected. Definitely a nice addition to the palette! 
Thanks! +1 merged to master.


---


[jira] [Commented] (NIFI-4227) Create a ForkRecord processor

2018-05-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489687#comment-16489687
 ] 

ASF subversion and git services commented on NIFI-4227:
---

Commit 397e88c8582d6fffe7fd3c21be1663a0c4c22877 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=397e88c ]

NIFI-4227: Fixed typo


> Create a ForkRecord processor
> -
>
> Key: NIFI-4227
> URL: https://issues.apache.org/jira/browse/NIFI-4227
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Attachments: TestForkRecord.xml
>
>
> I'd like a way to fork a record containing an array of records into multiple 
> records, each one being an element of the array. In addition, if configured 
> to, I'd like the option to add to each new record the parent fields.
> For example, if I have:
> {noformat}
> [{
>   "id": 1,
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "accounts": [{
>   "id": 42,
>   "balance": 4750.89
>   }, {
>   "id": 43,
>   "balance": 48212.38
>   }]
> }, 
> {
>   "id": 2,
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "accounts": [{
>   "id": 45,
>   "balance": 6578.45
>   }, {
>   "id": 46,
>   "balance": 34567.21
>   }]
> }]
> {noformat}
> Then, I want to generate records looking like:
> {noformat}
> [{
>   "id": 42,
>   "balance": 4750.89
> }, {
>   "id": 43,
>   "balance": 48212.38
> }, {
>   "id": 45,
>   "balance": 6578.45
> }, {
>   "id": 46,
>   "balance": 34567.21
> }]
> {noformat}
> Or, if parent fields are included, looking like:
> {noformat}
> [{
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "id": 42,
>   "balance": 4750.89
> }, {
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "id": 43,
>   "balance": 48212.38
> }, {
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "id": 45,
>   "balance": 6578.45
> }, {
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "id": 46,
>   "balance": 34567.21
> }]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4227) Create a ForkRecord processor

2018-05-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489686#comment-16489686
 ] 

ASF subversion and git services commented on NIFI-4227:
---

Commit be0ed704231fb51ed233b58575e1378fc2c93e1d in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=be0ed70 ]

NIFI-4227 - add a ForkRecord processor
Added split/extract modes, unit tests, and additional details

Signed-off-by: Mark Payne 


> Create a ForkRecord processor
> -
>
> Key: NIFI-4227
> URL: https://issues.apache.org/jira/browse/NIFI-4227
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Attachments: TestForkRecord.xml
>
>
> I'd like a way to fork a record containing an array of records into multiple 
> records, each one being an element of the array. In addition, if configured 
> to, I'd like the option to add to each new record the parent fields.
> For example, if I have:
> {noformat}
> [{
>   "id": 1,
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "accounts": [{
>   "id": 42,
>   "balance": 4750.89
>   }, {
>   "id": 43,
>   "balance": 48212.38
>   }]
> }, 
> {
>   "id": 2,
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "accounts": [{
>   "id": 45,
>   "balance": 6578.45
>   }, {
>   "id": 46,
>   "balance": 34567.21
>   }]
> }]
> {noformat}
> Then, I want to generate records looking like:
> {noformat}
> [{
>   "id": 42,
>   "balance": 4750.89
> }, {
>   "id": 43,
>   "balance": 48212.38
> }, {
>   "id": 45,
>   "balance": 6578.45
> }, {
>   "id": 46,
>   "balance": 34567.21
> }]
> {noformat}
> Or, if parent fields are included, looking like:
> {noformat}
> [{
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "id": 42,
>   "balance": 4750.89
> }, {
>   "name": "John Doe",
>   "address": "123 My Street",
>   "city": "My City", 
>   "state": "MS",
>   "zipCode": "1",
>   "country": "USA",
>   "id": 43,
>   "balance": 48212.38
> }, {
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "id": 45,
>   "balance": 6578.45
> }, {
>   "name": "Jane Doe",
>   "address": "345 My Street",
>   "city": "Her City", 
>   "state": "NY",
>   "zipCode": "2",
>   "country": "USA",
>   "id": 46,
>   "balance": 34567.21
> }]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5231) Record stats processor

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489679#comment-16489679
 ] 

ASF GitHub Bot commented on NIFI-5231:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2737
  
I'm not too familiar with the deep internals of the framework either. What 
we've seen is that with the records API it just makes sense to leverage the 
provenance system because it already tracks the attributes in a clean way you 
can leverage for stuff like giving managers a nice little ELK dashboard for the 
warm fuzzies.


> Record stats processor
> --
>
> Key: NIFI-5231
> URL: https://issues.apache.org/jira/browse/NIFI-5231
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> Should do the following:
>  
>  # Take a record reader.
>  # Count the # of records and add a record_count attribute to the flowfile.
>  # Allow user-defined properties that do the following:
>  ## Map attribute name -> record path.
>  ## Provide aggregate value counts for each record path statement.
>  ## Provide total count for record path operation.
>  ## Put those values on the flowfile as attributes.
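
The stats in the list above reduce to a total record count plus a per-value histogram for each mapped field. A minimal sketch of that aggregation step over plain maps (assumption: the real processor reads records via a record reader and resolves record paths; the `record_count` and `<field>.<value>` attribute naming here is illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RecordStatsSketch {
    // Produce attribute-style stats: total record count plus, for each
    // requested field, a total count and a count per distinct value.
    static Map<String, Long> stats(List<Map<String, Object>> records,
                                   List<String> fields) {
        Map<String, Long> out = new HashMap<>();
        out.put("record_count", (long) records.size());
        for (Map<String, Object> record : records) {
            for (String field : fields) {
                Object value = record.get(field);
                if (value == null) {
                    continue; // skip records missing this field
                }
                out.merge(field, 1L, Long::sum);               // total per field
                out.merge(field + "." + value, 1L, Long::sum); // per distinct value
            }
        }
        return out;
    }
}
```

For three records with states NY, NY, MS and a mapping on "state", this yields `record_count=3`, `state=3`, `state.NY=2`, `state.MS=1`, which is the shape of attribute set the ticket describes.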



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2737: NIFI-5231 Added RecordStats processor.

2018-05-24 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2737
  
I'm not too familiar with the deep internals of the framework either. What 
we've seen is that with the records API it just makes sense to leverage the 
provenance system because it already tracks the attributes in a clean way you 
can leverage for stuff like giving managers a nice little ELK dashboard for the 
warm fuzzies.


---


[GitHub] nifi pull request #2738: NIFI-5233 - Add EL support with Variable Registry s...

2018-05-24 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2738#discussion_r190710420
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase-client-service-api/src/main/java/org/apache/nifi/hbase/HBaseClientService.java
 ---
@@ -42,24 +42,28 @@
                " such as hbase-site.xml and core-site.xml for kerberos, " +
                "including full paths to the files.")
            .addValidator(new ConfigFilesValidator())
+           .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .build();

    PropertyDescriptor ZOOKEEPER_QUORUM = new PropertyDescriptor.Builder()
            .name("ZooKeeper Quorum")
            .description("Comma-separated list of ZooKeeper hosts for HBase. Required if Hadoop Configuration Files are not provided.")
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+           .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .build();

    PropertyDescriptor ZOOKEEPER_CLIENT_PORT = new PropertyDescriptor.Builder()
            .name("ZooKeeper Client Port")
            .description("The port on which ZooKeeper is accepting client connections. Required if Hadoop Configuration Files are not provided.")
            .addValidator(StandardValidators.PORT_VALIDATOR)
+           .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .build();

    PropertyDescriptor ZOOKEEPER_ZNODE_PARENT = new PropertyDescriptor.Builder()
            .name("ZooKeeper ZNode Parent")
            .description("The ZooKeeper ZNode Parent value for HBase (example: /hbase). Required if Hadoop Configuration Files are not provided.")
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+           .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .build();

    PropertyDescriptor HBASE_CLIENT_RETRIES = new PropertyDescriptor.Builder()
--- End diff --

Why did you skip the client retries?


---


[jira] [Commented] (NIFI-5233) Enable expression language in Hadoop Configuration Files property of Hbase Client Service

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489678#comment-16489678
 ] 

ASF GitHub Bot commented on NIFI-5233:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2738#discussion_r190710420
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase-client-service-api/src/main/java/org/apache/nifi/hbase/HBaseClientService.java
 ---
@@ -42,24 +42,28 @@
                " such as hbase-site.xml and core-site.xml for kerberos, " +
                "including full paths to the files.")
            .addValidator(new ConfigFilesValidator())
+           .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .build();

    PropertyDescriptor ZOOKEEPER_QUORUM = new PropertyDescriptor.Builder()
            .name("ZooKeeper Quorum")
            .description("Comma-separated list of ZooKeeper hosts for HBase. Required if Hadoop Configuration Files are not provided.")
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+           .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .build();

    PropertyDescriptor ZOOKEEPER_CLIENT_PORT = new PropertyDescriptor.Builder()
            .name("ZooKeeper Client Port")
            .description("The port on which ZooKeeper is accepting client connections. Required if Hadoop Configuration Files are not provided.")
            .addValidator(StandardValidators.PORT_VALIDATOR)
+           .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .build();

    PropertyDescriptor ZOOKEEPER_ZNODE_PARENT = new PropertyDescriptor.Builder()
            .name("ZooKeeper ZNode Parent")
            .description("The ZooKeeper ZNode Parent value for HBase (example: /hbase). Required if Hadoop Configuration Files are not provided.")
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+           .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .build();

    PropertyDescriptor HBASE_CLIENT_RETRIES = new PropertyDescriptor.Builder()
--- End diff --

Why did you skip the client retries?


> Enable expression language in Hadoop Configuration Files property of Hbase 
> Client Service
> -
>
> Key: NIFI-5233
> URL: https://issues.apache.org/jira/browse/NIFI-5233
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: Gergely Devai
>Assignee: Pierre Villard
>Priority: Minor
>  Labels: easyfix
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> In some Hadoop related processors (e.g. GetHDFS, DeleteHDFS) the "Hadoop 
> Configuration Files" property supports expression language. This is 
> convenient, as the lengthy paths to the config files can be stored in a 
> property file loaded by Nifi at startup or in an environment variable, and 
> the name of the property/environment variable can be referenced in the 
> processors' configuration.
> The controller service HBase_1_1_2_ClientService also has the "Hadoop 
> Configuration Files" property, but it does not support expression language. 
> For the convenience reasons described above and for uniformity, it is 
> desirable to allow expression language in that property as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #339: MINIFCPP-509: resolve typo in port

2018-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/339


---


[GitHub] nifi-minifi-cpp issue #339: MINIFCPP-509: resolve typo in port

2018-05-24 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/339
  
shame on me for not building after those last changes. will merge


---


[GitHub] nifi-minifi-cpp pull request #341: Minificpp 507

2018-05-24 Thread achristianson
GitHub user achristianson opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/341

Minificpp 507

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [x] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the LICENSE file?
- [x] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-507

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/341.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #341


commit 96d31e11147b1c255640bf4f83eb0766d755cd5b
Author: Andrew I. Christianson 
Date:   2018-05-24T19:34:29Z

MINIFICPP-500 Incorporate mutually-exclusive property metadata into agent 
information

commit bce7e010be9e21a62e9e0c75418898ba9cb570bc
Author: Andrew I. Christianson 
Date:   2018-05-24T19:40:13Z

MINIFICPP-507 Added NOOP appveyor.xml




---


[jira] [Resolved] (MINIFICPP-477) Add missing encode/decode heading to EL docs

2018-05-24 Thread Andrew Christianson (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Christianson resolved MINIFICPP-477.
---
Resolution: Fixed

> Add missing encode/decode heading to EL docs
> 
>
> Key: MINIFICPP-477
> URL: https://issues.apache.org/jira/browse/MINIFICPP-477
> Project: NiFi MiNiFi C++
>  Issue Type: Documentation
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> Encode/decode intro section is missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-500) Incorporate mutally-exlusive property metadata into agent manifest

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489652#comment-16489652
 ] 

ASF GitHub Bot commented on MINIFICPP-500:
--

GitHub user achristianson opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/340

MINIFICPP-500 Incorporate mutually-exclusive property metadata into a…

…gent information

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [x] Does your PR title start with MINIFI-<XXXX> where <XXXX> is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the LICENSE file?
- [x] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-500

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/340.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #340


commit 96d31e11147b1c255640bf4f83eb0766d755cd5b
Author: Andrew I. Christianson 
Date:   2018-05-24T19:34:29Z

MINIFICPP-500 Incorporate mutually-exclusive property metadata into agent 
information




> Incorporate mutally-exlusive property metadata into agent manifest
> --
>
> Key: MINIFICPP-500
> URL: https://issues.apache.org/jira/browse/MINIFICPP-500
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> Report properties which are mutually-exclusive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #340: MINIFICPP-500 Incorporate mutually-exclus...

2018-05-24 Thread achristianson
GitHub user achristianson opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/340

MINIFICPP-500 Incorporate mutually-exclusive property metadata into a…

…gent information

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [x] Does your PR title start with MINIFI-<XXXX> where <XXXX> is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the LICENSE file?
- [x] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-500

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/340.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #340


commit 96d31e11147b1c255640bf4f83eb0766d755cd5b
Author: Andrew I. Christianson 
Date:   2018-05-24T19:34:29Z

MINIFICPP-500 Incorporate mutually-exclusive property metadata into agent 
information




---


[jira] [Reopened] (NIFI-5066) Support enable and disable component action when multiple components selected or when selecting a process group.

2018-05-24 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reopened NIFI-5066:
---

Reopening issue because verification needs to consider the enable/disable case 
separately from the schedule/unschedule case.

> Support enable and disable component action when multiple components selected 
> or when selecting a process group.
> 
>
> Key: NIFI-5066
> URL: https://issues.apache.org/jira/browse/NIFI-5066
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.5.0
>Reporter: Matthew Clarke
>Assignee: Matt Gilman
>Priority: Major
> Fix For: 1.7.0
>
>
> Currently NiFi validates all processors that are in a STOPPED state.  To 
> reduce impact when flows contain very large numbers of STOPPED processors, 
> users should be disabling these STOPPED processors.  NiFi's "Enable" and 
> "Disable" buttons do not support being used when more than one processor is 
> selected.  When needing to enable or disable large numbers of processors, 
> this is less than ideal. The Enable and Disable buttons should work similarly 
> to how the Start and Stop buttons work.
> Have multiple components selected or a process group selected.  Select the 
> "Enable" or "Disable" button.  Any eligible component (those that are not 
> running) should be either enabled or disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-3492) Allow configuration of default back pressure

2018-05-24 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-3492.
-
Resolution: Duplicate

Fixed in NIFI-3599

> Allow configuration of default back pressure
> 
>
> Key: NIFI-3492
> URL: https://issues.apache.org/jira/browse/NIFI-3492
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.1.1
>Reporter: Brandon DeVries
>Priority: Major
>
> NiFi 1.x sets a default back pressure of 10K files / 1 GB (hardcoded in 
> StandardFlowFileQueue) instead of the "unlimited" default in 0.x. This is 
> better in a lot of ways... however those values are potentially a bit 
> arbitrary, and not appropriate for every system.
> These values should be configurable, and exposed in nifi.properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
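
The NIFI-3492 description above asks for the hardcoded 10K-FlowFile / 1 GB 
defaults to be exposed in nifi.properties. NIFI-3599, of which this ticket is 
a duplicate, took that approach; a sketch of what the resulting configuration 
looks like (property names as introduced by NIFI-3599 -- verify them against 
the nifi.properties of your NiFi release):

```properties
# Default back pressure applied to newly created connections.
# These values mirror the old hardcoded StandardFlowFileQueue
# defaults: 10,000 FlowFiles or 1 GB of queued content.
nifi.queue.backpressure.count=10000
nifi.queue.backpressure.size=1 GB
```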


[GitHub] nifi-minifi-cpp pull request #339: MINIFCPP-509: resolve typo in port

2018-05-24 Thread phrocker
GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/339

MINIFCPP-509: resolve typo in port

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFI-<XXXX> where <XXXX> is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-509

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/339.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #339


commit fb35cc9b0d16bb3d1aea2aabce7405370fdfc7ac
Author: Marc Parisi 
Date:   2018-05-24T19:19:50Z

MINIFCPP-509: resolve typo in port




---


[jira] [Created] (MINIFICPP-509) Fix typo in capi

2018-05-24 Thread marco polo (JIRA)
marco polo created MINIFICPP-509:


 Summary: Fix typo in capi
 Key: MINIFICPP-509
 URL: https://issues.apache.org/jira/browse/MINIFICPP-509
 Project: NiFi MiNiFi C++
  Issue Type: Bug
Reporter: marco polo
Assignee: marco polo






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5171) YandexTranslate Processor Improvement

2018-05-24 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489558#comment-16489558
 ] 

Mark Payne commented on NIFI-5171:
--

[~veteranbv] thanks for the improvement and the fix! Code looks good, was able 
to verify locally that it was broken before and works now - and that I don't 
have to specify the source language. Very cool, many thanks! Merged to master.

> YandexTranslate Processor Improvement
> -
>
> Key: NIFI-5171
> URL: https://issues.apache.org/jira/browse/NIFI-5171
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Henry Sowell
>Priority: Minor
> Fix For: 1.7.0
>
>
> I'm currently working to improve the YandexTranslate processor to 
> automatically detect the input language. Here are the core items to change:
>  * Default input language blank
>  * Receive back detected language from API
> If default language set, current behavior would remain as-is. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5171) YandexTranslate Processor Improvement

2018-05-24 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-5171:
-
   Resolution: Fixed
Fix Version/s: 1.7.0
   Status: Resolved  (was: Patch Available)

> YandexTranslate Processor Improvement
> -
>
> Key: NIFI-5171
> URL: https://issues.apache.org/jira/browse/NIFI-5171
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Henry Sowell
>Priority: Minor
> Fix For: 1.7.0
>
>
> I'm currently working to improve the YandexTranslate processor to 
> automatically detect the input language. Here are the core items to change:
>  * Default input language blank
>  * Receive back detected language from API
> If default language set, current behavior would remain as-is. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5171) YandexTranslate Processor Improvement

2018-05-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489555#comment-16489555
 ] 

ASF subversion and git services commented on NIFI-5171:
---

Commit 3b3d6d4eb2c9de250f22598f16b05811cf3ff17f in nifi's branch 
refs/heads/master from veteranbv
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=3b3d6d4 ]

NIFI-5171: fixed Yandex Jersey issues by adding dependency to POM and modified 
API call to now detect for languages
Added license to YandexTranslate NOTICE file
Updated to use StringUtils.isBlank for detecting sourceLanguage field being 
blank and languages to file to align with new logic

Signed-off-by: Mark Payne 


> YandexTranslate Processor Improvement
> -
>
> Key: NIFI-5171
> URL: https://issues.apache.org/jira/browse/NIFI-5171
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Henry Sowell
>Priority: Minor
>
> I'm currently working to improve the YandexTranslate processor to 
> automatically detect the input language. Here are the core items to change:
>  * Default input language blank
>  * Receive back detected language from API
> If default language set, current behavior would remain as-is. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-494) Resolve C2 issues with memory access

2018-05-24 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved MINIFICPP-494.
---
Resolution: Fixed

2821d71d6ee965c7390def385bcd3b5ae0d4c376

> Resolve C2 issues with memory access
> 
>
> Key: MINIFICPP-494
> URL: https://issues.apache.org/jira/browse/MINIFICPP-494
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-492) Resolve Site to site to site issues with volatile repo in C API

2018-05-24 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved MINIFICPP-492.
---
   Resolution: Fixed
Fix Version/s: 0.5.0

> Resolve Site to site to site issues with volatile repo in C API
> ---
>
> Key: MINIFICPP-492
> URL: https://issues.apache.org/jira/browse/MINIFICPP-492
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-486) Build synchronous and asynchronous C2 control functions

2018-05-24 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved MINIFICPP-486.
---
Resolution: Fixed

2821d71d6ee965c7390def385bcd3b5ae0d4c376

> Build synchronous and asynchronous C2 control functions
> ---
>
> Key: MINIFICPP-486
> URL: https://issues.apache.org/jira/browse/MINIFICPP-486
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>
> Create heartbeat functions that allow developers to create synchronous and 
> asynchronous C functions



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-492) Resolve Site to site to site issues with volatile repo in C API

2018-05-24 Thread Aldrin Piri (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489539#comment-16489539
 ] 

Aldrin Piri commented on MINIFICPP-492:
---

2821d71d6ee965c7390def385bcd3b5ae0d4c376

> Resolve Site to site to site issues with volatile repo in C API
> ---
>
> Key: MINIFICPP-492
> URL: https://issues.apache.org/jira/browse/MINIFICPP-492
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-491) Disable logging within C API

2018-05-24 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved MINIFICPP-491.
---
Resolution: Fixed

> Disable logging within C API
> 
>
> Key: MINIFICPP-491
> URL: https://issues.apache.org/jira/browse/MINIFICPP-491
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #327: MINIFICPP-491: Disable logging for C api

2018-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/327


---


[jira] [Commented] (MINIFICPP-491) Disable logging within C API

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489536#comment-16489536
 ] 

ASF GitHub Bot commented on MINIFICPP-491:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/327


> Disable logging within C API
> 
>
> Key: MINIFICPP-491
> URL: https://issues.apache.org/jira/browse/MINIFICPP-491
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-491) Disable logging within C API

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489531#comment-16489531
 ] 

ASF GitHub Bot commented on MINIFICPP-491:
--

Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/327
  
Thanks, clears that up.  Will get this merged in.


> Disable logging within C API
> 
>
> Key: MINIFICPP-491
> URL: https://issues.apache.org/jira/browse/MINIFICPP-491
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp issue #327: MINIFICPP-491: Disable logging for C api

2018-05-24 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/327
  
Thanks, clears that up.  Will get this merged in.


---


[jira] [Resolved] (MINIFICPP-501) Incorporate dependent property metadata into agent information

2018-05-24 Thread Andrew Christianson (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Christianson resolved MINIFICPP-501.
---
Resolution: Fixed

> Incorporate dependent property metadata into agent information
> --
>
> Key: MINIFICPP-501
> URL: https://issues.apache.org/jira/browse/MINIFICPP-501
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> Report which properties a property is dependent upon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-502) Support required processor properties

2018-05-24 Thread Andrew Christianson (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Christianson resolved MINIFICPP-502.
---
Resolution: Fixed

> Support required processor properties
> -
>
> Key: MINIFICPP-502
> URL: https://issues.apache.org/jira/browse/MINIFICPP-502
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> Allow developers to specify properties as required, and validate this in the 
> configuration reader.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-503) Parser.yy not synced to docker builds

2018-05-24 Thread Andrew Christianson (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Christianson resolved MINIFICPP-503.
---
Resolution: Fixed

> Parser.yy not synced to docker builds
> -
>
> Key: MINIFICPP-503
> URL: https://issues.apache.org/jira/browse/MINIFICPP-503
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> Parser.yy is not transferred into the docker image, resulting in the 
> following build error:
>  
> {{make[2]: *** No rule to make target 
> '../extensions/expression-language/Parser.yy', needed by 
> '../extensions/expression-language/Parser.cpp'. Stop.}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-498) Add travis builds targeting newer compilers (>= gcc 6)

2018-05-24 Thread Andrew Christianson (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Christianson resolved MINIFICPP-498.
---
Resolution: Fixed

> Add travis builds targeting newer compilers (>= gcc 6)
> --
>
> Key: MINIFICPP-498
> URL: https://issues.apache.org/jira/browse/MINIFICPP-498
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> Some minificpp features are only available for newer compilers (lua/sol2, 
> regex, date manipulation), and we should cover these in CI via updating 
> travis.yml.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-491) Disable logging within C API

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489515#comment-16489515
 ] 

ASF GitHub Bot commented on MINIFICPP-491:
--

Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/327#discussion_r190681005
  
--- Diff: LibExample/CMakeLists.txt ---
@@ -59,3 +59,14 @@ else ()
target_link_libraries (generate_flow -Wl,--whole-archive 
minifi-http-curl -Wl,--no-whole-archive)
 endif ()
 
+
+add_executable(monitor_directory monitor_directory.c)
+
+# Link against minifi, yaml-cpp, civetweb-cpp, uuid, openssl and rocksdb
--- End diff --

should we need to link against yaml?  not sure I understand its function in 
the lib sense.


> Disable logging within C API
> 
>
> Key: MINIFICPP-491
> URL: https://issues.apache.org/jira/browse/MINIFICPP-491
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #327: MINIFICPP-491: Disable logging for C api

2018-05-24 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/327#discussion_r190681005
  
--- Diff: LibExample/CMakeLists.txt ---
@@ -59,3 +59,14 @@ else ()
target_link_libraries (generate_flow -Wl,--whole-archive 
minifi-http-curl -Wl,--no-whole-archive)
 endif ()
 
+
+add_executable(monitor_directory monitor_directory.c)
+
+# Link against minifi, yaml-cpp, civetweb-cpp, uuid, openssl and rocksdb
--- End diff --

should we need to link against yaml?  not sure I understand its function in 
the lib sense.


---


[jira] [Commented] (MINIFICPP-491) Disable logging within C API

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489514#comment-16489514
 ] 

ASF GitHub Bot commented on MINIFICPP-491:
--

Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/327#discussion_r189354320
  
--- Diff: libminifi/include/capi/api.h ---
@@ -18,27 +18,32 @@
 #ifndef LIBMINIFI_INCLUDE_CAPI_NANOFI_H_
 #define LIBMINIFI_INCLUDE_CAPI_NANOFI_H_
 
+#include 
 #include 
+#include "processors.h"
+
+int initialize_api();
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
 #define API_VERSION "0.01"
+
+void enable_logging();
+
 /
  * ##
  *  BASE NIFI OPERATIONS
  * ##
  */
 
-
 /**
  * NiFi Port struct
  */
 typedef struct {
-char *pord_id;
-}nifi_port;
-
+  char *pord_id;
--- End diff --

-> port_id


> Disable logging within C API
> 
>
> Key: MINIFICPP-491
> URL: https://issues.apache.org/jira/browse/MINIFICPP-491
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #327: MINIFICPP-491: Disable logging for C api

2018-05-24 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/327#discussion_r189354320
  
--- Diff: libminifi/include/capi/api.h ---
@@ -18,27 +18,32 @@
 #ifndef LIBMINIFI_INCLUDE_CAPI_NANOFI_H_
 #define LIBMINIFI_INCLUDE_CAPI_NANOFI_H_
 
+#include 
 #include 
+#include "processors.h"
+
+int initialize_api();
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
 #define API_VERSION "0.01"
+
+void enable_logging();
+
 /
  * ##
  *  BASE NIFI OPERATIONS
  * ##
  */
 
-
 /**
  * NiFi Port struct
  */
 typedef struct {
-char *pord_id;
-}nifi_port;
-
+  char *pord_id;
--- End diff --

-> port_id


---
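
The review comment above flags the misspelled struct field ("pord_id" -> 
"port_id"). A minimal sketch of the corrected struct with ownership helpers; 
the nifi_port_create/nifi_port_free functions are hypothetical illustrations, 
not part of the actual MiNiFi C API:

```c
#include <stdlib.h>
#include <string.h>

/* Corrected field name per the review comment (pord_id -> port_id). */
typedef struct {
  char *port_id;
} nifi_port;

/* Portable strdup replacement (strdup is POSIX, not ISO C). */
static char *dup_str(const char *s) {
  size_t n = strlen(s) + 1;
  char *out = (char *)malloc(n);
  if (out != NULL) {
    memcpy(out, s, n);
  }
  return out;
}

/* Allocate a port and copy the given id string into it; NULL on failure. */
static nifi_port *nifi_port_create(const char *id) {
  nifi_port *port = (nifi_port *)malloc(sizeof(nifi_port));
  if (port == NULL) {
    return NULL;
  }
  port->port_id = dup_str(id);
  if (port->port_id == NULL) {
    free(port);
    return NULL;
  }
  return port;
}

/* Release the port and the id string it owns. */
static void nifi_port_free(nifi_port *port) {
  if (port != NULL) {
    free(port->port_id);
    free(port);
  }
}
```

Owning the copied string inside the struct keeps the caller free to release 
its own buffer immediately after nifi_port_create returns.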


[jira] [Commented] (MINIFICPP-491) Disable logging within C API

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489513#comment-16489513
 ] 

ASF GitHub Bot commented on MINIFICPP-491:
--

Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/327#discussion_r189353771
  
--- Diff: blocks/comms.h ---
@@ -0,0 +1,47 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef BLOCKS_COMMS_H_
+#define BLOCKS_COMMS_H_
+
+#include 
+#include "capi/api.h"
+#include "capi/processors.h"
+
+#define SUCCESS 0x00
+#define FINISHE_EARLY 0x01
--- End diff --

-> FINISHED_EARLY


> Disable logging within C API
> 
>
> Key: MINIFICPP-491
> URL: https://issues.apache.org/jira/browse/MINIFICPP-491
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #327: MINIFICPP-491: Disable logging for C api

2018-05-24 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/327#discussion_r189353771
  
--- Diff: blocks/comms.h ---
@@ -0,0 +1,47 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef BLOCKS_COMMS_H_
+#define BLOCKS_COMMS_H_
+
+#include 
+#include "capi/api.h"
+#include "capi/processors.h"
+
+#define SUCCESS 0x00
+#define FINISHE_EARLY 0x01
--- End diff --

-> FINISHED_EARLY


---


[GitHub] nifi-minifi pull request #125: MINIFI-430 Provide a batch script for executi...

2018-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi/pull/125


---


[jira] [Updated] (MINIFICPP-486) Build synchronous and asynchronous C2 control functions

2018-05-24 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo updated MINIFICPP-486:
-
Fix Version/s: 0.5.0

> Build synchronous and asynchronous C2 control functions
> ---
>
> Key: MINIFICPP-486
> URL: https://issues.apache.org/jira/browse/MINIFICPP-486
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>
> Create heartbeat functions that allow developers to create synchronous and 
> asynchronous C functions



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-491) Disable logging within C API

2018-05-24 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo updated MINIFICPP-491:
-
Fix Version/s: 0.5.0

> Disable logging within C API
> 
>
> Key: MINIFICPP-491
> URL: https://issues.apache.org/jira/browse/MINIFICPP-491
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-494) Resolve C2 issues with memory access

2018-05-24 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo updated MINIFICPP-494:
-
Fix Version/s: 0.5.0

> Resolve C2 issues with memory access
> 
>
> Key: MINIFICPP-494
> URL: https://issues.apache.org/jira/browse/MINIFICPP-494
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-502) Support required processor properties

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489456#comment-16489456
 ] 

ASF GitHub Bot commented on MINIFICPP-502:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/334


> Support required processor properties
> -
>
> Key: MINIFICPP-502
> URL: https://issues.apache.org/jira/browse/MINIFICPP-502
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> Allow developers to specify properties as required, and validate this in the 
> configuration reader.





[jira] [Commented] (MINIFICPP-501) Incorporate dependent property metadata into agent information

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489457#comment-16489457
 ] 

ASF GitHub Bot commented on MINIFICPP-501:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/333


> Incorporate dependent property metadata into agent information
> --
>
> Key: MINIFICPP-501
> URL: https://issues.apache.org/jira/browse/MINIFICPP-501
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> Report which properties a property is dependent upon.





[GitHub] nifi-minifi-cpp pull request #334: MINIFICPP-502 Add validation to config pa...

2018-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/334


---


[GitHub] nifi-minifi-cpp pull request #333: MINIFICPP-501 Incorporate dependent prope...

2018-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/333


---


[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489428#comment-16489428
 ] 

ASF GitHub Bot commented on NIFI-4838:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2448
  
@pvillard31 @mattyb149 I changed MockPropertyValue's 
`evaluateExpressionLanguage(FlowFile)` to, AFAICT, finally fix the issue. It only 
shows up on processors like this one, where incoming connections are optional.


> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call because the effect is 
> that GetMongo looks like it has hung to a user who is pulling a very large 
> data set.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231
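The segmented progress attributes described above could be derived from a batch size and an estimated total count. A minimal Java sketch of that idea (a hypothetical helper for illustration, not the actual GetMongo code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: given an estimated total document count and a
// batch size, produce the attribute map each flowfile segment would
// carry to indicate its position within the overall result set.
public class ProgressAttributes {
    public static List<Map<String, String>> segments(long estimatedCount, int batchSize) {
        List<Map<String, String>> result = new ArrayList<>();
        for (long start = 0; start < estimatedCount; start += batchSize) {
            long end = Math.min(start + batchSize, estimatedCount);
            Map<String, String> attrs = new LinkedHashMap<>();
            attrs.put("query.progress.point.start", Long.toString(start));
            attrs.put("query.progress.point.end", Long.toString(end));
            attrs.put("query.count.estimate", Long.toString(estimatedCount));
            result.add(attrs);
        }
        return result;
    }

    public static void main(String[] args) {
        // Two segments for 5000 estimated documents at batch size 2500.
        for (Map<String, String> attrs : segments(5000, 2500)) {
            System.out.println(attrs);
        }
    }
}
```

A per-batch session commit would then happen after each such segment is written, which is what keeps the processor from appearing hung on large result sets.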





[GitHub] nifi issue #2448: NIFI-4838 Added configurable progressive commits to GetMon...

2018-05-24 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2448
  
@pvillard31 @mattyb149 I changed MockPropertyValue's 
`evaluateExpressionLanguage(FlowFile)` to, AFAICT, finally fix the issue. It only 
shows up on processors like this one, where incoming connections are optional.


---


[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489425#comment-16489425
 ] 

ASF GitHub Bot commented on NIFI-4838:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r190665372
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ---
@@ -51,14 +53,22 @@
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
+import java.util.Map;
 import java.util.Set;
 
 
 @Tags({ "mongodb", "read", "get" })
 @InputRequirement(Requirement.INPUT_FORBIDDEN)
 @CapabilityDescription("Creates FlowFiles from documents in MongoDB")
+@WritesAttributes( value = {
+@WritesAttribute(attribute = "progress.estimate", description = "The 
estimated total documents that match the query. Written if estimation is 
enabled."),
+@WritesAttribute(attribute = "progress.segment.start", description = 
"Where the first part of the segment is in the total result set. Written if 
estimation is enabled."),
+@WritesAttribute(attribute = "progress.segment.end", description = 
"Where the last part of the segment is in the total result set. Written if 
estimation is enabled."),
+@WritesAttribute(attribute = "progress.index", description = "When 
results are written one-by-one to flowfiles, this is set to indicate 
estimated progress. Written if estimation is enabled.")
--- End diff --

Done.


> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call because the effect is 
> that GetMongo looks like it has hung to a user who is pulling a very large 
> data set.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231





[GitHub] nifi pull request #2448: NIFI-4838 Added configurable progressive commits to...

2018-05-24 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r190665372
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ---
@@ -51,14 +53,22 @@
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
+import java.util.Map;
 import java.util.Set;
 
 
 @Tags({ "mongodb", "read", "get" })
 @InputRequirement(Requirement.INPUT_FORBIDDEN)
 @CapabilityDescription("Creates FlowFiles from documents in MongoDB")
+@WritesAttributes( value = {
+@WritesAttribute(attribute = "progress.estimate", description = "The 
estimated total documents that match the query. Written if estimation is 
enabled."),
+@WritesAttribute(attribute = "progress.segment.start", description = 
"Where the first part of the segment is in the total result set. Written if 
estimation is enabled."),
+@WritesAttribute(attribute = "progress.segment.end", description = 
"Where the last part of the segment is in the total result set. Written if 
estimation is enabled."),
+@WritesAttribute(attribute = "progress.index", description = "When 
results are written one-by-one to flowfiles, this is set to indicate 
estimated progress. Written if estimation is enabled.")
--- End diff --

Done.


---


[GitHub] nifi-minifi pull request #128: MINIFI-457 Allowing the usage of both compone...

2018-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi/pull/128


---


[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489385#comment-16489385
 ] 

ASF GitHub Bot commented on NIFI-4838:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r190658948
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/test/java/org/apache/nifi/processors/mongodb/GetMongoIT.java
 ---
@@ -67,20 +69,23 @@
 
 @Before
 public void setup() {
-runner = TestRunners.newTestRunner(GetMongo.class);
+runner = TestRunners.newTestRunner(TestGetMongo.class);
 runner.setVariable("uri", MONGO_URI);
 runner.setVariable("db", DB_NAME);
 runner.setVariable("collection", COLLECTION_NAME);
 runner.setProperty(AbstractMongoProcessor.URI, "${uri}");
 runner.setProperty(AbstractMongoProcessor.DATABASE_NAME, "${db}");
 runner.setProperty(AbstractMongoProcessor.COLLECTION_NAME, 
"${collection}");
-runner.setProperty(GetMongo.USE_PRETTY_PRINTING, GetMongo.YES_PP);
+runner.setProperty(GetMongo.USE_PRETTY_PRINTING, GetMongo.GM_TRUE);
 runner.setIncomingConnection(false);
+runner.setValidateExpressionUsage(true);
--- End diff --

Removed.


> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call because the effect is 
> that GetMongo looks like it has hung to a user who is pulling a very large 
> data set.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231





[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489383#comment-16489383
 ] 

ASF GitHub Bot commented on NIFI-4838:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r190658775
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/AbstractMongoProcessor.java
 ---
@@ -123,6 +124,7 @@
 .displayName("Batch Size")
 .description("The number of elements returned from the server 
in one batch.")
 .required(false)
+.expressionLanguageSupported(true)
--- End diff --

Done.


> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call because the effect is 
> that GetMongo looks like it has hung to a user who is pulling a very large 
> data set.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231





[GitHub] nifi pull request #2448: NIFI-4838 Added configurable progressive commits to...

2018-05-24 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r190658948
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/test/java/org/apache/nifi/processors/mongodb/GetMongoIT.java
 ---
@@ -67,20 +69,23 @@
 
 @Before
 public void setup() {
-runner = TestRunners.newTestRunner(GetMongo.class);
+runner = TestRunners.newTestRunner(TestGetMongo.class);
 runner.setVariable("uri", MONGO_URI);
 runner.setVariable("db", DB_NAME);
 runner.setVariable("collection", COLLECTION_NAME);
 runner.setProperty(AbstractMongoProcessor.URI, "${uri}");
 runner.setProperty(AbstractMongoProcessor.DATABASE_NAME, "${db}");
 runner.setProperty(AbstractMongoProcessor.COLLECTION_NAME, 
"${collection}");
-runner.setProperty(GetMongo.USE_PRETTY_PRINTING, GetMongo.YES_PP);
+runner.setProperty(GetMongo.USE_PRETTY_PRINTING, GetMongo.GM_TRUE);
 runner.setIncomingConnection(false);
+runner.setValidateExpressionUsage(true);
--- End diff --

Removed.


---


[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489382#comment-16489382
 ] 

ASF GitHub Bot commented on NIFI-4838:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r190658748
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/AbstractMongoProcessor.java
 ---
@@ -115,6 +115,7 @@
 .description("How many results to put into a flowfile at once. 
The whole body will be treated as a JSON array of results.")
 .required(false)
 .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.expressionLanguageSupported(true)
--- End diff --

Done.


> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call because the effect is 
> that GetMongo looks like it has hung to a user who is pulling a very large 
> data set.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231





[GitHub] nifi pull request #2448: NIFI-4838 Added configurable progressive commits to...

2018-05-24 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r190658775
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/AbstractMongoProcessor.java
 ---
@@ -123,6 +124,7 @@
 .displayName("Batch Size")
 .description("The number of elements returned from the server 
in one batch.")
 .required(false)
+.expressionLanguageSupported(true)
--- End diff --

Done.


---


[GitHub] nifi pull request #2448: NIFI-4838 Added configurable progressive commits to...

2018-05-24 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r190658748
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/AbstractMongoProcessor.java
 ---
@@ -115,6 +115,7 @@
 .description("How many results to put into a flowfile at once. 
The whole body will be treated as a JSON array of results.")
 .required(false)
 .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.expressionLanguageSupported(true)
--- End diff --

Done.


---


[jira] [Commented] (MINIFICPP-498) Add travis builds targeting newer compilers (>= gcc 6)

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489376#comment-16489376
 ] 

ASF GitHub Bot commented on MINIFICPP-498:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/336


> Add travis builds targeting newer compilers (>= gcc 6)
> --
>
> Key: MINIFICPP-498
> URL: https://issues.apache.org/jira/browse/MINIFICPP-498
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> Some minificpp features are only available for newer compilers (lua/sol2, 
> regex, date manipulation), and we should cover these in CI via updating 
> travis.yml.





[GitHub] nifi-minifi-cpp pull request #336: MINIFICPP-498 Test newer compiler (gcc >=...

2018-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/336


---


[jira] [Commented] (NIFI-5044) SelectHiveQL accept only one statement

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489335#comment-16489335
 ] 

ASF GitHub Bot commented on NIFI-5044:
--

Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2695#discussion_r190653106
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/test/java/org/apache/nifi/processors/hive/TestSelectHiveQL.java
 ---
@@ -198,6 +200,51 @@ public void testWithSqlException() throws SQLException 
{
 runner.assertAllFlowFilesTransferred(SelectHiveQL.REL_FAILURE, 1);
 }
 
+@Test
+public void invokeOnTriggerExceptionInPreQieriesNoIncomingFlows()
+throws InitializationException, ClassNotFoundException, 
SQLException, IOException {
+
+doOnTrigger(QUERY_WITHOUT_EL, false, CSV,
+"select 'no exception' from persons; select exception from 
persons",
+null);
+
+runner.assertAllFlowFilesTransferred(SelectHiveQL.REL_FAILURE, 1);
--- End diff --

> In general the behavior should remain the same whenever possible

Currently, if there is an error in the SQL query, it will go to the "failure" 
relationship (even if there are no incoming connections).

![image](https://user-images.githubusercontent.com/19496093/40499024-ae42cbec-5f4e-11e8-9ff0-348dd6793b2a.png)

![image](https://user-images.githubusercontent.com/19496093/40498559-68c064c2-5f4d-11e8-9a18-88fcb082b18e.png)

So, I am following the current error-handling strategy. The earlier statement just 
wasn't accurate about:

> since we weren't issuing a flow file on failure before

because we do issue a FlowFile on failure (failure to establish a connection is 
handled differently, and is not impacted by this change).

@mattyb149 , Any word on this?


> SelectHiveQL accept only one statement
> --
>
> Key: NIFI-5044
> URL: https://issues.apache.org/jira/browse/NIFI-5044
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0
>Reporter: Davide Isoardi
>Assignee: Ed Berezitsky
>Priority: Critical
>  Labels: features, patch, pull-request-available
> Attachments: 
> 0001-NIFI-5044-SelectHiveQL-accept-only-one-statement.patch
>
>
> In [this 
> commit|https://github.com/apache/nifi/commit/bbc714e73ba245de7bc32fd9958667c847101f7d], 
> it is claimed that support for running multiple statements was added to both 
> SelectHiveQL and PutHiveQL; instead, it adds that support only to PutHiveQL, 
> so SelectHiveQL still lacks this important feature. @Matt Burgess, I saw that 
> you worked on that; is there any reason for this? If not, can we support it?
> If I try to execute this query:
> {quote}set hive.vectorized.execution.enabled = false; SELECT * FROM table_name
> {quote}
> I have this error:
>  
> {quote}2018-04-05 13:35:40,572 ERROR [Timer-Driven Process Thread-146] 
> o.a.nifi.processors.hive.SelectHiveQL 
> SelectHiveQL[id=243d4c17-b1fe-14af--ee8ce15e] Unable to execute 
> HiveQL select query set hive.vectorized.execution.enabled = false; SELECT * 
> FROM table_name for 
> StandardFlowFileRecord[uuid=0e035558-07ce-473b-b0d4-ac00b8b1df93,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1522824912161-2753, 
> container=default, section=705], offset=838441, 
> length=25],offset=0,name=cliente_attributi.csv,size=25] due to 
> org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: 
> The query did not generate a result set!; routing to failure: {}
>  org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: 
> The query did not generate a result set!
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL$2.process(SelectHiveQL.java:305)
>  at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL.onTrigger(SelectHiveQL.java:275)
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL.lambda$onTrigger$0(SelectHiveQL.java:215)
>  at 
> org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
>  at 
> org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:106)
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL.onTrigger(SelectHiveQL.java:215)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
>  at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>  at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  at 
> 

[GitHub] nifi pull request #2695: NIFI-5044 SelectHiveQL accept only one statement

2018-05-24 Thread bdesert
Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2695#discussion_r190653106
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/test/java/org/apache/nifi/processors/hive/TestSelectHiveQL.java
 ---
@@ -198,6 +200,51 @@ public void testWithSqlException() throws SQLException 
{
 runner.assertAllFlowFilesTransferred(SelectHiveQL.REL_FAILURE, 1);
 }
 
+@Test
+public void invokeOnTriggerExceptionInPreQieriesNoIncomingFlows()
+throws InitializationException, ClassNotFoundException, 
SQLException, IOException {
+
+doOnTrigger(QUERY_WITHOUT_EL, false, CSV,
+"select 'no exception' from persons; select exception from 
persons",
+null);
+
+runner.assertAllFlowFilesTransferred(SelectHiveQL.REL_FAILURE, 1);
--- End diff --

> In general the behavior should remain the same whenever possible

Currently, if there is an error in the SQL query, it will go to the "failure" 
relationship (even if there are no incoming connections).

![image](https://user-images.githubusercontent.com/19496093/40499024-ae42cbec-5f4e-11e8-9ff0-348dd6793b2a.png)

![image](https://user-images.githubusercontent.com/19496093/40498559-68c064c2-5f4d-11e8-9a18-88fcb082b18e.png)

So, I am following the current error-handling strategy. The earlier statement just 
wasn't accurate about:

> since we weren't issuing a flow file on failure before

because we do issue a FlowFile on failure (failure to establish a connection is 
handled differently, and is not impacted by this change).

@mattyb149 , Any word on this?


---


[jira] [Updated] (NIFI-5073) JMSConnectionFactory doesn't resolve 'variables' properly

2018-05-24 Thread Sivaprasanna Sethuraman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaprasanna Sethuraman updated NIFI-5073:
--
   Resolution: Fixed
Fix Version/s: 1.7.0
   Status: Resolved  (was: Patch Available)

> JMSConnectionFactory doesn't resolve 'variables' properly
> -
>
> Key: NIFI-5073
> URL: https://issues.apache.org/jira/browse/NIFI-5073
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Matthew Clarke
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
> Fix For: 1.7.0
>
> Attachments: 
> 0001-NIFI-5073-JMSConnectionFactoryProvider-now-resolves-.patch
>
>
> Create a new process Group.
> Add "Variables" to the process group:
> for example:
> broker_uri=tcp://localhost:4141
> client_libs=/NiFi/custom-lib-dir/MQlib
> con_factory=blah
> Then while that process group is selected, create a controller service.
> Create JMSConnectionFactory.
> Configure this controller service to use EL for PG defined variables above:
> ${con_factory}, ${con_factory}, and ${broker_uri}
> The controller service will remain invalid because the EL statements are not 
> properly resolved to their set values.
> Doing the exact same thing above using the external NiFi registry file works 
> as expected.





[jira] [Commented] (NIFI-5073) JMSConnectionFactory doesn't resolve 'variables' properly

2018-05-24 Thread Sivaprasanna Sethuraman (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489286#comment-16489286
 ] 

Sivaprasanna Sethuraman commented on NIFI-5073:
---

Great. I'm changing this to 'Resolved'

> JMSConnectionFactory doesn't resolve 'variables' properly
> -
>
> Key: NIFI-5073
> URL: https://issues.apache.org/jira/browse/NIFI-5073
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Matthew Clarke
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
> Fix For: 1.7.0
>
> Attachments: 
> 0001-NIFI-5073-JMSConnectionFactoryProvider-now-resolves-.patch
>
>
> Create a new process Group.
> Add "Variables" to the process group:
> for example:
> broker_uri=tcp://localhost:4141
> client_libs=/NiFi/custom-lib-dir/MQlib
> con_factory=blah
> Then while that process group is selected, create a controller service.
> Create JMSConnectionFactory.
> Configure this controller service to use EL for PG defined variables above:
> ${con_factory}, ${con_factory}, and ${broker_uri}
> The controller service will remain invalid because the EL statements are not 
> properly resolved to their set values.
> Doing the exact same thing above using the external NiFi registry file works 
> as expected.





[jira] [Resolved] (NIFIREG-160) Implement a hook provider

2018-05-24 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran resolved NIFIREG-160.
-
Resolution: Fixed

> Implement a hook provider
> -
>
> Key: NIFIREG-160
> URL: https://issues.apache.org/jira/browse/NIFIREG-160
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Pierre Villard
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 0.2.0
>
>
> In order to extend NiFi Registry and NiFi CLI features to integrate with 
> automated deployment pipelines, it would be useful to provide a hook 
> extension point that can be configured by users to trigger actions when a new 
> flow snapshot version is committed in the Registry.
> A first implementation of this extension point could be a "script hook": a 
> script would be executed when a new flow snapshot version is committed. 
> Arguments passed to the script would be: bucket ID, flow ID, version, author 
> and comment.
> This would enable a lot of scenarios, including automatically deploying flows 
> from one environment to another.
>  





[jira] [Commented] (NIFI-5044) SelectHiveQL accept only one statement

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489255#comment-16489255
 ] 

ASF GitHub Bot commented on NIFI-5044:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2695#discussion_r190633734
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -113,6 +126,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
 .build();
 
+public static final PropertyDescriptor HIVEQL_POST_QUERY = new 
PropertyDescriptor.Builder()
+.name("hive-post-query")
+.displayName("HiveQL Post-Query")
+.description("HiveQL post-query to execute. 
Semicolon-delimited list of queries. "
++ "Example: 'set tez.queue.name=default; set 
hive.exec.orc.split.strategy=HYBRID; set 
hive.exec.reducers.bytes.per.reducer=258998272'. "
++ "Note, the results/outputs of these queries will be 
suppressed if successfully executed.")
+.required(false)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
--- End diff --

Yeah, tbh, I'm really not sure it's worth the effort. I'm fine with current 
approach.


> SelectHiveQL accept only one statement
> --
>
> Key: NIFI-5044
> URL: https://issues.apache.org/jira/browse/NIFI-5044
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0
>Reporter: Davide Isoardi
>Assignee: Ed Berezitsky
>Priority: Critical
>  Labels: features, patch, pull-request-available
> Attachments: 
> 0001-NIFI-5044-SelectHiveQL-accept-only-one-statement.patch
>
>
> In [this 
> commit|https://github.com/apache/nifi/commit/bbc714e73ba245de7bc32fd9958667c847101f7d], 
> it is claimed that support for running multiple statements was added to both 
> SelectHiveQL and PutHiveQL; instead, it adds that support only to PutHiveQL, 
> so SelectHiveQL still lacks this important feature. @Matt Burgess, I saw that 
> you worked on that; is there any reason for this? If not, can we support it?
> If I try to execute this query:
> {quote}set hive.vectorized.execution.enabled = false; SELECT * FROM table_name
> {quote}
> I have this error:
>  
> {quote}2018-04-05 13:35:40,572 ERROR [Timer-Driven Process Thread-146] 
> o.a.nifi.processors.hive.SelectHiveQL 
> SelectHiveQL[id=243d4c17-b1fe-14af--ee8ce15e] Unable to execute 
> HiveQL select query set hive.vectorized.execution.enabled = false; SELECT * 
> FROM table_name for 
> StandardFlowFileRecord[uuid=0e035558-07ce-473b-b0d4-ac00b8b1df93,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1522824912161-2753, 
> container=default, section=705], offset=838441, 
> length=25],offset=0,name=cliente_attributi.csv,size=25] due to 
> org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: 
> The query did not generate a result set!; routing to failure: {}
>  org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: 
> The query did not generate a result set!
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL$2.process(SelectHiveQL.java:305)
>  at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL.onTrigger(SelectHiveQL.java:275)
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL.lambda$onTrigger$0(SelectHiveQL.java:215)
>  at 
> org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
>  at 
> org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:106)
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL.onTrigger(SelectHiveQL.java:215)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
>  at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>  at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> 

[jira] [Commented] (NIFI-5044) SelectHiveQL accept only one statement

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489254#comment-16489254
 ] 

ASF GitHub Bot commented on NIFI-5044:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2695#discussion_r190637825
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/test/java/org/apache/nifi/processors/hive/TestSelectHiveQL.java
 ---
@@ -198,6 +200,51 @@ public void testWithSqlException() throws SQLException 
{
 runner.assertAllFlowFilesTransferred(SelectHiveQL.REL_FAILURE, 1);
 }
 
+@Test
+public void invokeOnTriggerExceptionInPreQieriesNoIncomingFlows()
+throws InitializationException, ClassNotFoundException, 
SQLException, IOException {
+
+doOnTrigger(QUERY_WITHOUT_EL, false, CSV,
+"select 'no exception' from persons; select exception from 
persons",
+null);
+
+runner.assertAllFlowFilesTransferred(SelectHiveQL.REL_FAILURE, 1);
--- End diff --

Based on the comment from @mattyb149 in the JIRA:
> In general the behavior should remain the same whenever possible, so if 
you have a SelectHiveQL that doesn't have incoming (non-loop) connections, then 
the query must be supplied, and whether it (or the pre-post queries) have EL in 
them, then since we weren't issuing a flow file on failure before, we shouldn't 
now either. So when I said "Route to Failure with penalization for everything 
else", that's only when there is a flow file to route. If there isn't, then we 
should yield (and remove any created flow files and/or rollback our session 
anyway).

Not sure I understand the assertion here, since there is no incoming flow 
file, right?


> SelectHiveQL accept only one statement
> --
>
> Key: NIFI-5044
> URL: https://issues.apache.org/jira/browse/NIFI-5044
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0
>Reporter: Davide Isoardi
>Assignee: Ed Berezitsky
>Priority: Critical
>  Labels: features, patch, pull-request-available
> Attachments: 
> 0001-NIFI-5044-SelectHiveQL-accept-only-one-statement.patch
>
>
> In [this 
> commit|https://github.com/apache/nifi/commit/bbc714e73ba245de7bc32fd9958667c847101f7d], 
> the change claims to add support for running multiple statements to both 
> SelectHiveQL and PutHiveQL; instead, it adds the support only to PutHiveQL, 
> so SelectHiveQL still lacks this important feature. @Matt Burgess, I saw that 
> you worked on that, is there any reason for this? If not, can we support it?

[jira] [Commented] (NIFI-5044) SelectHiveQL accept only one statement

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489253#comment-16489253
 ] 

ASF GitHub Bot commented on NIFI-5044:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2695#discussion_r190633475
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -113,6 +126,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
 .build();
 
+public static final PropertyDescriptor HIVEQL_POST_QUERY = new 
PropertyDescriptor.Builder()
+.name("hive-post-query")
+.displayName("HiveQL Post-Query")
+.description("HiveQL post-query to execute. 
Semicolon-delimited list of queries. "
++ "Example: 'set tez.queue.name=default; set 
hive.exec.orc.split.strategy=HYBRID; set 
hive.exec.reducers.bytes.per.reducer=258998272'. "
--- End diff --

don't have anything in mind :) maybe just removing the examples to be sure 
it's not misleading somehow?



[jira] [Commented] (NIFI-5044) SelectHiveQL accept only one statement

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489256#comment-16489256
 ] 

ASF GitHub Bot commented on NIFI-5044:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2695#discussion_r190637767
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/test/java/org/apache/nifi/processors/hive/TestSelectHiveQL.java
 ---
@@ -198,6 +200,51 @@ public void testWithSqlException() throws SQLException 
{
 runner.assertAllFlowFilesTransferred(SelectHiveQL.REL_FAILURE, 1);
 }
 
+@Test
+public void invokeOnTriggerExceptionInPreQieriesNoIncomingFlows()
+throws InitializationException, ClassNotFoundException, 
SQLException, IOException {
+
+doOnTrigger(QUERY_WITHOUT_EL, false, CSV,
+"select 'no exception' from persons; select exception from 
persons",
+null);
+
+runner.assertAllFlowFilesTransferred(SelectHiveQL.REL_FAILURE, 1);
+}
+
+@Test
+public void invokeOnTriggerExceptionInPreQieriesWithIncomingFlows()
+throws InitializationException, ClassNotFoundException, 
SQLException, IOException {
+
+doOnTrigger(QUERY_WITHOUT_EL, true, CSV,
+"select 'no exception' from persons; select exception from 
persons",
+null);
+
+runner.assertAllFlowFilesTransferred(SelectHiveQL.REL_FAILURE, 1);
+}
+
+@Test
+public void invokeOnTriggerExceptionInPostQieriesNoIncomingFlows()
+throws InitializationException, ClassNotFoundException, 
SQLException, IOException {
+
+doOnTrigger(QUERY_WITHOUT_EL, false, CSV,
+null,
+"select 'no exception' from persons; select exception from 
persons");
+
+runner.assertAllFlowFilesTransferred(SelectHiveQL.REL_FAILURE, 1);
--- End diff --

Same here




[GitHub] nifi pull request #2695: NIFI-5044 SelectHiveQL accept only one statement

2018-05-24 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2695#discussion_r190633734
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -113,6 +126,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
 .build();
 
+public static final PropertyDescriptor HIVEQL_POST_QUERY = new 
PropertyDescriptor.Builder()
+.name("hive-post-query")
+.displayName("HiveQL Post-Query")
+.description("HiveQL post-query to execute. 
Semicolon-delimited list of queries. "
++ "Example: 'set tez.queue.name=default; set 
hive.exec.orc.split.strategy=HYBRID; set 
hive.exec.reducers.bytes.per.reducer=258998272'. "
++ "Note, the results/outputs of these queries will be 
suppressed if successful executed.")
+.required(false)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
--- End diff --

Yeah, tbh, I'm really not sure it's worth the effort. I'm fine with current 
approach.


---
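
The semicolon-delimited pre/post-query handling being reviewed in this PR can 
be sketched as follows. This is a simplified illustration of the technique, not 
the actual NiFi code; the class name and the naive `split(";")` are assumptions 
(a real implementation must also cope with semicolons inside quoted literals). 
Only the final statement of a combined string would be treated as the 
result-producing query; the others are side-effect statements (e.g. `set ...`) 
whose output is suppressed.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of splitting a semicolon-delimited HiveQL property value
// (e.g. the proposed HiveQL Pre/Post-Query properties) into statements.
// NOTE: a naive split breaks on semicolons inside quoted literals; this is
// an illustration only, not the NiFi implementation.
public class HiveQlSplitter {

    // Split into trimmed, non-empty statements, preserving order.
    public static List<String> split(String sql) {
        List<String> statements = new ArrayList<>();
        if (sql == null) {
            return statements;
        }
        for (String part : sql.split(";")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) {
                statements.add(trimmed);
            }
        }
        return statements;
    }

    public static void main(String[] args) {
        // prints: [set tez.queue.name=default, set hive.exec.orc.split.strategy=HYBRID]
        System.out.println(split(
                "set tez.queue.name=default; set hive.exec.orc.split.strategy=HYBRID;"));
    }
}
```

Each pre/post statement would then be run with its result, if any, discarded, 
which is why the property description says outputs are suppressed on success.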






[jira] [Commented] (NIFIREG-160) Implement a hook provider

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489248#comment-16489248
 ] 

ASF GitHub Bot commented on NIFIREG-160:


Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/118


> Implement a hook provider
> -
>
> Key: NIFIREG-160
> URL: https://issues.apache.org/jira/browse/NIFIREG-160
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Pierre Villard
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 0.2.0
>
>
> In order to extend NiFi Registry and NiFi CLI features to integrate with 
> automated deployment pipelines, it would be useful to provide a hook 
> extension point that can be configured by users to trigger actions when a new 
> flow snapshot version is committed in the Registry.
> A first implementation of this extension point could be a "script hook": a 
> script would be executed when a new flow snapshot version is committed. 
> Arguments passed to the script would be: bucket ID, flow ID, version, author 
> and comment.
> This would enable a lot of scenarios, including automatically deploying flows 
> from one environment to another.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
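
The hook described in NIFIREG-160 could look roughly like this on the 
receiving end. The argument order (bucket ID, flow ID, version, author, 
comment) follows the JIRA description; the class and method names here are 
hypothetical, and a real hook would hand the values off to a deployment 
pipeline rather than just format a summary.

```java
// Hypothetical consumer for the NIFIREG-160 "script hook": the registry
// would invoke it with bucket ID, flow ID, version, author, and comment
// whenever a new flow snapshot version is committed.
public class FlowVersionHook {

    // Validate the expected five arguments and build a human-readable summary.
    public static String describe(String[] args) {
        if (args.length < 5) {
            throw new IllegalArgumentException(
                    "expected: <bucketId> <flowId> <version> <author> <comment>");
        }
        // In a real hook this is where a CI/CD trigger would be fired.
        return String.format("flow %s/%s v%s committed by %s: %s",
                args[0], args[1], args[2], args[3], args[4]);
    }

    public static void main(String[] args) {
        System.out.println(describe(args));
    }
}
```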



