[jira] [Commented] (NIFI-4444) Upgrade Jersey Versions

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201304#comment-16201304
 ] 

ASF GitHub Bot commented on NIFI-4444:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2206
  
I had some trouble getting the cluster configured successfully due to 
Zookeeper port conflicts (will be filing a Jira to improve the error messaging 
there). I got it working and the flow operates successfully on both branches. 

I can't get the toolkit to work, however. Both `notify.sh` and 
`node-manager.sh` are reporting errors parsing the `-b` option for the existing 
`bootstrap.conf` file, so I will keep playing with that. 

Meanwhile, the unit tests and contrib-check all pass on the latest commit. 


> Upgrade Jersey Versions
> ---
>
> Key: NIFI-4444
> URL: https://issues.apache.org/jira/browse/NIFI-4444
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Attachments: NIFI-4444.xml
>
>
> Need to upgrade to a newer version of Jersey. The primary motivation is to 
> upgrade the version used within NiFi itself. However, there are a number of 
> extensions that also leverage it. Of those extensions, some utilize the older 
> version defined in dependencyManagement while others override explicitly 
> within their own bundle dependencyManagement. For this JIRA I propose 
> removing the Jersey artifacts from the root pom and allowing the version to 
> be specified on a bundle-by-bundle basis.
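As a sketch of that proposal (the property name and the chosen artifact are illustrative assumptions, not taken from the actual patch), each bundle would pin its own Jersey version in its own pom once the root pom no longer declares one:

```xml
<!-- In a single bundle's pom.xml; the root pom no longer manages Jersey. -->
<!-- The jersey.version property name is an assumed convention. -->
<properties>
    <jersey.version>2.26</jersey.version>
</properties>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.glassfish.jersey.core</groupId>
            <artifactId>jersey-client</artifactId>
            <version>${jersey.version}</version>
        </dependency>
    </dependencies>
</dependencyManagement>
```

A bundle that still needs Jersey 1.x would instead declare the old com.sun.jersey artifacts at its own version, which is exactly the per-bundle flexibility the proposal is after.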



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2206: NIFI-4444: Upgrade to Jersey 2.x


[jira] [Commented] (NIFI-4457) "Maximum-value" not increasing when "initial.maxvalue" is set and "Maximum-value column" name is different from "id"

2017-10-11 Thread Koji Kawamura (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201271#comment-16201271
 ] 

Koji Kawamura commented on NIFI-4457:
-

[~mehrdad22] Would you be able to share the following with us, to nail down 
the cause of this issue:

# To understand and confirm the situation, before starting QueryDatabaseTable:
## Capture the QueryDatabaseTable config and state (as you did before with the 
attached screenshot)
## Execute a SQL query directly against the source table using the 
'initial.maxvalue.' value specified at QueryDatabaseTable, for example: 
{code}SELECT * FROM tweets_json WHERE tweet_id > 11{code} This 
query confirms there are new records that should be fetched by 
QueryDatabaseTable. Please share the query and its result. (The example query 
should return rows having a greater tweet_id, such as 12)
# After starting QueryDatabaseTable, please confirm the following condition:
## The QueryDatabaseTable state has NOT been updated, as reported

Having a reproducible dataset and operations is really helpful for us to fix 
the issue.
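For reference, QueryDatabaseTable's dynamic initial-value property key must match the configured Maximum-value column name; a sketch of the expected configuration, reusing the table and value from the example above:

```
# QueryDatabaseTable configuration (sketch)
Maximum-value Columns     : tweet_id
initial.maxvalue.tweet_id : 11        # dynamic property keyed by column name
```

If the dynamic property is left as initial.maxvalue.id while the tracked column is tweet_id, the initial value is never associated with the column whose maximum is being maintained.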

> "Maximum-value" not increasing when "initial.maxvalue" is set and 
> "Maximum-value column" name is different from "id" 
> -
>
> Key: NIFI-4457
> URL: https://issues.apache.org/jira/browse/NIFI-4457
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0
> Environment: windows 10
>Reporter: meh
> Attachments: Picture1.png, Picture2.png, subquery.csv
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> when "Maximum-value column" name is "id" there is no problem, when i add 
> "initial.maxvalue.id" property in "QueryDatabaseTable" processor, it works 
> well and maxvalue is increasing by every running.
> !Picture1.png|thumbnail!
> but...
> when the "Maximum-value column" name is different from "id" (such as 
> "tweet_id"), after initial processor working, only given 
> "initial.maxvalue.id" is saves and that repeating just same value for every 
> run.
> !Picture2.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201268#comment-16201268
 ] 

ASF GitHub Bot commented on NIFI-4428:
--

Github user vakshorton commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r144169649
  
--- Diff: 
nifi-nar-bundles/nifi-druid-bundle/nifi-druid-processors/src/main/java/org/apache/nifi/processors/PutDruid.java
 ---
@@ -0,0 +1,206 @@
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.stream.io.StreamUtils;
+
+import org.codehaus.jackson.JsonParseException;
+import org.codehaus.jackson.map.JsonMappingException;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import org.apache.nifi.controller.api.DruidTranquilityService;
+import com.metamx.tranquility.tranquilizer.MessageDroppedException;
+import com.metamx.tranquility.tranquilizer.Tranquilizer;
+import com.twitter.util.Await;
+import com.twitter.util.Future;
+import com.twitter.util.FutureEventListener;
+
+import scala.runtime.BoxedUnit;
+
+@SideEffectFree
+@Tags({"Druid","Timeseries","OLAP","ingest"})
+@CapabilityDescription("Sends events to Apache Druid for Indexing. "
+        + "Leverages Druid Tranquility Controller service. "
+        + "Incoming flow files are expected to contain 1 or many JSON objects, one JSON object per line")
+public class PutDruid extends AbstractSessionFactoryProcessor {
+
+    private List<PropertyDescriptor> properties;
+    private Set<Relationship> relationships;
+    private final Map messageStatus = new HashMap();
+
+    public static final PropertyDescriptor DRUID_TRANQUILITY_SERVICE = new PropertyDescriptor.Builder()
+            .name("druid_tranquility_service")
+            .description("Tranquility Service to use for sending events to Druid")
+            .required(true)
+            .identifiesControllerService(DruidTranquilityService.class)
+            .build();
+
+    public static final Relationship REL_SUCCESS = new Relationship.Builder()
+            .name("SUCCESS")
+            .description("Success relationship")
+            .build();
+
+    public static final Relationship REL_FAIL = new Relationship.Builder()
+            .name("FAIL")
+            .description("FlowFiles are routed to this relationship when they cannot be parsed")
+            .build();
+
+    public static final Relationship REL_DROPPED = new Relationship.Builder()
+            .name("DROPPED")
+            .description("FlowFiles are routed to this relationship when they are outside of the configured time window, the timestamp format is invalid, etc.")
+            .build();
+
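The three relationships above imply a routing decision per event; a minimal self-contained sketch of that decision follows, with plain-Java stand-ins (the class, enum, and exception names here are invented for illustration and are not the PR's actual code, which relies on Jackson's JsonParseException and Tranquility's MessageDroppedException):

```java
// Hypothetical stand-alone model of PutDruid's per-event routing.
public class DruidRoutingSketch {
    enum Relationship { SUCCESS, FAIL, DROPPED }

    // Stand-ins for the real parse/drop exception types.
    static class ParseFailure extends Exception {}
    static class MessageDropped extends Exception {}

    // Route one event's outcome to a relationship, as the processor would:
    // no exception means Druid indexed the event.
    static Relationship route(Exception outcome) {
        if (outcome == null) {
            return Relationship.SUCCESS;   // event accepted for indexing
        } else if (outcome instanceof ParseFailure) {
            return Relationship.FAIL;      // content could not be parsed
        } else if (outcome instanceof MessageDropped) {
            return Relationship.DROPPED;   // outside time window, bad timestamp, etc.
        }
        throw new IllegalStateException("unexpected outcome: " + outcome);
    }

    public static void main(String[] args) {
        System.out.println(route(null));                 // SUCCESS
        System.out.println(route(new ParseFailure()));   // FAIL
        System.out.println(route(new MessageDropped())); // DROPPED
    }
}
```

Modeling the decision this way makes each terminal state of the Tranquility send visible, which is what the three relationship descriptors are encoding.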

[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201260#comment-16201260
 ] 

ASF GitHub Bot commented on NIFI-4428:
--

Github user vakshorton commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r144168218
  

[GitHub] nifi pull request #2181: NIFI-4428: - Implement PutDruid Processor and Contr...

2017-10-11 Thread vakshorton
Github user vakshorton commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r144166799
  


[jira] [Updated] (NIFI-4480) Implement datadog and ambari reporting tasks as services for MetricReportingTask

2017-10-11 Thread Omer Hadari (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omer Hadari updated NIFI-4480:
--
Description: In [NIFI-4392|https://issues.apache.org/jira/browse/NIFI-4392] 
a MetricReportingTask was created, that uses different implementations of a 
controller service for reporting the same metrics to different clients. The 
existing ambari reporting task and datadog reporting task can be implemented in 
the same manner - in order to keep things uniform and avoid duplication.  (was: 
In [NIFI-4392|https://issues.apache.org/jira/browse/NIFI-4392] a 
`MetricReportingTask` was created, that uses different implementations of a 
controller service for reporting the same metrics to different clients. The 
existing ambari reporting task and datadog reporting task can be implemented in 
the same manner - in order to keep things uniform and avoid duplication.)
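The refactor described above is a delegation pattern: one reporting task talks only to a controller-service-style interface, and each destination becomes an implementation. A minimal sketch follows; every name in it (MetricReporter, ConsoleReporter, runTask) is invented for illustration and is not the actual NIFI-4392 API.

```java
import java.util.Map;

// Hypothetical illustration of the pattern: one reporting task, many
// pluggable reporter implementations behind a shared interface.
public class ReportingSketch {

    // Stand-in for the controller service interface from NIFI-4392.
    interface MetricReporter {
        int report(Map<String, Long> metrics); // returns number of metrics sent
    }

    // Each destination (Ambari, Datadog, ...) would be one implementation.
    static class ConsoleReporter implements MetricReporter {
        public int report(Map<String, Long> metrics) {
            metrics.forEach((name, value) -> System.out.println(name + "=" + value));
            return metrics.size();
        }
    }

    // The single reporting task depends only on the interface, so adding a
    // new destination never duplicates the metric-gathering logic.
    static int runTask(MetricReporter reporter, Map<String, Long> metrics) {
        return reporter.report(metrics);
    }

    public static void main(String[] args) {
        runTask(new ConsoleReporter(), Map.of("flowFilesReceived", 10L));
    }
}
```

Porting the existing Ambari and Datadog tasks onto such an interface is what would keep the three reporting paths uniform.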

> Implement datadog and ambari reporting tasks as services for 
> MetricReportingTask
> 
>
> Key: NIFI-4480
> URL: https://issues.apache.org/jira/browse/NIFI-4480
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Omer Hadari
>Assignee: Omer Hadari
>Priority: Minor
>  Labels: refactor
>
> In [NIFI-4392|https://issues.apache.org/jira/browse/NIFI-4392] a 
> MetricReportingTask was created that uses different implementations of a 
> controller service for reporting the same metrics to different clients. The 
> existing Ambari reporting task and Datadog reporting task can be implemented 
> in the same manner, in order to keep things uniform and avoid duplication.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2181: NIFI-4428: - Implement PutDruid Processor and Contr...

2017-10-11 Thread vakshorton
Github user vakshorton commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r144165948
  
--- Diff: 
nifi-nar-bundles/nifi-druid-bundle/nifi-druid-controller-service/src/main/java/org/apache/nifi/controller/DruidTranquilityController.java
 ---
@@ -0,0 +1,416 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.joda.time.DateTime;
+import org.joda.time.Period;
+
+import com.metamx.common.Granularity;
+import com.metamx.tranquility.beam.Beam;
+import com.metamx.tranquility.beam.ClusteredBeamTuning;
+import com.metamx.tranquility.druid.DruidBeamConfig;
+import com.metamx.tranquility.druid.DruidBeams;
+import com.metamx.tranquility.druid.DruidDimensions;
+import com.metamx.tranquility.druid.DruidEnvironment;
+import com.metamx.tranquility.druid.DruidLocation;
+import com.metamx.tranquility.druid.DruidRollup;
+import com.metamx.tranquility.tranquilizer.Tranquilizer;
+import com.metamx.tranquility.typeclass.Timestamper;
+
+import io.druid.data.input.impl.TimestampSpec;
+import io.druid.granularity.QueryGranularity;
+import io.druid.query.aggregation.AggregatorFactory;
+import io.druid.query.aggregation.CountAggregatorFactory;
+import io.druid.query.aggregation.DoubleMaxAggregatorFactory;
+import io.druid.query.aggregation.DoubleMinAggregatorFactory;
+import io.druid.query.aggregation.DoubleSumAggregatorFactory;
+import io.druid.query.aggregation.LongMaxAggregatorFactory;
+import io.druid.query.aggregation.LongMinAggregatorFactory;
+import io.druid.query.aggregation.LongSumAggregatorFactory;
+
+@Tags({"Druid", "Timeseries", "OLAP", "ingest"})
+@CapabilityDescription("Asynchronously sends FlowFiles to a Druid indexing task using the Tranquility API. "
+        + "If aggregation and roll-up of data is required, an Aggregator JSON descriptor needs to be provided. "
+        + "Details on how to describe aggregations using JSON can be found at: http://druid.io/docs/latest/querying/aggregations.html")
+public class DruidTranquilityController extends AbstractControllerService implements org.apache.nifi.controller.api.DruidTranquilityService {
+    private String firehosePattern = "druid:firehose:%s";
+    private int clusterPartitions = 1;
+    private int clusterReplication = 1;
+    private String indexRetryPeriod = "PT10M";
+
+    private Tranquilizer tranquilizer = null;
+
+    public static final PropertyDescriptor DATASOURCE = new PropertyDescriptor.Builder()
+            .name("data_source")
+            .description("Druid Data Source")
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .required(true)
+            .build();
+
+    public static final PropertyDescriptor CONNECT_STRING = new PropertyDescriptor.Builder()
+            .name("zk_connect_string")
+            .description("ZK Connect String for Druid")
+            .required(true)
+
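The `@CapabilityDescription` in the diff above points readers at Druid's aggregation documentation. For context in this thread, the Aggregator JSON descriptor it mentions is a JSON array of Druid aggregator specs; a small illustrative example follows (the `name`/`fieldName` values are hypothetical, but the `type` values correspond to the aggregator factories imported in the diff, e.g. `CountAggregatorFactory`, `DoubleSumAggregatorFactory`, `LongMaxAggregatorFactory`):

```json
[
  { "type": "count",     "name": "event_count" },
  { "type": "doubleSum", "name": "value_sum", "fieldName": "value" },
  { "type": "longMax",   "name": "ts_max",    "fieldName": "timestamp" }
]
```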

[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201238#comment-16201238
 ] 

ASF GitHub Bot commented on NIFI-4428:
--

Github user vakshorton commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r144165848
  
--- Diff: 
nifi-nar-bundles/nifi-druid-bundle/nifi-druid-controller-service/src/main/java/org/apache/nifi/controller/DruidTranquilityController.java
 ---
@@ -0,0 +1,416 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.joda.time.DateTime;
+import org.joda.time.Period;
+
+import com.metamx.common.Granularity;
+import com.metamx.tranquility.beam.Beam;
+import com.metamx.tranquility.beam.ClusteredBeamTuning;
+import com.metamx.tranquility.druid.DruidBeamConfig;
+import com.metamx.tranquility.druid.DruidBeams;
+import com.metamx.tranquility.druid.DruidDimensions;
+import com.metamx.tranquility.druid.DruidEnvironment;
+import com.metamx.tranquility.druid.DruidLocation;
+import com.metamx.tranquility.druid.DruidRollup;
+import com.metamx.tranquility.tranquilizer.Tranquilizer;
+import com.metamx.tranquility.typeclass.Timestamper;
+
+import io.druid.data.input.impl.TimestampSpec;
+import io.druid.granularity.QueryGranularity;
+import io.druid.query.aggregation.AggregatorFactory;
+import io.druid.query.aggregation.CountAggregatorFactory;
+import io.druid.query.aggregation.DoubleMaxAggregatorFactory;
+import io.druid.query.aggregation.DoubleMinAggregatorFactory;
+import io.druid.query.aggregation.DoubleSumAggregatorFactory;
+import io.druid.query.aggregation.LongMaxAggregatorFactory;
+import io.druid.query.aggregation.LongMinAggregatorFactory;
+import io.druid.query.aggregation.LongSumAggregatorFactory;
+
+@Tags({"Druid", "Timeseries", "OLAP", "ingest"})
+@CapabilityDescription("Asynchronously sends FlowFiles to a Druid indexing task using the Tranquility API. "
+        + "If aggregation and roll-up of data is required, an Aggregator JSON descriptor needs to be provided. "
+        + "Details on how to describe aggregations using JSON can be found at: http://druid.io/docs/latest/querying/aggregations.html")
+public class DruidTranquilityController extends AbstractControllerService implements org.apache.nifi.controller.api.DruidTranquilityService {
+    private String firehosePattern = "druid:firehose:%s";
+    private int clusterPartitions = 1;
+    private int clusterReplication = 1;
+    private String indexRetryPeriod = "PT10M";
+
+    private Tranquilizer tranquilizer = null;
+
+    public static final PropertyDescriptor DATASOURCE = new PropertyDescriptor.Builder()
+            .name("data_source")
+            .description("Druid Data Source")
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .required(true)
+            .build();
+
+    public static final 



[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201239#comment-16201239
 ] 

ASF GitHub Bot commented on NIFI-4428:
--

Github user vakshorton commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r144165881
  
--- Diff: 
nifi-nar-bundles/nifi-druid-bundle/nifi-druid-controller-service/src/main/java/org/apache/nifi/controller/DruidTranquilityController.java
 ---
@@ -0,0 +1,416 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.joda.time.DateTime;
+import org.joda.time.Period;
+
+import com.metamx.common.Granularity;
+import com.metamx.tranquility.beam.Beam;
+import com.metamx.tranquility.beam.ClusteredBeamTuning;
+import com.metamx.tranquility.druid.DruidBeamConfig;
+import com.metamx.tranquility.druid.DruidBeams;
+import com.metamx.tranquility.druid.DruidDimensions;
+import com.metamx.tranquility.druid.DruidEnvironment;
+import com.metamx.tranquility.druid.DruidLocation;
+import com.metamx.tranquility.druid.DruidRollup;
+import com.metamx.tranquility.tranquilizer.Tranquilizer;
+import com.metamx.tranquility.typeclass.Timestamper;
+
+import io.druid.data.input.impl.TimestampSpec;
+import io.druid.granularity.QueryGranularity;
+import io.druid.query.aggregation.AggregatorFactory;
+import io.druid.query.aggregation.CountAggregatorFactory;
+import io.druid.query.aggregation.DoubleMaxAggregatorFactory;
+import io.druid.query.aggregation.DoubleMinAggregatorFactory;
+import io.druid.query.aggregation.DoubleSumAggregatorFactory;
+import io.druid.query.aggregation.LongMaxAggregatorFactory;
+import io.druid.query.aggregation.LongMinAggregatorFactory;
+import io.druid.query.aggregation.LongSumAggregatorFactory;
+
+@Tags({"Druid", "Timeseries", "OLAP", "ingest"})
+@CapabilityDescription("Asynchronously sends FlowFiles to a Druid indexing task using the Tranquility API. "
+        + "If aggregation and roll-up of data is required, an Aggregator JSON descriptor needs to be provided. "
+        + "Details on how to describe aggregations using JSON can be found at: http://druid.io/docs/latest/querying/aggregations.html")
+public class DruidTranquilityController extends AbstractControllerService implements org.apache.nifi.controller.api.DruidTranquilityService {
+    private String firehosePattern = "druid:firehose:%s";
+    private int clusterPartitions = 1;
+    private int clusterReplication = 1;
+    private String indexRetryPeriod = "PT10M";
+
+    private Tranquilizer tranquilizer = null;
+
+    public static final PropertyDescriptor DATASOURCE = new PropertyDescriptor.Builder()
+            .name("data_source")
+            .description("Druid Data Source")
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .required(true)
+            .build();
+
+    public static final 


[GitHub] nifi pull request #2181: NIFI-4428: - Implement PutDruid Processor and Contr...

2017-10-11 Thread vakshorton
Github user vakshorton commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r144165757
  
--- Diff: 
nifi-nar-bundles/nifi-druid-bundle/nifi-druid-controller-service-api-nar/pom.xml
 ---
@@ -0,0 +1,37 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.nifi</groupId>
+    <artifactId>nifi-druid-bundle</artifactId>
+    <version>1.0-SNAPSHOT</version>
+  </parent>
+
+  <artifactId>nifi-druid-controller-service-api-nar</artifactId>
+  <packaging>nar</packaging>
+
+  <properties>
+    1.1.1
--- End diff --

The latest NiFi binaries available in the public Maven repo are 1.3.0. 
Should I keep 1.5.0-SNAPSHOT even though it will show up as not found when the 
IDE does its checks?


---


[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201232#comment-16201232
 ] 

ASF GitHub Bot commented on NIFI-4428:
--

Github user vakshorton commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r144165557
  
--- Diff: nifi-nar-bundles/nifi-druid-bundle/nifi-druid-bundle-nar/pom.xml 
---
@@ -0,0 +1,39 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.nifi</groupId>
+    <artifactId>nifi-druid-bundle</artifactId>
+    <version>1.0-SNAPSHOT</version>
--- End diff --

I changed all of the bundle references to 1.5.0-SNAPSHOT. You will see it 
when I push the update.


> Implement PutDruid Processor and Controller
> ---
>
> Key: NIFI-4428
> URL: https://issues.apache.org/jira/browse/NIFI-4428
> Project: Apache NiFi
>  Issue Type: New Feature
>Affects Versions: 1.3.0
>Reporter: Vadim Vaks
>
> Implement a PutDruid Processor and Controller using the Tranquility API. This 
> will enable NiFi to index the contents of flow files in Druid. The implementation 
> should also be able to handle late arriving data (the event timestamp points to a 
> Druid indexing task that has closed, i.e. the segment granularity and grace window 
> period expired). Late arriving data is typically dropped. NiFi should allow 
> late arriving data to be diverted to the FAILED or DROPPED relationship. That 
> would allow late arriving data to be stored on HDFS or S3 until a re-indexing 
> task can merge it into the correct segment in deep storage.
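The routing policy described in this issue (unparseable records to FAILED, records outside the open indexing window to DROPPED) can be sketched as a small stand-alone decision function. This is an illustrative model only, under the assumption of a fixed window period; the class, enum, and `route` method are hypothetical and not the actual PutDruid API:

```java
import java.time.Duration;
import java.time.Instant;

public class LateDataRouter {
    public enum Relationship { SUCCESS, FAIL, DROPPED }

    private final Duration windowPeriod;

    public LateDataRouter(Duration windowPeriod) {
        this.windowPeriod = windowPeriod;
    }

    // Unparseable records go to FAIL; records whose event time falls before
    // the open indexing window go to DROPPED; everything else is SUCCESS.
    public Relationship route(Instant eventTime, Instant now, boolean parsed) {
        if (!parsed) {
            return Relationship.FAIL;
        }
        if (eventTime.isBefore(now.minus(windowPeriod))) {
            return Relationship.DROPPED;
        }
        return Relationship.SUCCESS;
    }

    public static void main(String[] args) {
        LateDataRouter router = new LateDataRouter(Duration.ofMinutes(10));
        Instant now = Instant.parse("2017-10-11T12:00:00Z");
        System.out.println(router.route(Instant.parse("2017-10-11T11:59:00Z"), now, true)); // SUCCESS
        System.out.println(router.route(Instant.parse("2017-10-11T11:00:00Z"), now, true)); // DROPPED
        System.out.println(router.route(now, now, false));                                  // FAIL
    }
}
```

Keeping a dedicated DROPPED relationship, rather than folding late arrivals into FAILED, is what lets a downstream flow park late data on HDFS or S3 for later re-indexing, as the issue proposes.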



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Created] (NIFI-4480) Implement datadog and ambari reporting tasks as services for MetricReportingTask

2017-10-11 Thread Omer Hadari (JIRA)
Omer Hadari created NIFI-4480:
-

 Summary: Implement datadog and ambari reporting tasks as services 
for MetricReportingTask
 Key: NIFI-4480
 URL: https://issues.apache.org/jira/browse/NIFI-4480
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.5.0
Reporter: Omer Hadari
Assignee: Omer Hadari
Priority: Minor


In [NIFI-4392|https://issues.apache.org/jira/browse/NIFI-4392] a 
`MetricReportingTask` was created that uses different implementations of a 
controller service for reporting the same metrics to different clients. The 
existing ambari reporting task and datadog reporting task can be implemented in 
the same manner - in order to keep things uniform and avoid duplication.
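
The shape of that refactor — one reporting task that collects metrics once and delegates transport to an interchangeable controller service — can be sketched as follows (all names here are hypothetical stand-ins, not NiFi's actual API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: in this shape, an Ambari or Datadog reporter
// implements just the transport step, while collection stays in one task.
public class MetricReportingSketch {

    /** Stand-in for a controller-service interface such as a MetricReporterService. */
    interface MetricReporter {
        void report(Map<String, Double> metrics);
    }

    /** Stand-in for the shared MetricReportingTask: collects once, delegates sending. */
    static void runReportingTask(Map<String, Double> collected, MetricReporter reporter) {
        reporter.report(collected);
    }

    /** A trivial reporter that records what it was asked to send. */
    static class RecordingReporter implements MetricReporter {
        final Map<String, Double> sent = new LinkedHashMap<>();
        public void report(Map<String, Double> metrics) { sent.putAll(metrics); }
    }
}
```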



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201150#comment-16201150
 ] 

ASF GitHub Bot commented on NIFI-4428:
--

Github user vakshorton commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r144160007
  
--- Diff: 
nifi-nar-bundles/nifi-druid-bundle/nifi-druid-processors/src/main/java/org/apache/nifi/processors/PutDruid.java
 ---
@@ -0,0 +1,206 @@
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.stream.io.StreamUtils;
+
+import org.codehaus.jackson.JsonParseException;
+import org.codehaus.jackson.map.JsonMappingException;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import org.apache.nifi.controller.api.DruidTranquilityService;
+import com.metamx.tranquility.tranquilizer.MessageDroppedException;
+import com.metamx.tranquility.tranquilizer.Tranquilizer;
+import com.twitter.util.Await;
+import com.twitter.util.Future;
+import com.twitter.util.FutureEventListener;
+
+import scala.runtime.BoxedUnit;
+
+@SideEffectFree
+@Tags({"Druid","Timeseries","OLAP","ingest"})
+@CapabilityDescription("Sends events to Apache Druid for indexing. "
+        + "Leverages the Druid Tranquility Controller service. "
+        + "Incoming flow files are expected to contain 1 or many JSON objects, one JSON object per line.")
+public class PutDruid extends AbstractSessionFactoryProcessor {
+
+    private List<PropertyDescriptor> properties;
+    private Set<Relationship> relationships;
+private final Map messageStatus = new 
HashMap();
+
+public static final PropertyDescriptor DRUID_TRANQUILITY_SERVICE = new 
PropertyDescriptor.Builder()
+.name("druid_tranquility_service")
+.description("Tranquility Service to use for sending events to 
Druid")
+.required(true)
+.identifiesControllerService(DruidTranquilityService.class)
+.build();
+
+public static final Relationship REL_SUCCESS = new 
Relationship.Builder()
+.name("SUCCESS")
+.description("Success relationship")
+.build();
+
+public static final Relationship REL_FAIL = new Relationship.Builder()
+.name("FAIL")
+.description("FlowFiles are routed to this relationship when 
they cannot be parsed")
+.build();
+
+public static final Relationship REL_DROPPED = new 
Relationship.Builder()
+.name("DROPPED")
+.description("FlowFiles are routed to this relationship when 
they are outside of the configured time window, timestamp format is invalid, 
etc.")
+.build();
+
+

[GitHub] nifi pull request #2181: NIFI-4428: - Implement PutDruid Processor and Contr...

2017-10-11 Thread vakshorton
Github user vakshorton commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r144160007
  
--- Diff: 
nifi-nar-bundles/nifi-druid-bundle/nifi-druid-processors/src/main/java/org/apache/nifi/processors/PutDruid.java
 ---
@@ -0,0 +1,206 @@
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.stream.io.StreamUtils;
+
+import org.codehaus.jackson.JsonParseException;
+import org.codehaus.jackson.map.JsonMappingException;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import org.apache.nifi.controller.api.DruidTranquilityService;
+import com.metamx.tranquility.tranquilizer.MessageDroppedException;
+import com.metamx.tranquility.tranquilizer.Tranquilizer;
+import com.twitter.util.Await;
+import com.twitter.util.Future;
+import com.twitter.util.FutureEventListener;
+
+import scala.runtime.BoxedUnit;
+
+@SideEffectFree
+@Tags({"Druid","Timeseries","OLAP","ingest"})
+@CapabilityDescription("Sends events to Apache Druid for indexing. "
+        + "Leverages the Druid Tranquility Controller service. "
+        + "Incoming flow files are expected to contain 1 or many JSON objects, one JSON object per line.")
+public class PutDruid extends AbstractSessionFactoryProcessor {
+
+    private List<PropertyDescriptor> properties;
+    private Set<Relationship> relationships;
+private final Map messageStatus = new 
HashMap();
+
+public static final PropertyDescriptor DRUID_TRANQUILITY_SERVICE = new 
PropertyDescriptor.Builder()
+.name("druid_tranquility_service")
+.description("Tranquility Service to use for sending events to 
Druid")
+.required(true)
+.identifiesControllerService(DruidTranquilityService.class)
+.build();
+
+public static final Relationship REL_SUCCESS = new 
Relationship.Builder()
+.name("SUCCESS")
+.description("Success relationship")
+.build();
+
+public static final Relationship REL_FAIL = new Relationship.Builder()
+.name("FAIL")
+.description("FlowFiles are routed to this relationship when 
they cannot be parsed")
+.build();
+
+public static final Relationship REL_DROPPED = new 
Relationship.Builder()
+.name("DROPPED")
+.description("FlowFiles are routed to this relationship when 
they are outside of the configured time window, timestamp format is invalid, 
etc.")
+.build();
+
+public void init(final ProcessorInitializationContext context){
+        List<PropertyDescriptor> properties = new ArrayList<>();
+properties.add(DRUID_TRANQUILITY_SERVICE);
+this.properties = 

[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201145#comment-16201145
 ] 

ASF GitHub Bot commented on NIFI-4428:
--

Github user vakshorton commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r144159450
  
--- Diff: 
nifi-nar-bundles/nifi-druid-bundle/nifi-druid-controller-service/src/main/java/org/apache/nifi/controller/DruidTranquilityController.java
 ---
@@ -0,0 +1,416 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.joda.time.DateTime;
+import org.joda.time.Period;
+
+import com.metamx.common.Granularity;
+import com.metamx.tranquility.beam.Beam;
+import com.metamx.tranquility.beam.ClusteredBeamTuning;
+import com.metamx.tranquility.druid.DruidBeamConfig;
+import com.metamx.tranquility.druid.DruidBeams;
+import com.metamx.tranquility.druid.DruidDimensions;
+import com.metamx.tranquility.druid.DruidEnvironment;
+import com.metamx.tranquility.druid.DruidLocation;
+import com.metamx.tranquility.druid.DruidRollup;
+import com.metamx.tranquility.tranquilizer.Tranquilizer;
+import com.metamx.tranquility.typeclass.Timestamper;
+
+import io.druid.data.input.impl.TimestampSpec;
+import io.druid.granularity.QueryGranularity;
+import io.druid.query.aggregation.AggregatorFactory;
+import io.druid.query.aggregation.CountAggregatorFactory;
+import io.druid.query.aggregation.DoubleMaxAggregatorFactory;
+import io.druid.query.aggregation.DoubleMinAggregatorFactory;
+import io.druid.query.aggregation.DoubleSumAggregatorFactory;
+import io.druid.query.aggregation.LongMaxAggregatorFactory;
+import io.druid.query.aggregation.LongMinAggregatorFactory;
+import io.druid.query.aggregation.LongSumAggregatorFactory;
+
+@Tags({"Druid","Timeseries","OLAP","ingest"})
+@CapabilityDescription("Asyncronously sends flowfiles to Druid Indexing 
Task using Tranquility API. "
+   + "If aggregation and roll-up of data is required, an 
Aggregator JSON desriptor needs to be provided."
+   + "Details on how desribe aggregation using JSON can be found 
at: http://druid.io/docs/latest/querying/aggregations.html;)
+public class DruidTranquilityController extends AbstractControllerService 
implements org.apache.nifi.controller.api.DruidTranquilityService{
+   private String firehosePattern = "druid:firehose:%s";
+   private int clusterPartitions = 1;
+private int clusterReplication = 1 ;
+private String indexRetryPeriod = "PT10M";
+
+private Tranquilizer tranquilizer = null;
+
+   public static final PropertyDescriptor DATASOURCE = new 
PropertyDescriptor.Builder()
+.name("data_source")
+.description("Druid Data Source")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.required(true)
+.build();
+   
+   public static final 

[GitHub] nifi pull request #2181: NIFI-4428: - Implement PutDruid Processor and Contr...

2017-10-11 Thread vakshorton
Github user vakshorton commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2181#discussion_r144159450
  
--- Diff: 
nifi-nar-bundles/nifi-druid-bundle/nifi-druid-controller-service/src/main/java/org/apache/nifi/controller/DruidTranquilityController.java
 ---
@@ -0,0 +1,416 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.joda.time.DateTime;
+import org.joda.time.Period;
+
+import com.metamx.common.Granularity;
+import com.metamx.tranquility.beam.Beam;
+import com.metamx.tranquility.beam.ClusteredBeamTuning;
+import com.metamx.tranquility.druid.DruidBeamConfig;
+import com.metamx.tranquility.druid.DruidBeams;
+import com.metamx.tranquility.druid.DruidDimensions;
+import com.metamx.tranquility.druid.DruidEnvironment;
+import com.metamx.tranquility.druid.DruidLocation;
+import com.metamx.tranquility.druid.DruidRollup;
+import com.metamx.tranquility.tranquilizer.Tranquilizer;
+import com.metamx.tranquility.typeclass.Timestamper;
+
+import io.druid.data.input.impl.TimestampSpec;
+import io.druid.granularity.QueryGranularity;
+import io.druid.query.aggregation.AggregatorFactory;
+import io.druid.query.aggregation.CountAggregatorFactory;
+import io.druid.query.aggregation.DoubleMaxAggregatorFactory;
+import io.druid.query.aggregation.DoubleMinAggregatorFactory;
+import io.druid.query.aggregation.DoubleSumAggregatorFactory;
+import io.druid.query.aggregation.LongMaxAggregatorFactory;
+import io.druid.query.aggregation.LongMinAggregatorFactory;
+import io.druid.query.aggregation.LongSumAggregatorFactory;
+
+@Tags({"Druid","Timeseries","OLAP","ingest"})
+@CapabilityDescription("Asyncronously sends flowfiles to Druid Indexing 
Task using Tranquility API. "
+   + "If aggregation and roll-up of data is required, an 
Aggregator JSON desriptor needs to be provided."
+   + "Details on how desribe aggregation using JSON can be found 
at: http://druid.io/docs/latest/querying/aggregations.html;)
+public class DruidTranquilityController extends AbstractControllerService 
implements org.apache.nifi.controller.api.DruidTranquilityService{
+   private String firehosePattern = "druid:firehose:%s";
+   private int clusterPartitions = 1;
+private int clusterReplication = 1 ;
+private String indexRetryPeriod = "PT10M";
+
+private Tranquilizer tranquilizer = null;
+
+   public static final PropertyDescriptor DATASOURCE = new 
PropertyDescriptor.Builder()
+.name("data_source")
+.description("Druid Data Source")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.required(true)
+.build();
+   
+   public static final PropertyDescriptor CONNECT_STRING = new 
PropertyDescriptor.Builder()
+.name("zk_connect_string")
+.description("ZK Connect String for Druid ")
+.required(true)
+

[GitHub] nifi-minifi-cpp issue #146: Archive merge

2017-10-11 Thread minifirocks
Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/146
  
@phrocker @apiri please let me know if you have comments.


---


[jira] [Commented] (NIFI-4444) Upgrade Jersey Versions

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200912#comment-16200912
 ] 

ASF GitHub Bot commented on NIFI-4444:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2206
  
Thanks @bbende. I've pushed another commit that addresses the checkstyle 
issues.


> Upgrade Jersey Versions
> ---
>
> Key: NIFI-4444
> URL: https://issues.apache.org/jira/browse/NIFI-4444
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Attachments: NIFI-4444.xml
>
>
> Need to upgrade to a newer version of Jersey. The primary motivation is to 
> upgrade the version used within NiFi itself. However, there are a number of 
> extensions that also leverage it. Of those extensions, some utilize the older 
> version defined in dependencyManagement while others override explicitly 
> within their own bundle dependencyManagement. For this JIRA I propose 
> removing the Jersey artifacts from the root pom and allow the version to be 
> specified on a bundle by bundle basis.
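
Under that proposal, a bundle that needs Jersey would pin its own version inside its bundle pom rather than inheriting one from the root pom; a sketch of what that could look like (the version and artifact below are chosen for illustration only, not taken from the PR):

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.glassfish.jersey.core</groupId>
            <artifactId>jersey-client</artifactId>
            <version>2.26</version>
        </dependency>
    </dependencies>
</dependencyManagement>
```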



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2206: NIFI-4444: Upgrade to Jersey 2.x

2017-10-11 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2206
  
Thanks @bbende. I've pushed another commit that addresses the checkstyle 
issues.


---


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200909#comment-16200909
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user frett27 commented on the issue:

https://github.com/apache/nifi/pull/2184
  
Thanks Matt, new pull request associated: #2207; hope this will help the 
review


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an avro union type that contains maps in the definition leads to errors 
> in loading avro records.
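
A minimal schema that exercises the reported case — a union whose branches include a map type — looks like this (record and field names here are illustrative, not taken from the ticket):

```json
{
  "type": "record",
  "name": "Example",
  "fields": [
    {
      "name": "payload",
      "type": ["null", { "type": "map", "values": "string" }]
    }
  ]
}
```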



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2184: NIFI-4441 : add maprecord support for avro union types

2017-10-11 Thread frett27
Github user frett27 commented on the issue:

https://github.com/apache/nifi/pull/2184
  
Thanks Matt, new pull request associated: #2207; hope this will help the 
review


---


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200903#comment-16200903
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

GitHub user frett27 opened a pull request:

https://github.com/apache/nifi/pull/2207

NIFI-4441 patch avro maps in union types

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x ] Have you written or updated unit tests to verify your changes?
- [x ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [x ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/frett27/nifi nifi-4441

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2207.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2207


commit 30b3596ac351405ea33d09b9737cece257d8ff54
Author: Patrice Freydiere 
Date:   2017-10-11T20:17:15Z

patch avro maps in union types




> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an avro union type that contains maps in the definition leads to errors 
> in loading avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2207: NIFI-4441 patch avro maps in union types

2017-10-11 Thread frett27
GitHub user frett27 opened a pull request:

https://github.com/apache/nifi/pull/2207

NIFI-4441 patch avro maps in union types

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x ] Have you written or updated unit tests to verify your changes?
- [x ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [x ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/frett27/nifi nifi-4441

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2207.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2207


commit 30b3596ac351405ea33d09b9737cece257d8ff54
Author: Patrice Freydiere 
Date:   2017-10-11T20:17:15Z

patch avro maps in union types




---


[jira] [Commented] (NIFI-4444) Upgrade Jersey Versions

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200902#comment-16200902
 ] 

ASF GitHub Bot commented on NIFI-4444:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2206
  
Reviewing...


> Upgrade Jersey Versions
> ---
>
> Key: NIFI-4444
> URL: https://issues.apache.org/jira/browse/NIFI-4444
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Attachments: NIFI-4444.xml
>
>
> Need to upgrade to a newer version of Jersey. The primary motivation is to 
> upgrade the version used within NiFi itself. However, there are a number of 
> extensions that also leverage it. Of those extensions, some utilize the older 
> version defined in dependencyManagement while others override explicitly 
> within their own bundle dependencyManagement. For this JIRA I propose 
> removing the Jersey artifacts from the root pom and allow the version to be 
> specified on a bundle by bundle basis.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2206: NIFI-4444: Upgrade to Jersey 2.x

2017-10-11 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2206
  
Reviewing...


---


[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200899#comment-16200899
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user frett27 closed the pull request at:

https://github.com/apache/nifi/pull/2184


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an avro union type that contains maps in the definition leads to errors 
> in loading avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2184: NIFI-4441 : add maprecord support for avro union ty...

2017-10-11 Thread frett27
Github user frett27 closed the pull request at:

https://github.com/apache/nifi/pull/2184


---


[jira] [Commented] (NIFI-4444) Upgrade Jersey Versions

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200845#comment-16200845
 ] 

ASF GitHub Bot commented on NIFI-4444:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2206
  
Also some unused imports in the Yandex bundle


> Upgrade Jersey Versions
> ---
>
> Key: NIFI-4444
> URL: https://issues.apache.org/jira/browse/NIFI-4444
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Attachments: NIFI-4444.xml
>
>
> Need to upgrade to a newer version of Jersey. The primary motivation is to 
> upgrade the version used within NiFi itself. However, there are a number of 
> extensions that also leverage it. Of those extensions, some utilize the older 
> version defined in dependencyManagement while others override explicitly 
> within their own bundle dependencyManagement. For this JIRA I propose 
> removing the Jersey artifacts from the root pom and allow the version to be 
> specified on a bundle by bundle basis.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2206: NIFI-4444: Upgrade to Jersey 2.x

2017-10-11 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2206
  
Also some unused imports in the Yandex bundle


---


[jira] [Commented] (NIFI-4444) Upgrade Jersey Versions

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200816#comment-16200816
 ] 

ASF GitHub Bot commented on NIFI-4444:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2206
  
There is a minor check style violation in RedirectResourceFilter for the 
@return java doc.


> Upgrade Jersey Versions
> ---
>
> Key: NIFI-4444
> URL: https://issues.apache.org/jira/browse/NIFI-4444
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Attachments: NIFI-4444.xml
>
>
> Need to upgrade to a newer version of Jersey. The primary motivation is to 
> upgrade the version used within NiFi itself. However, there are a number of 
> extensions that also leverage it. Of those extensions, some utilize the older 
> version defined in dependencyManagement while others override explicitly 
> within their own bundle dependencyManagement. For this JIRA I propose 
> removing the Jersey artifacts from the root pom and allow the version to be 
> specified on a bundle by bundle basis.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (NIFIREG-26) Setup nifi-registry-docs

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-26?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200782#comment-16200782
 ] 

ASF GitHub Bot commented on NIFIREG-26:
---

GitHub user bbende opened a pull request:

https://github.com/apache/nifi-registry/pull/19

NIFIREG-26 Initial setup of docs

This PR is just to incorporate the docs into the build and get the ball 
rolling, and will allow others to then fill in the various docs.

Place-holder documentation page: 
http://localhost:8080/nifi-registry-docs/documentation

The REST API doc is here:
http://localhost:8080/nifi-registry-docs/rest-api/index.html

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi-registry NIFIREG-26

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/19.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #19


commit 42f246aa3c035e8a293e4453128f12d509e01e50
Author: Bryan Bende 
Date:   2017-10-02T20:42:25Z

NIFIREG-26 Initial setup of docs




> Setup nifi-registry-docs
> 
>
> Key: NIFIREG-26
> URL: https://issues.apache.org/jira/browse/NIFIREG-26
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>
> We need to setup a module to contain any potential documentation for the 
> registry. This should be bundled into the app, similar to the existing 
> approach in NiFi.







[jira] [Resolved] (NIFIREG-22) Add a count field to VersionedFlow to be populated when retrieving items

2017-10-11 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-22?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFIREG-22.

Resolution: Fixed

> Add a count field to VersionedFlow to be populated when retrieving items
> 
>
> Key: NIFIREG-22
> URL: https://issues.apache.org/jira/browse/NIFIREG-22
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 0.0.1
>
>
> We should be able to display the number of versions of a flow without 
> returning the list of all the versions. We can add a "versionCount" field to 
> VersionedFlow that can be populated by the database service.





[jira] [Commented] (NIFIREG-22) Add a count field to VersionedFlow to be populated when retrieving items

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-22?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200758#comment-16200758
 ] 

ASF GitHub Bot commented on NIFIREG-22:
---

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/11






[jira] [Commented] (NIFIREG-22) Add a count field to VersionedFlow to be populated when retrieving items

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-22?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200756#comment-16200756
 ] 

ASF GitHub Bot commented on NIFIREG-22:
---

Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/11
  
Thanks @kevdoran , going to merge to master






[jira] [Updated] (NIFI-4444) Upgrade Jersey Versions

2017-10-11 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-4444:
--
Attachment: NIFI-4444.xml



[jira] [Commented] (NIFI-4444) Upgrade Jersey Versions

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200746#comment-16200746
 ] 

ASF GitHub Bot commented on NIFI-4444:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2206
  
This PR upgrades Jersey throughout NiFi where it is used directly. Most 
transitive Jersey dependencies are left intact. The most significant changes 
are centered in the clustering framework, UpdateAttribute custom UI, 
JoltTransform custom UI, and the toolkit groovy scripts. To most easily 
evaluate these changes, try the template attached to the JIRA in clustered mode 
and run the node manager toolkit utility.







[jira] [Commented] (MINIFICPP-39) Create FocusArchive processor

2017-10-11 Thread Caleb Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-39?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200729#comment-16200729
 ] 

Caleb Johnson commented on MINIFICPP-39:


[~achristianson],

Can you give [this 
branch|https://github.com/NiFiLocal/nifi-minifi-cpp/tree/MINIFI-244-rebase2] a 
look? If it looks fine, the next step for me is to switch everything that uses 
rapidjson over to jsoncpp. But for now, it's rebased to the latest master and I 
_think _it'll build.

> Create FocusArchive processor
> -
>
> Key: MINIFICPP-39
> URL: https://issues.apache.org/jira/browse/MINIFICPP-39
> Project: NiFi MiNiFi C++
>  Issue Type: Task
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Minor
>
> Create a FocusArchive processor which implements a lens over an archive 
> (tar, etc.). A concise, though informal, definition of a lens is as follows:
> "Essentially, they represent the act of “peering into” or “focusing in on” 
> some particular piece/path of a complex data object such that you can more 
> precisely target particular operations without losing the context or 
> structure of the overall data you’re working with." 
> https://medium.com/@dtipson/functional-lenses-d1aba9e52254#.hdgsvbraq
> Why a FocusArchive in MiNiFi? Simply put, it will enable us to "focus in on" 
> an entry in the archive, perform processing *in-context* of that entry, then 
> re-focus on the overall archive. This allows for transformation or other 
> processing of an entry in the archive without losing the overall context of 
> the archive.
> Initial format support is tar, due to its simplicity and ubiquity.
> Attributes:
> - Path (the path in the archive to focus; "/" to re-focus the overall archive)
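The lens definition quoted above can be made concrete with a small sketch. This is a toy illustration in Java, not MiNiFi code (the project itself is C++), and the `LensSketch`/`Lens` names and the string-array "archive" stand-in are invented for the example:

```java
import java.util.function.BiFunction;
import java.util.function.Function;

public class LensSketch {

    // A minimal lens: a getter that focuses on one piece of a structure, plus
    // an immutable setter that rebuilds the whole structure around a new value.
    public static final class Lens<S, A> {
        private final Function<S, A> getter;
        private final BiFunction<S, A, S> setter;

        public Lens(Function<S, A> getter, BiFunction<S, A, S> setter) {
            this.getter = getter;
            this.setter = setter;
        }

        // "Peer into" the focus, transform it in context, then re-focus the whole.
        public S modify(S whole, Function<A, A> f) {
            return setter.apply(whole, f.apply(getter.apply(whole)));
        }
    }

    public static void main(String[] args) {
        // Stand-in for an archive: index 1 plays the role of the focused path.
        String[] archive = {"entry0", "entry1"};
        Lens<String[], String> focusEntry1 = new Lens<>(
                a -> a[1],
                (a, v) -> { String[] copy = a.clone(); copy[1] = v; return copy; });

        String[] updated = focusEntry1.modify(archive, String::toUpperCase);
        System.out.println(updated[1]);  // ENTRY1
        System.out.println(archive[1]);  // entry1 (the original context is untouched)
    }
}
```

The FocusArchive idea is the same shape: the archive is the whole structure, the Path attribute selects the focus, and processing an entry then "re-focusing" corresponds to `modify`.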







[jira] [Commented] (NIFI-4444) Upgrade Jersey Versions

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200733#comment-16200733
 ] 

ASF GitHub Bot commented on NIFI-4444:
--

GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2206

NIFI-4444: Upgrade to Jersey 2.x

NIFI-4444: 
- Upgrading to Jersey 2.x.
- Updating NOTICE files where necessary.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-4444

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2206.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2206


commit cb28bb5daa582117f7057ff819df4588e1c8cda1
Author: Matt Gilman 
Date:   2017-10-02T21:01:31Z

NIFI-4444:
- Upgrading to Jersey 2.x.
- Updating NOTICE files where necessary.








[jira] [Comment Edited] (MINIFICPP-39) Create FocusArchive processor

2017-10-11 Thread Caleb Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-39?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200729#comment-16200729
 ] 

Caleb Johnson edited comment on MINIFICPP-39 at 10/11/17 6:39 PM:
--

[~achristianson],

Can you give [this 
branch|https://github.com/NiFiLocal/nifi-minifi-cpp/tree/MINIFI-244-rebase2] a 
look? If it looks fine, the next step for me is to switch everything that uses 
rapidjson over to jsoncpp. But for now, it's rebased to the latest master and I 
_think_ it'll build.


was (Author: calebj):
[~achristianson],

Can you give [this 
branch|https://github.com/NiFiLocal/nifi-minifi-cpp/tree/MINIFI-244-rebase2] a 
look? If it looks fine, the next step for me is to switch everything that uses 
rapidjson over to jsoncpp. But for now, it's rebased to the latest master and I 
_think _ it'll build.



[jira] [Comment Edited] (MINIFICPP-39) Create FocusArchive processor

2017-10-11 Thread Caleb Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-39?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200729#comment-16200729
 ] 

Caleb Johnson edited comment on MINIFICPP-39 at 10/11/17 6:38 PM:
--

[~achristianson],

Can you give [this 
branch|https://github.com/NiFiLocal/nifi-minifi-cpp/tree/MINIFI-244-rebase2] a 
look? If it looks fine, the next step for me is to switch everything that uses 
rapidjson over to jsoncpp. But for now, it's rebased to the latest master and I 
_think _ it'll build.


was (Author: calebj):
[~achristianson],

Can you give [this 
branch|https://github.com/NiFiLocal/nifi-minifi-cpp/tree/MINIFI-244-rebase2] a 
look? If it looks fine, the next step for me is to switch everything that uses 
rapidjson over to jsoncpp. But for now, it's rebased to the latest master and I 
_think _it'll build.



[jira] [Assigned] (NIFI-3926) Edit Template information

2017-10-11 Thread Yuri (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuri reassigned NIFI-3926:
--

Assignee: Yuri

> Edit Template information
> -
>
> Key: NIFI-3926
> URL: https://issues.apache.org/jira/browse/NIFI-3926
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.2.0
>Reporter: Mark Bean
>Assignee: Yuri
>Priority: Minor
>
> Request the addition of an "edit" icon to each template in the list of NiFi 
> Templates (Global Menu > Templates). The edit would allow the user to modify 
> the template name or description. Arguably, it may also allow the Process 
> Group Id to be editable, but that seems far less likely to be desired.
> Another option which is much more substantial and may require a separate 
> ticket is to be able to edit the Template contents itself. That is, editing 
> brings up the template on a fresh graph where components can be added, 
> removed or modified.





[jira] [Commented] (NIFIREG-22) Add a count field to VersionedFlow to be populated when retrieving items

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-22?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200651#comment-16200651
 ] 

ASF GitHub Bot commented on NIFIREG-22:
---

Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/11
  
+1 @bbende - nice work on this!

I reviewed the code and did a full build with contrib-check. I ran the web API, 
verified that creating/reading flow snapshots works, and that the new versionCount 
field returns correctly. The simplification of the backend through consolidating 
similar functionality is a nice improvement. LGTM








[jira] [Created] (NIFI-4479) Add a DeleteMongo processor

2017-10-11 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-4479:
--

 Summary: Add a DeleteMongo processor
 Key: NIFI-4479
 URL: https://issues.apache.org/jira/browse/NIFI-4479
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Matt Burgess
Priority: Minor


Currently there is no processor to remove documents from MongoDB; instead, the 
REST API must be used (via InvokeHttp, for example). It would be nice to have a 
processor to remove documents from MongoDB. It could be record-aware or not (or 
there could be two different versions), depending on what makes the most sense 
and is most useful.
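For reference, a document delete with the plain MongoDB Java driver looks roughly like this. This is a hedged sketch, not a proposed processor implementation: the connection string, database/collection names, and the "status" field are invented for the example, and it assumes the mongodb-driver-sync artifact plus a reachable server:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.result.DeleteResult;
import org.bson.Document;

public class DeleteMongoSketch {
    public static void main(String[] args) {
        // Hypothetical connection details; adjust for a real deployment.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> col =
                    client.getDatabase("mydb").getCollection("mycol");

            // Delete every document matching the filter. A NiFi processor
            // would typically build the filter from flow file content or
            // attributes, or, if record-aware, from each record's fields.
            DeleteResult result = col.deleteMany(Filters.eq("status", "expired"));
            System.out.println("Deleted " + result.getDeletedCount() + " documents");
        }
    }
}
```

A record-aware variant would issue one delete (or a bulk write) per record rather than a single attribute-driven `deleteMany`.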





[jira] [Commented] (NIFI-3688) Create extended groovy scripting processor

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200633#comment-16200633
 ] 

ASF GitHub Bot commented on NIFI-3688:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1662#discussion_r144080861
  
--- Diff: nifi-nar-bundles/nifi-groovyx-bundle/nifi-groovyx-nar/pom.xml ---
@@ -0,0 +1,44 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-groovyx-bundle</artifactId>
+        <version>1.4.0-SNAPSHOT</version>
--- End diff --

Sorry I keep losing track of this PR :( In the meantime we have released 
1.4.0; do you mind changing these references to 1.5.0-SNAPSHOT and rebasing 
against the latest master? Please and thanks!


> Create extended groovy scripting processor
> --
>
> Key: NIFI-3688
> URL: https://issues.apache.org/jira/browse/NIFI-3688
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Dmitry Lukyanov
>Priority: Minor
>
> The idea is to simplify groovy scripting.
> Main targets:
> - to be compatible with existing groovy scripting
> - simplify read/write attributes
> - simplify read/write content
> - avoid closure casting to nifi types like `StreamCallback`
> - simplify and provide visibility when accessing controller services from a 
> script







[jira] [Commented] (NIFI-4136) GrokReader - Add a failure option to unmatch behavior options

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200628#comment-16200628
 ] 

ASF GitHub Bot commented on NIFI-4136:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1955#discussion_r144079474
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/grok/GrokReader.java
 ---
@@ -74,6 +74,8 @@
             "The line of text that does not match the Grok Expression will be appended to the last field of the prior message.");
     static final AllowableValue SKIP_LINE = new AllowableValue("skip-line", "Skip Line",
             "The line of text that does not match the Grok Expression will be skipped.");
+    static final AllowableValue THROW_ERROR = new AllowableValue("throw-error", "Error",
+            "The processing of the flow file containing the line of text will throw an error and will be routed to the approriate relationship.");
--- End diff --

Nitpick, "throw an error" might be too dev-centric, perhaps "issue an 
error"? Not necessary just a suggestion. Also there is a typo in "appropriate", 
and the PR won't currently merge cleanly, do you mind doing a rebase? Please 
and thanks!


> GrokReader - Add a failure option to unmatch behavior options
> -
>
> Key: NIFI-4136
> URL: https://issues.apache.org/jira/browse/NIFI-4136
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> At the moment, when using the GrokReader, if a line does not match the grok 
> expression (and is not part of a stack trace), the line can be either ignored 
> (the line will be completely skipped) or appended to the last field from the 
> previous line.
> In the case where appending is not desired and the data should not be 
> ignored/deleted, we should add the option to route the full flow file to the 
> failure relationship. This way the flow file could be treated in a different 
> way (for example with SplitText and ExtractGrok to isolate the incorrect 
> lines and re-route the correct lines back to the Record processors).







[jira] [Commented] (NIFI-4359) Enhance ConvertJSONToSQL processor to handle JSON containing fields having complex type

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200619#comment-16200619
 ] 

ASF GitHub Bot commented on NIFI-4359:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2132#discussion_r144077813
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ConvertJSONToSQL.java
 ---
@@ -37,6 +37,7 @@
 import java.util.Set;
 import java.util.UUID;
 import java.util.concurrent.atomic.AtomicReference;
+import org.apache.commons.lang.StringEscapeUtils;
--- End diff --

All other NiFi code that uses StringEscapeUtils refers to the Commons Lang 
3 class `org.apache.commons.lang3.StringEscapeUtils`


> Enhance ConvertJSONToSQL processor to handle JSON containing fields having 
> complex type
> ---
>
> Key: NIFI-4359
> URL: https://issues.apache.org/jira/browse/NIFI-4359
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Samrat Vilasrao Bandgar
> Attachments: NIFI-4359.patch
>
>
> Processor: ConvertJSONToSQL
> *Problem statement: *
> Sample JSON:
> {noformat}
> {
> "prop1": "value1",
> "prop2": "value2",
> "prop3": "value3",
> "prop4": {
> "prop5": "value5",
> "prop6": "value6"
> }
> }
> {noformat}
> Sample table:
> {noformat}
> mysql> desc mytable;
> +---++--+-+-++
> | Field | Type   | Null | Key | Default | Extra  |
> +---++--+-+-++
> | id| bigint(20) | NO   | PRI | NULL| auto_increment |
> | prop1 | char(30)   | NO   | | NULL||
> | prop2 | char(30)   | NO   | | NULL||
> | prop3 | char(30)   | NO   | | NULL||
> | prop4 | text   | NO   | | NULL||
> +---++--+-+-++
> 5 rows in set (0.00 sec)
> {noformat}
> With the above mentioned sample json and table, I want to convert the json 
> into insert sql in such a way that prop4 column will get inserted with value 
> {"prop5":"value5","prop6":"value6"}. However, when I use the current 
> ConvertJSONToSQL processor, prop4 column gets inserted with empty string.
> *Expected:*
> {noformat}
> mysql> select * from mytable;
> +++++-+
> | id | prop1  | prop2  | prop3  | prop4   |
> +++++-+
> |  1 | value1 | value2 | value3 | {"prop5":"value5","prop6":"value6"} |
> +++++-+
> 1 row in set (0.00 sec)
> {noformat}
> *Actual:*
> {noformat}
> mysql> select * from mytable;
> +++++--+
> | id | prop1  | prop2  | prop3  | prop4|
> +++++--+
> |  1 | value1 | value2 | value3 |  |
> +++++--+
> 1 row in set (0.00 sec)
> {noformat}
> *Attributes details captured from Provenance Event UI for the above use case 
> are:*
> sql.args.1.type
> 1
> sql.args.1.value
> value1
> sql.args.2.type
> 1
> sql.args.2.value
> value2
> sql.args.3.type
> 1
> sql.args.3.value
> value3
> sql.args.4.type
> -1
> sql.args.4.value
> {color:red}Empty string set{color}
> sql.table
> mytable
> ConvertJSONToSQL.java has a method createSqlStringValue(final JsonNode 
> fieldNode, final Integer colSize, final int sqlType) which is responsible for 
> populating attribute values for each column. This method uses the line below 
> to get the field value.
> {code:java}
> String fieldValue = fieldNode.asText();
> {code}
> The documentation for the org.codehaus.jackson.JsonNode.asText() method says:
> *asText()*
> Method that will return valid String representation of the container value, 
> if the node is a value node (method isValueNode() returns true), otherwise 
> empty String.
> Since prop4 in this case is not a value node, an empty string is returned and 
> gets set as the attribute value for the column prop4.
> The suggested improvement is as below.
> If the fieldNode is a value node, use asText(); otherwise use toString() with 
> StringEscapeUtils.escapeSql() to take care of characters like quotes in the 
> insert query. I have tested this locally. Please let me know if it makes sense 
> to add this improvement. I will attach the patch file for code changes.
> Thanks
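The proposed branch on isValueNode() can be sketched as a minimal, self-contained Java illustration (hypothetical names, not the actual NiFi code; `escapeSql` here is a simplified stand-in for Commons Lang's StringEscapeUtils.escapeSql, which doubles single quotes, and the boolean parameter stands in for JsonNode.isValueNode()):

```java
// Minimal sketch of the suggested createSqlStringValue() logic.
public class FieldValueSketch {

    // Simplified stand-in for org.apache.commons.lang.StringEscapeUtils.escapeSql,
    // which escapes single quotes by doubling them.
    static String escapeSql(String s) {
        return s == null ? null : s.replace("'", "''");
    }

    // isValueNode stands in for fieldNode.isValueNode(); asText and
    // toStringForm stand in for fieldNode.asText() and fieldNode.toString().
    static String fieldValue(boolean isValueNode, String asText, String toStringForm) {
        return isValueNode ? asText : escapeSql(toStringForm);
    }

    public static void main(String[] args) {
        // Scalar (value) node: asText() already yields the value.
        System.out.println(fieldValue(true, "value1", "\"value1\""));
        // prints: value1

        // Complex node: asText() would be "", so fall back to toString() plus escaping.
        System.out.println(fieldValue(false, "", "{\"prop5\":\"it's\",\"prop6\":\"value6\"}"));
        // prints: {"prop5":"it''s","prop6":"value6"}
    }
}
```

The escaping step only matters for the complex-node branch, where the serialized JSON text may itself contain quote characters that would otherwise break the generated insert statement.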



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4359) Enhance ConvertJSONToSQL processor to handle JSON containing fields having complex type

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200620#comment-16200620
 ] 

ASF GitHub Bot commented on NIFI-4359:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2132#discussion_r144078102
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ConvertJSONToSQL.java
 ---
@@ -508,7 +509,12 @@ private String generateInsert(final JsonNode rootNode, 
final Map
  *
  */
 protected static String createSqlStringValue(final JsonNode fieldNode, 
final Integer colSize, final int sqlType) {
-String fieldValue = fieldNode.asText();
+String fieldValue;
--- End diff --

Can you add unit test(s) to TestConvertJSONToSQL to cover this improvement? 
The sample data in the Jira would make an excellent unit test :)



[GitHub] nifi pull request #2132: NIFI-4359 Based on field node type, whether value n...

2017-10-11 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2132#discussion_r144077813
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ConvertJSONToSQL.java
 ---
@@ -37,6 +37,7 @@
 import java.util.Set;
 import java.util.UUID;
 import java.util.concurrent.atomic.AtomicReference;
+import org.apache.commons.lang.StringEscapeUtils;
--- End diff --

All other NiFi code that uses StringEscapeUtils refers to the Commons Lang 
3 class `org.apache.commons.lang3.StringEscapeUtils`


---




[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200605#comment-16200605
 ] 

ASF GitHub Bot commented on NIFI-4441:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2184
  
This branch looks out of whack with the current master, can you start with 
a fresh master and cherry-pick in your commits for this issue? It will help 
with merging and review, thanks!


> Add MapRecord support inside avro union types
> -
>
> Key: NIFI-4441
> URL: https://issues.apache.org/jira/browse/NIFI-4441
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Patrice Freydiere
>
> Using an avro union type that contain maps in the definition lead to errors 
> in loading avro records.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (NIFIREG-32) NiFi Registry add delete action as separate action for authz policies

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-32?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200563#comment-16200563
 ] 

ASF GitHub Bot commented on NIFIREG-32:
---

GitHub user kevdoran opened a pull request:

https://github.com/apache/nifi-registry/pull/18

NIFIREG-32: Add delete as available action for access policy management.

- Add 'delete' as available action for authorization access policies, which 
are (resource, action) pairs.
- Add unit tests for AuthorizationService
- Add BadRequestExceptionMapper.
- Move StandardManagedAuthorizer from nifi-registry-security-api-impl to 
nifi-registry-framework.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevdoran/nifi-registry NIFIREG-32

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/18.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #18


commit 68c9936c02b6fb3c5fc7fa897524831d0a902811
Author: Kevin Doran 
Date:   2017-10-11T13:44:45Z

NIFIREG-32: Add delete as available action for access policy management.

- Add 'delete' as available action for authorization access policies, which 
are (resource, action) pairs.
- Add unit tests for authorization classes.
- Add BadRequestExceptionMapper.
- Move StandardManagedAuthorizer from nifi-registry-security-api-impl to 
nifi-registry-framework.




> NiFi Registry add delete action as separate action for authz policies
> -
>
> Key: NIFIREG-32
> URL: https://issues.apache.org/jira/browse/NIFIREG-32
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>
> The security model for NiFi Registry allows setting access policies on 
> (resource, action) pairs.
> Currently, available actions are "read", "write".
> This ticket is to add "delete" as an action, distinct from "write"
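The described change — treating "delete" as an action distinct from "write" for (resource, action) policy pairs — can be pictured with a short enum sketch (hypothetical class and method names for illustration, not necessarily the actual nifi-registry API):

```java
// Hypothetical sketch: "delete" as a distinct action alongside
// "read" and "write" for (resource, action) access policies.
public enum RequestAction {
    READ("read"),
    WRITE("write"),
    DELETE("delete");

    private final String value;

    RequestAction(String value) {
        this.value = value;
    }

    @Override
    public String toString() {
        return value;
    }

    // Parse the action string from a policy definition.
    public static RequestAction fromValue(String value) {
        for (RequestAction action : values()) {
            if (action.value.equals(value)) {
                return action;
            }
        }
        throw new IllegalArgumentException("Unknown action: " + value);
    }

    public static void main(String[] args) {
        System.out.println(fromValue("delete"));
        // prints: delete
    }
}
```

With a separate DELETE value, an authorizer can grant a user write access to a resource (create/update) without implicitly granting the ability to delete it.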



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Created] (NIFI-4478) ignite version should not be mentioned in the root pom.xml

2017-10-11 Thread JIRA
Sébastien Bouchex Bellomié created NIFI-4478:


 Summary: ignite version should not be mentioned in the root 
pom.xml
 Key: NIFI-4478
 URL: https://issues.apache.org/jira/browse/NIFI-4478
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.5.0
 Environment: All
Reporter: Sébastien Bouchex Bellomié
Priority: Blocker


Root pom.xml is mentioning org.apache.ignite 1.6.0. This prevents using a 
higher version (2.2.0 for example) in a custom processor.

org.apache.ignite should only be mentioned in the processor using this library.
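The requested per-bundle override can be pictured with a hypothetical pom.xml fragment (the ignite-core coordinates are the usual Ignite artifact; the surrounding bundle pom is assumed):

```xml
<!-- Hypothetical bundle-level pom.xml fragment: pinning Ignite locally
     instead of inheriting the version pinned in the root pom -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.ignite</groupId>
      <artifactId>ignite-core</artifactId>
      <version>2.2.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Declaring the version in the bundle's own dependencyManagement keeps the choice local to the processors that actually use the library, which is the same approach NIFI-4444 proposes for the Jersey artifacts.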



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-3409) Batch users/groups import - LDAP

2017-10-11 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman resolved NIFI-3409.
---
Resolution: Won't Fix

NIFI-4059 implements a User Group Provider that syncs with a Directory 
Server. Given this capability, this issue is OBE. The LDAP User Group Provider 
will continue staying in sync based on a configured interval.

> Batch users/groups import - LDAP
> 
>
> Key: NIFI-3409
> URL: https://issues.apache.org/jira/browse/NIFI-3409
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core Framework, Core UI
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> Creating the sub task to answer:
> {quote}
> Batch user import
> * Whether the users are providing client certificates, LDAP credentials, or 
> Kerberos tickets to authenticate, the canonical source of identity is still 
> managed by NiFi. I propose a mechanism to quickly define multiple users in 
> the system (without affording any policy assignments). Here I am looking for 
> substantial community input on the most common/desired use cases, but my 
> initial thoughts are:
> ** LDAP-specific
> *** A manager DN and password (similar to those necessary for LDAP 
> authentication) are used to authenticate the admin/user manager, and then an 
> LDAP query string (i.e. {{ou=users,dc=nifi,dc=apache,dc=org}}) is provided and 
> the dialog displays/API returns a list of users/groups matching the query. The 
> admin can then select which to import to NiFi and confirm. 
> {quote}
> In particular, the initial implementation would be to add a feature allowing 
> users and groups to be synced with LDAP based on additional parameters given in 
> the login identity provider configuration file and custom filters provided by 
> the user through the UI.
> It is not foreseen to delete users/groups that exist in NiFi but are not 
> retrieved from LDAP; it would only create/update users/groups based on what is 
> in the LDAP server.
> The feature would be exposed through a new REST API endpoint. In case another 
> identity provider is configured (not LDAP), an unsupported operation 
> exception would be returned for the moment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp pull request #146: Archive merge

2017-10-11 Thread minifirocks
Github user minifirocks commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/146#discussion_r144026084
  
--- Diff: extensions/http-curl/CMakeLists.txt ---
@@ -24,7 +24,7 @@ find_package(CURL REQUIRED)
 set(CMAKE_EXE_LINKER_FLAGS "-Wl,--export-all-symbols")
 set(CMAKE_SHARED_LINKER_FLAGS "-Wl,--export-symbols")
 
-include_directories(../../libminifi/include ../../libminifi/include/c2  
../../libminifi/include/c2/protocols/  ../../libminifi/include/core/state 
./libminifi/include/core/statemanagement/metrics  
../../libminifi/include/core/yaml  ../../libminifi/include/core  
../../thirdparty/spdlog-20170710/include ../../thirdparty/concurrentqueue 
../../thirdparty/yaml-cpp-yaml-cpp-0.5.3/include 
../../thirdparty/civetweb-1.9.1/include ../../thirdparty/jsoncpp/include 
../../thirdparty/leveldb-1.18/include ../../thirdparty/)
+include_directories(../../libminifi/include ../../libminifi/include/c2  
../../libminifi/include/c2/protocols/  ../../libminifi/include/core/state 
./libminifi/include/core/statemanagement/metrics  
../../libminifi/include/core/yaml  ../../libminifi/include/core  
../../thirdparty/spdlog-20170710/include ../../thirdparty/concurrentqueue 
../../thirdparty/yaml-cpp-yaml-cpp-0.5.3/include 
../../thirdparty/civetweb-1.9.1/include ../../thirdparty/jsoncpp/include 
../../thirdparty/leveldb-1.18/include 
../../thirdparty/libarchive-3.3.2/libarchive ../../thirdparty/)
--- End diff --

It is FlowConfiguration.h which includes MergeContent.h.



---


[GitHub] nifi-minifi-cpp pull request #146: Archive merge

2017-10-11 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/146#discussion_r144019540
  
--- Diff: extensions/http-curl/CMakeLists.txt ---
@@ -24,7 +24,7 @@ find_package(CURL REQUIRED)
 set(CMAKE_EXE_LINKER_FLAGS "-Wl,--export-all-symbols")
 set(CMAKE_SHARED_LINKER_FLAGS "-Wl,--export-symbols")
 
-include_directories(../../libminifi/include ../../libminifi/include/c2  
../../libminifi/include/c2/protocols/  ../../libminifi/include/core/state 
./libminifi/include/core/statemanagement/metrics  
../../libminifi/include/core/yaml  ../../libminifi/include/core  
../../thirdparty/spdlog-20170710/include ../../thirdparty/concurrentqueue 
../../thirdparty/yaml-cpp-yaml-cpp-0.5.3/include 
../../thirdparty/civetweb-1.9.1/include ../../thirdparty/jsoncpp/include 
../../thirdparty/leveldb-1.18/include ../../thirdparty/)
+include_directories(../../libminifi/include ../../libminifi/include/c2  
../../libminifi/include/c2/protocols/  ../../libminifi/include/core/state 
./libminifi/include/core/statemanagement/metrics  
../../libminifi/include/core/yaml  ../../libminifi/include/core  
../../thirdparty/spdlog-20170710/include ../../thirdparty/concurrentqueue 
../../thirdparty/yaml-cpp-yaml-cpp-0.5.3/include 
../../thirdparty/civetweb-1.9.1/include ../../thirdparty/jsoncpp/include 
../../thirdparty/leveldb-1.18/include 
../../thirdparty/libarchive-3.3.2/libarchive ../../thirdparty/)
--- End diff --

Oh, that's far too generic of an include_directories statement; that's why. 
We shouldn't be cross-compiling MergeContent for curl. 


---


[jira] [Created] (NIFIREG-32) NiFi Registry add delete action as separate action for authz policies

2017-10-11 Thread Kevin Doran (JIRA)
Kevin Doran created NIFIREG-32:
--

 Summary: NiFi Registry add delete action as separate action for 
authz policies
 Key: NIFIREG-32
 URL: https://issues.apache.org/jira/browse/NIFIREG-32
 Project: NiFi Registry
  Issue Type: Improvement
Reporter: Kevin Doran
Assignee: Kevin Doran


The security model for NiFi Registry allows setting access policies on 
(resource, action) pairs.

Currently, available actions are "read", "write".

This ticket is to add "delete" as an action, distinct from "write"




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4457) "Maximum-value" not increasing when "initial.maxvalue" is set and "Maximum-value column" name is different from "id"

2017-10-11 Thread meh (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200121#comment-16200121
 ] 

meh commented on NIFI-4457:
---

[~ijokarumawak] I cannot alter the database and add an id column ...; also, when 
I changed the max-value column from "tweet_id" to "date", the problem still 
happened and now the "date" max value is not changing.
I think it is a bug :|

> "Maximum-value" not increasing when "initial.maxvalue" is set and 
> "Maximum-value column" name is different from "id" 
> -
>
> Key: NIFI-4457
> URL: https://issues.apache.org/jira/browse/NIFI-4457
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0
> Environment: windows 10
>Reporter: meh
> Attachments: Picture1.png, Picture2.png, subquery.csv
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> when the "Maximum-value column" name is "id" there is no problem; when I add 
> the "initial.maxvalue.id" property in the "QueryDatabaseTable" processor, it 
> works well and the max value increases on every run.
> !Picture1.png|thumbnail!
> but...
> when the "Maximum-value column" name is different from "id" (such as 
> "tweet_id"), after the processor's initial run, only the given 
> "initial.maxvalue.id" is saved and the same value is repeated for every run.
> !Picture2.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4465) ConvertExcelToCSV Data Formatting and Delimiters

2017-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199859#comment-16199859
 ] 

ASF GitHub Bot commented on NIFI-4465:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2194#discussion_r143920555
  
--- Diff: 
nifi-nar-bundles/nifi-poi-bundle/nifi-poi-processors/src/test/java/org/apache/nifi/processors/poi/ConvertExcelToCSVProcessorTest.java
 ---
@@ -42,16 +42,6 @@ public void init() {
 }
 
 @Test
-public void testColToIndex() {
-assertEquals(Integer.valueOf(0), 
ConvertExcelToCSVProcessor.columnToIndex("A"));
-assertEquals(Integer.valueOf(1), 
ConvertExcelToCSVProcessor.columnToIndex("B"));
-assertEquals(Integer.valueOf(25), 
ConvertExcelToCSVProcessor.columnToIndex("Z"));
-assertEquals(Integer.valueOf(29), 
ConvertExcelToCSVProcessor.columnToIndex("AD"));
-assertEquals(Integer.valueOf(239), 
ConvertExcelToCSVProcessor.columnToIndex("IF"));
-assertEquals(Integer.valueOf(16383), 
ConvertExcelToCSVProcessor.columnToIndex("XFD"));
--- End diff --

Confirmed Checkstyle has no issue.


> ConvertExcelToCSV Data Formatting and Delimiters
> 
>
> Key: NIFI-4465
> URL: https://issues.apache.org/jira/browse/NIFI-4465
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.5.0
>
>
> The ConvertExcelToCSV Processor does not output cell values using the 
> formatting set in Excel.
> There are also no delimiter options available for column/record delimiting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

