[jira] [Commented] (NIFI-3518) Create a Morphlines processor

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368461#comment-16368461
 ] 

ASF GitHub Bot commented on NIFI-3518:
--

Github user binhnv commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2028#discussion_r168942465
  
--- Diff: 
nifi-nar-bundles/nifi-morphlines-bundle/nifi-morphlines-processors/src/main/java/org/apache/nifi/processors/morphlines/ExecuteMorphline.java
 ---
@@ -0,0 +1,253 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.morphlines;
+
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.Restricted;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.StreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.StreamUtils;
+import org.kitesdk.morphline.api.Command;
+import org.kitesdk.morphline.api.MorphlineContext;
+import org.kitesdk.morphline.api.Record;
+import org.kitesdk.morphline.base.Fields;
+import org.kitesdk.morphline.base.Compiler;
+import org.kitesdk.morphline.base.Notifications;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableSet;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.List;
+import java.util.Set;
+import java.util.Iterator;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.concurrent.atomic.AtomicLong;
+
+@Tags({"kitesdk", "morphlines", "ETL", "HDFS", "avro", "Solr", "HBase"})
+@CapabilityDescription("Executes the Morphlines (http://kitesdk.org/docs/1.1.0/morphlines/) framework, which provides an in-memory container of transformation "
++ "commands in order to perform tasks such as loading, parsing, transforming, or otherwise processing a single record.")
+@DynamicProperty(name = "Relationship Name", value = "A Regular Expression", supportsExpressionLanguage = true, description = "Adds the dynamic property key and value "
++ "as a key-value pair to the Morphlines content.")
+@Restricted("Provides operator the ability to read/write to any file that NiFi has access to.")
+
+public class ExecuteMorphline extends AbstractProcessor {
+    public static final PropertyDescriptor MORPHLINES_ID = new PropertyDescriptor
+            .Builder().name("Morphlines ID")
+            .description("Identifier of the morphlines context")
+            .required(true)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .expressionLanguageSupported(true)
+            .build();
+
+    public static final PropertyDescriptor MORPHLINES_FILE = new PropertyDescriptor
+            .Builder().name("Morphlines File")
+            .description("File for the morphlines context")
+            .required(true)
+            .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
+            .expressionLanguageSupported(true)
+            .build();
+
+    public static final PropertyDescriptor MORPHLINES_OUTPUT_FIELD = new PropertyDescriptor
+            .Builder().name("Morphlines output field")
+            .description("Field name of output in Morphlines. Default is '_attachment_body'.")
+            .required(false)
+            .defaultValue("_attachment_body")
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .build();

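For context, the "Morphlines File" and "Morphlines ID" properties above point into a Kite Morphlines HOCON configuration. A minimal sketch of such a file (the command choices and id are illustrative, not part of this PR) might look like:

```hocon
morphlines : [
  {
    # Matches the processor's "Morphlines ID" property
    id : morphline1
    importCommands : ["org.kitesdk.**"]
    commands : [
      # Read each line of the FlowFile content as a record
      { readLine { charset : UTF-8 } }
      # Log the whole record; a real pipeline would parse/transform here
      { logInfo { format : "record: {}", args : ["@{}"] } }
    ]
  }
]
```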
[jira] [Commented] (NIFI-3518) Create a Morphlines processor

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368459#comment-16368459
 ] 

ASF GitHub Bot commented on NIFI-3518:
--

Github user binhnv commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2028#discussion_r168942406
  
--- Diff: 
nifi-nar-bundles/nifi-morphlines-bundle/nifi-morphlines-processors/src/main/java/org/apache/nifi/processors/morphlines/ExecuteMorphline.java
 ---

[jira] [Commented] (NIFI-3518) Create a Morphlines processor

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368457#comment-16368457
 ] 

ASF GitHub Bot commented on NIFI-3518:
--

Github user binhnv commented on the issue:

https://github.com/apache/nifi/pull/2028
  
@WilliamNouet there is already a `nifi-kite-bundle` which has `kite`'s 
dependencies; does it make sense to move this processor to that bundle?


> Create a Morphlines processor
> -
>
> Key: NIFI-3518
> URL: https://issues.apache.org/jira/browse/NIFI-3518
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: William Nouet
>Priority: Minor
> Attachments: NIFI-3518-versionupdates.patch
>
>
> Create a dedicated processor to run Morphlines transformations 
> (http://kitesdk.org/docs/1.1.0/morphlines/morphlines-reference-guide.html) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368391#comment-16368391
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2478
  
I'll try to take a look this weekend.


> NIFI-4833 Add ScanHBase processor
> -
>
> Key: NIFI-4833
> URL: https://issues.apache.org/jira/browse/NIFI-4833
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
>
> Add ScanHBase (new) processor to retrieve records from HBase tables.
> Today there are GetHBase and FetchHBaseRow. GetHBase can pull an entire table or 
> only new rows after the processor has started; it also must be scheduled and doesn't 
> support incoming . FetchHBaseRow can pull rows with known rowkeys only.
> This processor could provide functionality similar to what can be achieved with 
> the hbase shell, by defining the following properties:
> -scan based on range of row key IDs 
> -scan based on range of time stamps
> -limit number of records pulled
> -use filters
> -reverse rows
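The scan capabilities listed above (rowkey range, timestamp range, result limit, reverse order) can be sketched as a toy, self-contained in-memory model. All names here are illustrative; the actual processor would use the HBase client Scan API rather than this simplified code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

public class ScanSketch {
    // A cell: one (timestamp, value) pair per rowkey, for simplicity.
    record Cell(long timestamp, String value) {}

    /**
     * Toy scan: returns rowkeys with startRow <= key < stopRow whose timestamp
     * falls in [minTs, maxTs), at most 'limit' results, optionally reversed.
     */
    static List<String> scan(NavigableMap<String, Cell> table,
                             String startRow, String stopRow,
                             long minTs, long maxTs,
                             int limit, boolean reversed) {
        List<String> keys = new ArrayList<>();
        for (var e : table.subMap(startRow, true, stopRow, false).entrySet()) {
            long ts = e.getValue().timestamp();
            if (ts >= minTs && ts < maxTs) {
                keys.add(e.getKey());
            }
        }
        if (reversed) {
            Collections.reverse(keys); // reverse scans emit descending rowkeys
        }
        return keys.size() > limit ? keys.subList(0, limit) : keys;
    }

    public static void main(String[] args) {
        NavigableMap<String, Cell> table = new TreeMap<>();
        table.put("row1", new Cell(100L, "a"));
        table.put("row2", new Cell(200L, "b"));
        table.put("row3", new Cell(300L, "c"));
        // Keys in [row1, row3), timestamps in [100, 300), reversed, at most 2:
        System.out.println(scan(table, "row1", "row3", 100L, 300L, 2, true));
        // prints [row2, row1]
    }
}
```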







[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368385#comment-16368385
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user bdesert commented on the issue:

https://github.com/apache/nifi/pull/2478
  
@MikeThomsen, are you available to re-review this PR? I have addressed your 
comment regarding the branch and the rest (except for labels, which can be added 
later in bulk for all the HBase-related processors).






[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368384#comment-16368384
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

GitHub user bdesert opened a pull request:

https://github.com/apache/nifi/pull/2478

NIFI-4833 Add scanHBase Processor

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [x] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bdesert/nifi NIFI-4833-Add-ScanHBase-processor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2478.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2478


commit 39bd6fb5d02eb7dca63830967823a4bb48c5712c
Author: Ed 
Date:   2018-02-17T21:26:04Z

Add ScanHBase Processor

New processor for scanning HBase records based on various params like range 
of rowkeys, range of timestamps. Supports result limit and reverse scan.

commit d2f5410be14a77f64e7ca5593e6c908620a8da58
Author: Ed 
Date:   2018-02-17T21:27:18Z

Adds Atlas Support for ScanHBase processor

Adds Atlas Support for ScanHBase processor








[jira] [Commented] (NIFIREG-146) REST API Documentation Improvements

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368362#comment-16368362
 ] 

ASF GitHub Bot commented on NIFIREG-146:


GitHub user kevdoran opened a pull request:

https://github.com/apache/nifi-registry/pull/103

NIFIREG-146 REST API Documentation improvements

Improves the REST API documentation and swagger spec, including:

- Corrects handling of collection response types in REST API docs
- Adds required access policy information in REST API docs
- Adds missing required=true tags to swagger spec
- Adds missing readOnly=true tags to swagger spec
- Adds security definitions to swagger spec
- Corrects VersionedConnection.zIndex field name in swagger spec

Functionality changes:

- Adds authorization check to the getFlowDiff endpoint
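The kinds of swagger-spec fixes listed (required/readOnly flags, security definitions) look roughly like the following in a Swagger 2.0 document; this is a hypothetical fragment for illustration, not the actual NiFi Registry spec:

```yaml
swagger: "2.0"
info: { title: "example", version: "0.2.0" }
paths: {}
securityDefinitions:
  # e.g. token-based auth carried in the Authorization header
  tokenAuth:
    type: apiKey
    name: Authorization
    in: header
definitions:
  VersionedConnection:
    type: object
    required:            # fields the client must supply
      - identifier
    properties:
      identifier:
        type: string
        readOnly: true   # set by the server, never by the client
      zIndex:            # corrected field name
        type: integer
```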

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevdoran/nifi-registry NIFIREG-146

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/103.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #103


commit 44b32e1abd121dd9e5e32220bfebb13adfba80be
Author: Kevin Doran 
Date:   2018-02-17T20:56:07Z

NIFIREG-146 REST API Documentation improvements

Improves the REST API documentation and swagger spec, including:

- Corrects handling of collection response types in REST API docs
- Adds required access policy information in REST API docs
- Adds missing required=true tags to swagger spec
- Adds missing readOnly=true tags to swagger spec
- Adds security definitions to swagger spec
- Corrects VersionedConnection.zIndex field name in swagger spec

Functionality changes:

- Adds authorization check to the getFlowDiff endpoint




> REST API Documentation Improvements
> ---
>
> Key: NIFIREG-146
> URL: https://issues.apache.org/jira/browse/NIFIREG-146
> Project: NiFi Registry
>  Issue Type: Improvement
>Affects Versions: 0.1.0
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Major
>  Labels: swagger
> Fix For: 0.2.0
>
>
> The self-hosted REST API documentation that is generated using templated HTML 
> files and the swagger.json needs the following improvements:
>  * Collection response types are not being populated into the template 
> correctly
>  * Endpoints that require authorization to an access policy should be marked 
> as such
> Additionally, the following improvements can be made to the Swagger output by 
> the build:
>  * Mark fields required in a few places where that is missing
>  * Mark fields readOnly in a few places where that is missing
>  * Proper security definitions and Authorization mappings
>  * Correct VersionedConnection.zIndex field name in ApiOperation annotation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #103: NIFIREG-146 REST API Documentation improvem...

2018-02-17 Thread kevdoran
GitHub user kevdoran opened a pull request:

https://github.com/apache/nifi-registry/pull/103

NIFIREG-146 REST API Documentation improvements

Improves the REST API documentation and swagger spec, including:

- Corrects handling of collection response types in REST API docs
- Adds required access policy information in REST API docs
- Adds missing required=true tags to swagger spec
- Adds missing readOnly=true tags to swagger spec
- Adds security definitions to swagger spec
- Corrects VersionedConnection.zIndex field name in swagger spec

Functionality changes:

- Adds authorization check to the getFlowDiff endpoint

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevdoran/nifi-registry NIFIREG-146

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/103.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #103


commit 44b32e1abd121dd9e5e32220bfebb13adfba80be
Author: Kevin Doran 
Date:   2018-02-17T20:56:07Z

NIFIREG-146 REST API Documentation improvements

Improves the REST API documentation and swagger spec, including:

- Corrects handling of collection response types in REST API docs
- Adds required access policy information in REST API docs
- Adds missing required=true tags to swagger spec
- Adds missing readOnly=true tags to swagger spec
- Adds security definitions to swagger spec
- Corrects VersionedConnection.zIndex field name in swagger spec

Functionality changes:

- Adds authorization check to the getFlowDiff endpoint




---


[jira] [Updated] (NIFIREG-146) REST API Documentation Improvements

2018-02-17 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFIREG-146:

Description: 
The self-hosted REST API documentation that is generated using templated HTML 
files and the swagger.json needs the following improvements:
 * Collection response types are not being populated into the template correctly
 * Endpoints that require authorization to an access policy should be marked as 
such

Additionally, the following improvements can be made to the Swagger output by 
the build:
 * Mark fields required in a few places where that is missing
 * Mark fields readOnly in a few places where that is missing
 * Proper security definitions and Authorization mappings
 * Correct VersionedConnection.zIndex field name in ApiOperation annotation

  was:
The self-hosted REST API documentation that is generated using templated HTML 
files and the swagger.json needs the following improvements:
 * Collection response types are not being populated into the template correctly
 * Endpoints that require authorization to an access policy should be marked as 
such

Additionally, the following improvements can be made to the Swagger output by 
the build:
 * Proper security definitions and Authorization mappings
 * Correct VersionedConnection.zIndex field name in ApiOperation annotation


> REST API Documentation Improvements
> ---
>
> Key: NIFIREG-146
> URL: https://issues.apache.org/jira/browse/NIFIREG-146
> Project: NiFi Registry
>  Issue Type: Improvement
>Affects Versions: 0.1.0
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Major
>  Labels: swagger
> Fix For: 0.2.0
>
>
> The self-hosted REST API documentation that is generated using templated HTML 
> files and the swagger.json needs the following improvements:
>  * Collection response types are not being populated into the template 
> correctly
>  * Endpoints that require authorization to an access policy should be marked 
> as such
> Additionally, the following improvements can be made to the Swagger output by 
> the build:
>  * Mark fields required in a few places where that is missing
>  * Mark fields readOnly in a few places where that is missing
>  * Proper security definitions and Authorization mappings
>  * Correct VersionedConnection.zIndex field name in ApiOperation annotation





[jira] [Created] (NIFIREG-146) REST API Documentation Improvements

2018-02-17 Thread Kevin Doran (JIRA)
Kevin Doran created NIFIREG-146:
---

 Summary: REST API Documentation Improvements
 Key: NIFIREG-146
 URL: https://issues.apache.org/jira/browse/NIFIREG-146
 Project: NiFi Registry
  Issue Type: Improvement
Affects Versions: 0.1.0
Reporter: Kevin Doran
Assignee: Kevin Doran
 Fix For: 0.2.0


The self-hosted REST API documentation that is generated using templated HTML 
files and the swagger.json needs the following improvements:
 * Collection response types are not being populated into the template correctly
 * Endpoints that require authorization to an access policy should be marked as 
such

Additionally, the following improvements can be made to the Swagger output by 
the build:
 * Proper security definitions and Authorization mappings
 * Correct VersionedConnection.zIndex field name in ApiOperation annotation





[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368349#comment-16368349
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user bdesert closed the pull request at:

https://github.com/apache/nifi/pull/2446


> NIFI-4833 Add ScanHBase processor
> -
>
> Key: NIFI-4833
> URL: https://issues.apache.org/jira/browse/NIFI-4833
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
>
> Add ScanHBase (new) processor to retrieve records from HBase tables.
> Today there are GetHBase and FetchHBaseRow. GetHBase can pull an entire table or 
> only new rows after the processor started; it also must be scheduled and doesn't 
> support incoming connections. FetchHBaseRow can pull rows with known rowkeys only.
> This processor could provide functionality similar to what can be reached by 
> using the hbase shell, defining the following properties:
> - scan based on a range of row key IDs
> - scan based on a range of timestamps
> - limit the number of records pulled
> - use filters
> - reverse rows





[GitHub] nifi pull request #2446: NIFI-4833 Add ScanHBase processor

2018-02-17 Thread bdesert
Github user bdesert closed the pull request at:

https://github.com/apache/nifi/pull/2446


---


[jira] [Assigned] (NIFI-4835) Incorrect return type specified for registries/{registry-id}/buckets/{bucket-id}/flows

2018-02-17 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran reassigned NIFI-4835:
-

Assignee: Kevin Doran

> Incorrect return type specified for 
> registries/{registry-id}/buckets/{bucket-id}/flows
> --
>
> Key: NIFI-4835
> URL: https://issues.apache.org/jira/browse/NIFI-4835
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Flow Versioning
>Affects Versions: 1.5.0
>Reporter: Charlie Meyer
>Assignee: Kevin Doran
>Priority: Major
>
> On 
> [https://github.com/apache/nifi/blob/b6117743d4c1c1a37a16ba746b9edbbdd276d69f/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/FlowResource.java#L1376]
> {{response = BucketsEntity.class}}
> should likely be
> {{response = VersionedFlowsEntity.class}}
>  
> same copy/paste error on line 1412 also for versions, although that should be 
> {{VersionedFlowSnapshotMetadataSetEntity.class}}
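The reported fix amounts to swapping the entity class given in the `response` attribute of the `@ApiOperation` annotation, which drives the generated REST API docs. A minimal, self-contained sketch of the idea — the annotation and entity classes below are stubs for illustration, not the real Swagger or NiFi types:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stub of Swagger's @ApiOperation for illustration only
@Retention(RetentionPolicy.RUNTIME)
@interface ApiOperation {
    Class<?> response();
}

// Placeholder entity classes standing in for the NiFi API types
class BucketsEntity {}
class VersionedFlowsEntity {}

public class ResponseTypeDemo {
    // The fix: the flows endpoint should advertise VersionedFlowsEntity,
    // not the copy/pasted BucketsEntity
    @ApiOperation(response = VersionedFlowsEntity.class)
    static Object getFlows() { return null; }

    public static void main(String[] args) throws Exception {
        Class<?> advertised = ResponseTypeDemo.class
                .getDeclaredMethod("getFlows")
                .getAnnotation(ApiOperation.class)
                .response();
        System.out.println(advertised.getSimpleName()); // prints VersionedFlowsEntity
    }
}
```

Since the swagger generator reads this attribute reflectively, a wrong class here propagates directly into the published documentation without any compile-time error, which is why the copy/paste slip went unnoticed.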





[jira] [Commented] (NIFI-4538) Add Process Group information to Search results

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368320#comment-16368320
 ] 

ASF GitHub Bot commented on NIFI-4538:
--

Github user yuri1969 commented on the issue:

https://github.com/apache/nifi/pull/2364
  
@mcgilman Implementation of the nearest versioned group is done.


> Add Process Group information to Search results
> ---
>
> Key: NIFI-4538
> URL: https://issues.apache.org/jira/browse/NIFI-4538
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Matt Burgess
>Assignee: Yuri
>Priority: Major
> Attachments: Screenshot from 2017-12-23 21-08-45.png, Screenshot from 
> 2017-12-23 21-42-24.png
>
>
> When querying for components in the Search bar, no Process Group (PG) 
> information is displayed. When copies of PGs are made on the canvas, the 
> search results can be hard to navigate, as you may jump into a different PG 
> than what you're looking for.
> I propose adding (conditionally, based on user permissions) the immediate 
> parent PG name and/or ID, as well as the top-level PG. In this case I mean 
> top-level being the highest parent PG except root, unless the component's 
> immediate parent PG is root, in which case it wouldn't need to be displayed 
> (or could be displayed as the root PG, albeit a duplicate of the immediate).





[GitHub] nifi issue #2364: NIFI-4538 - Add Process Group information to...

2018-02-17 Thread yuri1969
Github user yuri1969 commented on the issue:

https://github.com/apache/nifi/pull/2364
  
@mcgilman Implementation of the nearest versioned group is done.


---


[jira] [Created] (NIFI-4890) OIDC Token Refresh is not done correctly

2018-02-17 Thread Federico Michele Facca (JIRA)
Federico Michele Facca created NIFI-4890:


 Summary: OIDC Token Refresh is not done correctly
 Key: NIFI-4890
 URL: https://issues.apache.org/jira/browse/NIFI-4890
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Affects Versions: 1.5.0
 Environment: Environment:
Browser: Chrome / Firefox 
Configuration of NiFi: 
- SSL certificate for the server (no client auth) 
- OIDC configuration including end_session_endpoint (see the link 
https://auth.s.orchestracities.com/auth/realms/default/.well-known/openid-configuration)
 
Reporter: Federico Michele Facca


It looks like the NiFi UI is not refreshing the OIDC token in the background; 
because of that, when the token expires, it tells you that your session has 
expired and you need to refresh the page to get a new token.





[jira] [Created] (NIFI-4889) Logout not working properly with OIDC

2018-02-17 Thread Federico Michele Facca (JIRA)
Federico Michele Facca created NIFI-4889:


 Summary: Logout not working properly with OIDC
 Key: NIFI-4889
 URL: https://issues.apache.org/jira/browse/NIFI-4889
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Affects Versions: 1.5.0
 Environment: Browser: Chrome / Firefox
Configuration of NiFi:
- SSL certificate for the server (no client auth)
- OIDC configuration including end_session_endpoint (see the link 
https://auth.s.orchestracities.com/auth/realms/default/.well-known/openid-configuration)

Reporter: Federico Michele Facca


Clicking on logout, I would expect to be logged out and redirected to the auth 
page. But given that the session is not closed on the OAuth provider, I get 
logged in again.

I suppose the solution would be to invoke the end_session_endpoint provided in 
the OpenID discovery configuration.
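The suggested RP-initiated logout boils down to redirecting the browser to the provider's end_session_endpoint instead of only clearing the local session. A hedged sketch of building that redirect URL — the parameter names follow the OIDC session management draft, and the endpoint, token, and redirect URI below are illustrative, not NiFi's actual values:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class OidcLogout {
    // Build the provider logout URL from the discovery document's
    // end_session_endpoint, passing the id_token and a post-logout landing page
    static String logoutUrl(String endSessionEndpoint, String idTokenHint,
                            String postLogoutRedirectUri) throws UnsupportedEncodingException {
        return endSessionEndpoint
                + "?id_token_hint=" + URLEncoder.encode(idTokenHint, "UTF-8")
                + "&post_logout_redirect_uri=" + URLEncoder.encode(postLogoutRedirectUri, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(logoutUrl(
                "https://idp.example.com/realms/default/protocol/openid-connect/logout",
                "eyJhbGci...",                    // id_token from login, illustrative
                "https://nifi.example.com/nifi"));
    }
}
```

Redirecting through this URL terminates the session on the provider side, so the next visit actually prompts for credentials rather than silently logging the user back in.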





[jira] [Commented] (NIFI-4610) Hanging Processor: ExtractHL7Attributes

2018-02-17 Thread Sivaprasanna Sethuraman (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368280#comment-16368280
 ] 

Sivaprasanna Sethuraman commented on NIFI-4610:
---

Eric,

Can you please share the sample data that you used for this pipeline? It would 
be helpful for reproducing the error.

> Hanging Processor: ExtractHL7Attributes
> ---
>
> Key: NIFI-4610
> URL: https://issues.apache.org/jira/browse/NIFI-4610
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.4.0
> Environment: Hanging Processor: ExtractHL7Attributes
> NiFi Version 1.4.0
> Tag nifi-1.4.0-RC2
> Build Date/Time 09/28/2017 14:58:26 CDT
> Name Windows Server 2012 R2
> Version 6.3
> Architecture x86
>Reporter: Eric Thompson
>Priority: Blocker
> Attachments: Display.png
>
>
> I am getting an error of 'Administratively Yielded for 1 sec due to 
> processing failure' and the files that are triggering the error in the 
> processor are not getting routed to the failure processor but seem to loop 
> and stop up the first processor.  Any ideas on how to resolve?
>  
> I would expect the files that are loaded that cause the error to get 
> routed, but instead it appears as though it eventually just stops accepting 
> new files altogether since the queue
> Sample Error in nifi-app.log:
> 2017-11-09 06:03:38,835 ERROR [Timer-Driven Process Thread-7] 
> o.a.n.p.hl7.ExtractHL7Attributes 
> ExtractHL7Attributes[id=015f105a-8bc0-1e9a-d683-34dd30c9d744] 
> ExtractHL7Attributes[id=015f105a-8bc0-1e9a-d683-34dd30c9d744] failed to 
> process session due to java.lang.NullPointerException: {}
> java.lang.NullPointerException: null
> at 
> org.apache.nifi.processors.hl7.ExtractHL7Attributes.getAllFields(ExtractHL7Attributes.java:287)
> at 
> org.apache.nifi.processors.hl7.ExtractHL7Attributes.getAttributes(ExtractHL7Attributes.java:217)
> at 
> org.apache.nifi.processors.hl7.ExtractHL7Attributes.onTrigger(ExtractHL7Attributes.java:199)
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
> at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown
>  Source)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
>  Source)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown 
> Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
> Source)
> at java.lang.Thread.run(Unknown Source)
> 2017-11-09 06:03:38,835 WARN [Timer-Driven Process Thread-7] 
> o.a.n.p.hl7.ExtractHL7Attributes 
> ExtractHL7Attributes[id=015f105a-8bc0-1e9a-d683-34dd30c9d744] Processor 
> Administratively Yielded for 1 sec due to processing failure
> 2017-11-09 06:03:38,835 WARN [Timer-Driven Process Thread-7] 
> o.a.n.c.t.ContinuallyRunProcessorTask Administratively Yielding 
> ExtractHL7Attributes[id=015f105a-8bc0-1e9a-d683-34dd30c9d744] due to uncaught 
> Exception: java.lang.NullPointerException
> 2017-11-09 06:03:38,835 WARN [Timer-Driven Process Thread-7] 
> o.a.n.c.t.ContinuallyRunProcessorTask 
> java.lang.NullPointerException: null
> at 
> org.apache.nifi.processors.hl7.ExtractHL7Attributes.getAllFields(ExtractHL7Attributes.java:287)
> at 
> org.apache.nifi.processors.hl7.ExtractHL7Attributes.getAttributes(ExtractHL7Attributes.java:217)
> at 
> org.apache.nifi.processors.hl7.ExtractHL7Attributes.onTrigger(ExtractHL7Attributes.java:199)
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> at 
> 

[jira] [Closed] (NIFI-4819) Add support to delete blob from Azure Storage container

2018-02-17 Thread zenfenan (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zenfenan closed NIFI-4819.
--
Assignee: zenfenan

> Add support to delete blob from Azure Storage container
> ---
>
> Key: NIFI-4819
> URL: https://issues.apache.org/jira/browse/NIFI-4819
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: zenfenan
>Assignee: zenfenan
>Priority: Major
> Fix For: 1.6.0
>
>
> Implement a delete processor that handles deleting blobs from an Azure Storage 
> container. This should be an extension of the nifi-azure-nar bundle. Currently, 
> the azure bundle's storage processors have support to list, fetch, and put Azure 
> Storage blobs.





[jira] [Commented] (NIFI-4872) NIFI component high resource usage annotation

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368244#comment-16368244
 ] 

ASF GitHub Bot commented on NIFI-4872:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2475
  
Few suggestions regarding existing processors: ExtractText and ReplaceText 
can also be CPU intensive when using some tricky regular expressions. The same 
goes for the grok processors as well as TransformXML (depending on the XSLT). 
It's not true in most cases, but it can be in some situations. I will try to 
continue the review early next week.


> NIFI component high resource usage annotation
> -
>
> Key: NIFI-4872
> URL: https://issues.apache.org/jira/browse/NIFI-4872
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Core UI
>Affects Versions: 1.5.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Critical
>
> NiFi Processors currently have no means to relay whether or not they may 
> be resource intensive. The idea here would be to introduce an 
> Annotation that can be added to Processors that indicate they may cause high 
> memory, disk, CPU, or network usage. For instance, any Processor that reads 
> the FlowFile contents into memory (like many XML Processors for instance) may 
> cause high memory usage. What ultimately determines if there is high 
> memory/disk/cpu/network usage will depend on the FlowFiles being processed. 
> With many of these components in the dataflow, it increases the risk of 
> OutOfMemoryErrors and performance degradation.
> The annotation should support one value from a fixed list of: CPU, Disk, 
> Memory, Network.  It should also allow the developer to provide a custom 
> description of the scenario that the component would fall under the high 
> usage category.  The annotation should be able to be specified multiple 
> times, for as many resources as it has the potential to be high usage.
> By marking components with this new Annotation, we can update the generated 
> Processor documentation to include this fact.
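The annotation described above — one resource value per use, an optional description, and repeatable for multiple resources — can be sketched with a plain repeatable Java annotation. All names here are hypothetical illustrations, not the final NiFi API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Repeatable;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// The fixed list of resources a component can mark as heavily used
enum SystemResource { CPU, DISK, MEMORY, NETWORK }

// Container annotation so the consideration can be specified multiple times
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ResourceConsiderations {
    ResourceConsideration[] value();
}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@Repeatable(ResourceConsiderations.class)
@interface ResourceConsideration {
    SystemResource resource();
    // Optional custom description of the high-usage scenario
    String description() default "";
}

// A processor that reads FlowFile content fully into memory might declare:
@ResourceConsideration(resource = SystemResource.MEMORY,
        description = "Loads the entire FlowFile content into memory")
@ResourceConsideration(resource = SystemResource.CPU)
class ExampleProcessor {}

public class AnnotationDemo {
    public static void main(String[] args) {
        // Documentation generation could read the markers via reflection;
        // getAnnotationsByType resolves the repeatable container automatically
        for (ResourceConsideration rc
                : ExampleProcessor.class.getAnnotationsByType(ResourceConsideration.class)) {
            System.out.println(rc.resource() + " " + rc.description());
        }
    }
}
```

Because the annotations are retained at runtime, the existing documentation generator could pick them up the same way it reads `@CapabilityDescription` and `@Tags` today.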





[GitHub] nifi issue #2475: NIFI-4872 Added annotation for specifying scenarios in whi...

2018-02-17 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2475
  
Few suggestions regarding existing processors: ExtractText and ReplaceText 
can also be CPU intensive when using some tricky regular expressions. Same goes 
for grok processors as well as TransformXML (depends of the XSLT). It's not 
true in most cases but it can be in some situations. Will try to continue the 
review early next week.


---


[GitHub] nifi pull request #2138: NIFI-4371 - add support for query timeout in Hive p...

2018-02-17 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r168924543
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -310,6 +311,15 @@ private void onTrigger(final ProcessContext context, final ProcessSession session
 try (final Connection con = dbcpService.getConnection();
      final Statement st = (flowbased ? con.prepareStatement(selectQuery) : con.createStatement())
 ) {
+try {
+final int queryTimeout = context.getProperty(QUERY_TIMEOUT).evaluateAttributeExpressions(fileToProcess).asInteger();
--- End diff --

Good point. I just pushed a commit to address it.


---


[jira] [Commented] (NIFI-4371) Add support for query timeout in Hive processors

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368236#comment-16368236
 ] 

ASF GitHub Bot commented on NIFI-4371:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r168924543
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -310,6 +311,15 @@ private void onTrigger(final ProcessContext context, final ProcessSession session
 try (final Connection con = dbcpService.getConnection();
      final Statement st = (flowbased ? con.prepareStatement(selectQuery) : con.createStatement())
 ) {
+try {
+final int queryTimeout = context.getProperty(QUERY_TIMEOUT).evaluateAttributeExpressions(fileToProcess).asInteger();
--- End diff --

Good point. I just pushed a commit to address it.


> Add support for query timeout in Hive processors
> 
>
> Key: NIFI-4371
> URL: https://issues.apache.org/jira/browse/NIFI-4371
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Attachments: Screen Shot 2017-09-09 at 4.31.21 PM.png, Screen Shot 
> 2017-09-09 at 6.38.51 PM.png, Screen Shot 2017-09-09 at 6.40.48 PM.png
>
>
> With HIVE-4924 it is possible to set a query timeout when executing a query 
> against Hive (starting with Hive 2.1). Right now, NiFi is built using Hive 
> 1.2.1 and this feature is not available by default (the method is not 
> implemented in the driver). However, if building NiFi with specific profiles 
> this feature can be used.
> The objective is to expose the query timeout parameter in the processor and 
> enable expression language. If the version of the driver does not implement 
> the query timeout, the processor will be in an invalid state (unless expression 
> language is used; in that case, the flow file will be routed to the 
> failure relationship).





[jira] [Commented] (NIFI-4371) Add support for query timeout in Hive processors

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368231#comment-16368231
 ] 

ASF GitHub Bot commented on NIFI-4371:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r168924048
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/AbstractHiveQLProcessor.java
 ---
@@ -75,6 +81,38 @@
 .addValidator(StandardValidators.CHARACTER_SET_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor QUERY_TIMEOUT = new PropertyDescriptor.Builder()
+    .name("hive-query-timeout")
+    .displayName("Query timeout")
+    .description("Sets the number of seconds the driver will wait for a query to execute. "
+        + "A value of 0 means no timeout. NOTE: Non-zero values may not be supported by the driver.")
+    .defaultValue("0")
+    .required(true)
+    .addValidator(StandardValidators.INTEGER_VALIDATOR)
+    .expressionLanguageSupported(true)
+    .build();
+
+@Override
+protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+    final List<ValidationResult> problems = new ArrayList<>(1);
+
+    if (validationContext.getProperty(QUERY_TIMEOUT).isSet()
+        && !validationContext.getProperty(QUERY_TIMEOUT).isExpressionLanguagePresent()
+        && validationContext.getProperty(QUERY_TIMEOUT).asInteger() != 0) {
+        try (HiveStatement stmt = new HiveStatement(null, null, null)) {
+            stmt.setQueryTimeout(0);
--- End diff --

Actually, in versions of the driver that do not implement this method, 
this call will throw an exception no matter what the value is.
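The validation trick under discussion can be shown in isolation: call the optional driver method once at validation time and treat any exception as "unsupported". A hedged, self-contained sketch — `FakeDriverStatement` stands in for `HiveStatement` and is not a real JDBC class:

```java
// Simulates a driver built against an older Hive, where setQueryTimeout
// throws regardless of the value passed in
class FakeDriverStatement {
    void setQueryTimeout(int seconds) {
        throw new UnsupportedOperationException("Method not supported");
    }
}

public class TimeoutProbe {
    // Probe the optional method once; an exception means the driver
    // cannot honor a non-zero query timeout
    static boolean driverSupportsQueryTimeout() {
        try {
            new FakeDriverStatement().setQueryTimeout(0);
            return true;
        } catch (UnsupportedOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(driverSupportsQueryTimeout()); // prints false with this fake driver
    }
}
```

This is why probing with the value 0 is sufficient in `customValidate`: the older drivers reject the call itself, not any particular timeout value.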


> Add support for query timeout in Hive processors
> 
>
> Key: NIFI-4371
> URL: https://issues.apache.org/jira/browse/NIFI-4371
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Attachments: Screen Shot 2017-09-09 at 4.31.21 PM.png, Screen Shot 
> 2017-09-09 at 6.38.51 PM.png, Screen Shot 2017-09-09 at 6.40.48 PM.png
>
>
> With HIVE-4924 it is possible to set a query timeout when executing a query 
> against Hive (starting with Hive 2.1). Right now, NiFi is built using Hive 
> 1.2.1 and this feature is not available by default (the method is not 
> implemented in the driver). However, if building NiFi with specific profiles 
> this feature can be used.
> The objective is to expose the query timeout parameter in the processor and 
> enable expression language. If the version of the driver does not implement 
> the query timeout, the processor will be in an invalid state (unless expression 
> language is used; in that case, the flow file will be routed to the 
> failure relationship).





[GitHub] nifi pull request #2138: NIFI-4371 - add support for query timeout in Hive p...

2018-02-17 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2138#discussion_r168924048
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/AbstractHiveQLProcessor.java
 ---
@@ -75,6 +81,38 @@
 .addValidator(StandardValidators.CHARACTER_SET_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor QUERY_TIMEOUT = new PropertyDescriptor.Builder()
+    .name("hive-query-timeout")
+    .displayName("Query timeout")
+    .description("Sets the number of seconds the driver will wait for a query to execute. "
+        + "A value of 0 means no timeout. NOTE: Non-zero values may not be supported by the driver.")
+    .defaultValue("0")
+    .required(true)
+    .addValidator(StandardValidators.INTEGER_VALIDATOR)
+    .expressionLanguageSupported(true)
+    .build();
+
+@Override
+protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+    final List<ValidationResult> problems = new ArrayList<>(1);
+
+    if (validationContext.getProperty(QUERY_TIMEOUT).isSet()
+        && !validationContext.getProperty(QUERY_TIMEOUT).isExpressionLanguagePresent()
+        && validationContext.getProperty(QUERY_TIMEOUT).asInteger() != 0) {
+        try (HiveStatement stmt = new HiveStatement(null, null, null)) {
+            stmt.setQueryTimeout(0);
--- End diff --

Actually, in versions of the driver that do not implement this method, 
this call will throw an exception no matter what the value is.


---


[jira] [Updated] (NIFI-4836) Allow QueryDatabaseTables to send out batches of flow files while result set is being processed

2018-02-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4836:
-
   Resolution: Fixed
Fix Version/s: 1.6.0
   Status: Resolved  (was: Patch Available)

> Allow QueryDatabaseTables to send out batches of flow files while result set 
> is being processed
> ---
>
> Key: NIFI-4836
> URL: https://issues.apache.org/jira/browse/NIFI-4836
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> Currently QueryDatabaseTable (QDT) will not transfer the outgoing flowfiles 
> to the downstream relationship(s) until the entire result set has been 
> processed (regardless of whether Max Rows Per Flow File is set). This is so 
> the maxvalue.* and fragment.count attributes can be set correctly for each 
> flow file.
> However for very large result sets, the initial fetch can take a long time, 
> and depending on the setting of Max Rows Per FlowFile, there could be a great 
> number of FlowFiles transferred downstream as a large burst at the end of QDT 
> execution.
> It would be nice for the user to be able to choose to have FlowFiles be 
> transferred downstream while the result set is still being processed. This 
> alleviates the "large burst at the end" by replacing it with smaller output 
> batches during processing. The tradeoff will be that if an Output Batch Size 
> is set, then the maxvalue.* and fragment.count attributes will not be set on 
> the outgoing flow files.
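The batching behavior described above can be sketched independently of NiFi: transfer fixed-size batches while still iterating the result set, instead of one large burst at the end. Names and types here are illustrative, not the actual QueryDatabaseTable code:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class BatchEmit {
    // Group rows into output batches as they stream in, so downstream
    // consumers receive work before the full result set is exhausted
    static List<List<Integer>> emitInBatches(Iterator<Integer> resultSet, int outputBatchSize) {
        List<List<Integer>> transferred = new ArrayList<>();
        List<Integer> batch = new ArrayList<>();
        while (resultSet.hasNext()) {
            batch.add(resultSet.next());
            if (batch.size() == outputBatchSize) {
                // In the processor, this is where session.transfer() and
                // commit would release the batch downstream early
                transferred.add(batch);
                batch = new ArrayList<>();
            }
        }
        if (!batch.isEmpty()) {
            transferred.add(batch);  // final partial batch
        }
        return transferred;
    }

    public static void main(String[] args) {
        System.out.println(emitInBatches(List.of(1, 2, 3, 4, 5).iterator(), 2));
        // -> [[1, 2], [3, 4], [5]]
    }
}
```

The tradeoff noted in the ticket follows directly from this shape: totals such as fragment.count and the final maxvalue.* are only known after the loop finishes, so batches released early cannot carry them.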





[jira] [Commented] (NIFI-4836) Allow QueryDatabaseTables to send out batches of flow files while result set is being processed

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368228#comment-16368228
 ] 

ASF GitHub Bot commented on NIFI-4836:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2447


> Allow QueryDatabaseTables to send out batches of flow files while result set 
> is being processed
> ---
>
> Key: NIFI-4836
> URL: https://issues.apache.org/jira/browse/NIFI-4836
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> Currently QueryDatabaseTable (QDT) will not transfer the outgoing flowfiles 
> to the downstream relationship(s) until the entire result set has been 
> processed (regardless of whether Max Rows Per Flow File is set). This is so 
> the maxvalue.* and fragment.count attributes can be set correctly for each 
> flow file.
> However for very large result sets, the initial fetch can take a long time, 
> and depending on the setting of Max Rows Per FlowFile, there could be a great 
> number of FlowFiles transferred downstream as a large burst at the end of QDT 
> execution.
> It would be nice for the user to be able to choose to have FlowFiles be 
> transferred downstream while the result set is still being processed. This 
> alleviates the "large burst at the end" by replacing it with smaller output 
> batches during processing. The tradeoff will be that if an Output Batch Size 
> is set, then the maxvalue.* and fragment.count attributes will not be set on 
> the outgoing flow files.






[jira] [Commented] (NIFI-4836) Allow QueryDatabaseTables to send out batches of flow files while result set is being processed

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368227#comment-16368227
 ] 

ASF GitHub Bot commented on NIFI-4836:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2447
  
Thanks @mattyb149 and @MikeThomsen - ran some tests on large tables and 
observed the expected behavior. +1, merging to master.


> Allow QueryDatabaseTables to send out batches of flow files while result set 
> is being processed
> ---
>
> Key: NIFI-4836
> URL: https://issues.apache.org/jira/browse/NIFI-4836
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently, QueryDatabaseTable (QDT) will not transfer the outgoing FlowFiles 
> to the downstream relationship(s) until the entire result set has been 
> processed (regardless of whether Max Rows Per Flow File is set). This is so 
> the maxvalue.* and fragment.count attributes can be set correctly for each 
> FlowFile.
> However, for very large result sets the initial fetch can take a long time, 
> and depending on the setting of Max Rows Per Flow File, a great number of 
> FlowFiles could be transferred downstream as a large burst at the end of QDT 
> execution.
> It would be nice for the user to be able to choose to have FlowFiles 
> transferred downstream while the result set is still being processed. This 
> alleviates the "large burst at the end" by replacing it with smaller output 
> batches during processing. The tradeoff is that if an Output Batch Size is 
> set, the maxvalue.* and fragment.count attributes will not be set on the 
> outgoing FlowFiles.







[jira] [Commented] (NIFI-4819) Add support to delete blob from Azure Storage container

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368226#comment-16368226
 ] 

ASF GitHub Bot commented on NIFI-4819:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2436


> Add support to delete blob from Azure Storage container
> ---
>
> Key: NIFI-4819
> URL: https://issues.apache.org/jira/browse/NIFI-4819
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: zenfenan
>Priority: Major
> Fix For: 1.6.0
>
>
> Implement a delete processor that handles deleting blobs from an Azure 
> Storage container. This should be an extension of the nifi-azure-nar bundle. 
> Currently, the Azure bundle's storage processors have support to list, 
> fetch, and put Azure Storage blobs.





[jira] [Resolved] (NIFI-4819) Add support to delete blob from Azure Storage container

2018-02-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-4819.
--
   Resolution: Fixed
Fix Version/s: 1.6.0

> Add support to delete blob from Azure Storage container
> ---
>
> Key: NIFI-4819
> URL: https://issues.apache.org/jira/browse/NIFI-4819
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: zenfenan
>Priority: Major
> Fix For: 1.6.0
>
>
> Implement a delete processor that handles deleting blobs from an Azure 
> Storage container. This should be an extension of the nifi-azure-nar bundle. 
> Currently, the Azure bundle's storage processors have support to list, 
> fetch, and put Azure Storage blobs.







[jira] [Commented] (NIFI-4819) Add support to delete blob from Azure Storage container

2018-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368225#comment-16368225
 ] 

ASF GitHub Bot commented on NIFI-4819:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2436
  
+1, thanks for the changes @zenfenan, merging to master.


> Add support to delete blob from Azure Storage container
> ---
>
> Key: NIFI-4819
> URL: https://issues.apache.org/jira/browse/NIFI-4819
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: zenfenan
>Priority: Major
>
> Implement a delete processor that handles deleting blobs from an Azure 
> Storage container. This should be an extension of the nifi-azure-nar bundle. 
> Currently, the Azure bundle's storage processors have support to list, 
> fetch, and put Azure Storage blobs.







[jira] [Updated] (NIFI-4888) JMSPublisherConsumerTest.validateConsumeWithCustomHeadersAndProperties leads to brittle builds

2018-02-17 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4888:
--
Summary: 
JMSPublisherConsumerTest.validateConsumeWithCustomHeadersAndProperties leads to 
brittle builds  (was: Tests in error: [INFO] skip non existing 
resourceDirectory 
/development/code/nifi.git/nifi-nar-bundles/nifi-beats-bundle/nifi-beats-nar/src/test/resources
  JMSPublisherConsumerTest.validateConsumeWithCustomHeadersAndProperties leads 
to brittle builds)

> JMSPublisherConsumerTest.validateConsumeWithCustomHeadersAndProperties leads 
> to brittle builds
> --
>
> Key: NIFI-4888
> URL: https://issues.apache.org/jira/browse/NIFI-4888
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
> Environment: Apache Maven 3.5.0 
> Maven home: /development/apache-maven-3.5.0
> Java version: 1.8.0_144, vendor: Oracle Corporation
> Java home: /usr/java/jdk1.8.0_144/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "4.14.13-300.fc27.x86_64", arch: "amd64", family: 
> "unix"
>Reporter: Joseph Witt
>Priority: Major
> Fix For: 1.6.0
>
>
> Tests in error: 
>  JMSPublisherConsumerTest.validateConsumeWithCustomHeadersAndProperties:140 » 
> NullPointer
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 0





[jira] [Created] (NIFI-4888) Tests in error: [INFO] skip non existing resourceDirectory /development/code/nifi.git/nifi-nar-bundles/nifi-beats-bundle/nifi-beats-nar/src/test/resources JMSPublisherCon

2018-02-17 Thread Joseph Witt (JIRA)
Joseph Witt created NIFI-4888:
-

 Summary: Tests in error: [INFO] skip non existing 
resourceDirectory 
/development/code/nifi.git/nifi-nar-bundles/nifi-beats-bundle/nifi-beats-nar/src/test/resources
  JMSPublisherConsumerTest.validateConsumeWithCustomHeadersAndProperties leads 
to brittle builds
 Key: NIFI-4888
 URL: https://issues.apache.org/jira/browse/NIFI-4888
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
 Environment: Apache Maven 3.5.0 
Maven home: /development/apache-maven-3.5.0
Java version: 1.8.0_144, vendor: Oracle Corporation
Java home: /usr/java/jdk1.8.0_144/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.14.13-300.fc27.x86_64", arch: "amd64", family: 
"unix"

Reporter: Joseph Witt


Tests in error: 

 JMSPublisherConsumerTest.validateConsumeWithCustomHeadersAndProperties:140 » 
NullPointer

Tests run: 14, Failures: 0, Errors: 1, Skipped: 0





[jira] [Updated] (NIFI-4888) Tests in error: [INFO] skip non existing resourceDirectory /development/code/nifi.git/nifi-nar-bundles/nifi-beats-bundle/nifi-beats-nar/src/test/resources JMSPublisherCon

2018-02-17 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4888:
--
Fix Version/s: 1.6.0

> Tests in error: [INFO] skip non existing resourceDirectory 
> /development/code/nifi.git/nifi-nar-bundles/nifi-beats-bundle/nifi-beats-nar/src/test/resources
>   JMSPublisherConsumerTest.validateConsumeWithCustomHeadersAndProperties 
> leads to brittle builds
> --
>
> Key: NIFI-4888
> URL: https://issues.apache.org/jira/browse/NIFI-4888
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
> Environment: Apache Maven 3.5.0 
> Maven home: /development/apache-maven-3.5.0
> Java version: 1.8.0_144, vendor: Oracle Corporation
> Java home: /usr/java/jdk1.8.0_144/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "4.14.13-300.fc27.x86_64", arch: "amd64", family: 
> "unix"
>Reporter: Joseph Witt
>Priority: Major
> Fix For: 1.6.0
>
>
> Tests in error: 
>  JMSPublisherConsumerTest.validateConsumeWithCustomHeadersAndProperties:140 » 
> NullPointer
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 0





[jira] [Commented] (NIFI-4887) EncryptedWriteAheadProvenanceRepositoryTest causes build stability issues

2018-02-17 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368223#comment-16368223
 ] 

Joseph Witt commented on NIFI-4887:
---

This was run with a command that caused very high CPU usage. This issue has 
never been run into before, and the tests are usually never run at such a 
high load level, so this might be a race condition/threading issue in the test.

> EncryptedWriteAheadProvenanceRepositoryTest causes build stability issues
> -
>
> Key: NIFI-4887
> URL: https://issues.apache.org/jira/browse/NIFI-4887
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: Apache Maven 3.5.0 
> Java version: 1.8.0_141, vendor: Oracle Corporation
> Java home: 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_141.jdk/Contents/Home/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "mac os x", version: "10.13.3", arch: "x86_64", family: "mac"
>Reporter: Joseph Witt
>Priority: Major
> Fix For: 1.6.0
>
>
> Results :
> Tests in error:
>  
> EncryptedWriteAheadProvenanceRepositoryTest.testShouldRegisterAndGetEvent:277 
> » FileNotFound
> Tests run: 118, Failures: 0, Errors: 1, Skipped: 10





[jira] [Created] (NIFI-4887) EncryptedWriteAheadProvenanceRepositoryTest causes build stability issues

2018-02-17 Thread Joseph Witt (JIRA)
Joseph Witt created NIFI-4887:
-

 Summary: EncryptedWriteAheadProvenanceRepositoryTest causes build 
stability issues
 Key: NIFI-4887
 URL: https://issues.apache.org/jira/browse/NIFI-4887
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.5.0
 Environment: Apache Maven 3.5.0 
Java version: 1.8.0_141, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_141.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.13.3", arch: "x86_64", family: "mac"
Reporter: Joseph Witt
 Fix For: 1.6.0


Results :

Tests in error:
 EncryptedWriteAheadProvenanceRepositoryTest.testShouldRegisterAndGetEvent:277 
» FileNotFound

Tests run: 118, Failures: 0, Errors: 1, Skipped: 10





[jira] [Updated] (NIFI-4886) Slack processor - allow expression language in Webhook Url property

2018-02-17 Thread Eugeny Kolpakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugeny Kolpakov updated NIFI-4886:
--
Priority: Minor  (was: Major)

> Slack processor - allow expression language in Webhook Url property
> ---
>
> Key: NIFI-4886
> URL: https://issues.apache.org/jira/browse/NIFI-4886
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Eugeny Kolpakov
>Priority: Minor
>
> Webhook URL in the PutSlack processor does not allow expression language.
> This makes it somewhat problematic to use, especially across multiple NiFi 
> environments (staging/production), and it is quite tedious to change.





[jira] [Created] (NIFI-4886) Slack processor - allow expression language in Webhook Url property

2018-02-17 Thread Eugeny (JIRA)
Eugeny created NIFI-4886:


 Summary: Slack processor - allow expression language in Webhook 
Url property
 Key: NIFI-4886
 URL: https://issues.apache.org/jira/browse/NIFI-4886
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Eugeny


Webhook URL in the PutSlack processor does not allow expression language.

This makes it somewhat problematic to use, especially across multiple NiFi 
environments (staging/production), and it is quite tedious to change.
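
To illustrate why expression language support would help here, the following toy sketch (not NiFi's Expression Language engine; resolve() and the slack.webhook variable name are made up for illustration) shows how a ${...} placeholder in the Webhook URL property could resolve to a different URL per environment instead of being hard-coded:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WebhookUrlSketch {

    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replaces ${name} placeholders with values from the given variables,
    // loosely mimicking what EL evaluation would do for the property value.
    static String resolve(String template, Map<String, String> variables) {
        Matcher m = VAR.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = variables.getOrDefault(m.group(1), "");
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // The same property value works in every environment once the
        // variable is defined differently per environment.
        Map<String, String> staging = new HashMap<>();
        staging.put("slack.webhook", "https://hooks.slack.com/services/STAGING/TOKEN");
        System.out.println(resolve("${slack.webhook}", staging));
    }
}
```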


