[jira] [Created] (NIFI-5262) ListFile should retrieve file attributes only once

2018-06-04 Thread Marco Gaido (JIRA)
Marco Gaido created NIFI-5262:
-

 Summary: ListFile should retrieve file attributes only once
 Key: NIFI-5262
 URL: https://issues.apache.org/jira/browse/NIFI-5262
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.7.0
Reporter: Marco Gaido
Assignee: Marco Gaido


The ListFile processor retrieves file information such as {{length}}, 
{{lastModifiedTime}}, and {{isDirectory}} multiple times. If the filesystem is 
remote, each of these method calls blocks and involves a round trip to the 
remote system.

We should retrieve this information only once in order to improve performance.
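
For illustration, the usual fix is to fetch all of the needed metadata in a 
single call instead of one call per attribute. A minimal standalone sketch 
using java.nio.file (the class and names here are illustrative, not the 
processor's actual code):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.BasicFileAttributes;

    public class AttributesOnce {
        public static void main(String[] args) throws IOException {
            Path file = Paths.get(args[0]);
            // One metadata lookup fetches everything, instead of separate
            // length()/lastModified()/isDirectory() calls that each hit the
            // (possibly remote) filesystem.
            BasicFileAttributes attrs = Files.readAttributes(file, BasicFileAttributes.class);
            System.out.println("size=" + attrs.size()
                    + " lastModified=" + attrs.lastModifiedTime()
                    + " directory=" + attrs.isDirectory());
        }
    }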



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2723: NIFI-5214 Added REST LookupService

2018-06-04 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2723
  
@ijokarumawak I'll take a stab at adding the template url option.


---


[jira] [Commented] (NIFI-5214) Add a REST lookup service

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500084#comment-16500084
 ] 

ASF GitHub Bot commented on NIFI-5214:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2723
  
@ijokarumawak I'll take a stab at adding the template url option.


> Add a REST lookup service
> -
>
> Key: NIFI-5214
> URL: https://issues.apache.org/jira/browse/NIFI-5214
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> * Should have reader API support
>  * Should be able to drill down through complex XML and JSON responses to a 
> nested record.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2737: NIFI-5231 Added RecordStats processor.

2018-06-04 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r192706576
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/RecordStats.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.RecordPathResult;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+@Tags({ "record", "stats", "metrics" })
+@CapabilityDescription("A processor that can count the number of items in 
a record set, as well as provide counts based on " +
+"user-defined criteria on subsets of the record set.")
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@WritesAttributes({
+@WritesAttribute(attribute = RecordStats.RECORD_COUNT_ATTR, 
description = "A count of the records in the record set in the flowfile.")
+})
+public class RecordStats extends AbstractProcessor {
+static final String RECORD_COUNT_ATTR = "record_count";
+
+static final PropertyDescriptor RECORD_READER = new 
PropertyDescriptor.Builder()
+.name("record-stats-reader")
+.displayName("Record Reader")
+.description("A record reader to use for reading the records.")
+.addValidator(Validator.VALID)
+.identifiesControllerService(RecordReaderFactory.class)
+.build();
+
+static final Relationship REL_SUCCESS = new Relationship.Builder()
+.name("success")
+.description("If a flowfile is successfully processed, it goes 
here.")
+.build();
+static final Relationship REL_FAILURE = new Relationship.Builder()
+.name("failure")
+.description("If a flowfile fails to be processed, it goes here.")
+.build();
+
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.displayName(propertyDescriptorName)
+.dynamic(true)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.build();
+}
+
+private RecordPathCache cache;
+
+@OnScheduled
+public void onEnabled(ProcessContext context) {
+cache = new RecordPathCache(25);
+}
+
+@Override
+public Set getRelationships() {
+return new HashSet() {{
+add(REL_SUCCESS);
+add(REL_FAILURE);

[GitHub] nifi pull request #2754: NIFI-5262: Retrieve file attributes only once in Li...

2018-06-04 Thread mgaido91
GitHub user mgaido91 opened a pull request:

https://github.com/apache/nifi/pull/2754

NIFI-5262: Retrieve file attributes only once in ListFile

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mgaido91/nifi NIFI-5262

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2754.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2754


commit 9ab780f7f8ba0dd0782796b09e734102ffb9d42f
Author: Marco Gaido 
Date:   2018-06-04T11:16:49Z

[NIFI-5262] Retrieve file attributes only once in ListFile




---


[jira] [Commented] (NIFI-5262) ListFile should retrieve file attributes only once

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500085#comment-16500085
 ] 

ASF GitHub Bot commented on NIFI-5262:
--

GitHub user mgaido91 opened a pull request:

https://github.com/apache/nifi/pull/2754

NIFI-5262: Retrieve file attributes only once in ListFile

    [pull request template, merge instructions, and commit details identical to the message above; elided]




> ListFile should retrieve file attributes only once
> --
>
> Key: NIFI-5262
> URL: https://issues.apache.org/jira/browse/NIFI-5262
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.0
>Reporter: Marco Gaido
>Assignee: Marco Gaido
>Priority: Major
>
> The ListFile processor retrieves file information such as {{length}}, 
> {{lastModifiedTime}}, and {{isDirectory}} multiple times. If the filesystem is 
> remote, each of these method calls blocks and involves a round trip to the 
> remote system.
> We should retrieve this information only once in order to improve performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5231) Record stats processor

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500087#comment-16500087
 ] 

ASF GitHub Bot commented on NIFI-5231:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r192706576
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/RecordStats.java
 ---
@@ -0,0 +1,176 @@
    [quoted RecordStats.java diff identical to the excerpt shown in full earlier in this thread; elided]

[GitHub] nifi pull request #2737: NIFI-5231 Added RecordStats processor.

2018-06-04 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r192706932
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/RecordStats.java
 ---
@@ -0,0 +1,176 @@
    [quoted RecordStats.java diff identical to the excerpt shown in full earlier in this thread; elided]

[GitHub] nifi pull request #2737: NIFI-5231 Added RecordStats processor.

2018-06-04 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r192706910
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/RecordStats.java
 ---
@@ -0,0 +1,176 @@
    [quoted diff identical to the RecordStats.java excerpt shown in full earlier, down to the commented line; elided]
+        .addValidator(Validator.VALID)
--- End diff --

Done.


---


[jira] [Commented] (NIFI-5231) Record stats processor

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500091#comment-16500091
 ] 

ASF GitHub Bot commented on NIFI-5231:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r192706932
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/RecordStats.java
 ---
@@ -0,0 +1,176 @@
    [quoted RecordStats.java diff identical to the excerpt shown in full earlier in this thread; elided]

[GitHub] nifi pull request #2737: NIFI-5231 Added RecordStats processor.

2018-06-04 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r192707116
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/RecordStats.java
 ---
@@ -0,0 +1,176 @@
    [quoted RecordStats.java diff identical to the excerpt shown in full earlier in this thread; elided]

[jira] [Commented] (NIFI-5231) Record stats processor

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500090#comment-16500090
 ] 

ASF GitHub Bot commented on NIFI-5231:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r192706910
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/RecordStats.java
 ---
@@ -0,0 +1,176 @@
    [quoted diff identical to the RecordStats.java excerpt shown in full earlier, down to the commented line; elided]
+        .addValidator(Validator.VALID)
--- End diff --

Done.


> Record stats processor
> --
>
> Key: NIFI-5231
> URL: https://issues.apache.org/jira/browse/NIFI-5231
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> Should do the following:
>  
>  # Take a record reader.
>  # Count the # of records and add a record_count attribute to the flowfile.
>  # Allow user-defined properties that do the following:
>  ## Map attribute name -> record path.
>  ## Provide aggregate value counts for each record path statement.
>  ## Provide total count for record path operation.
>  ## Put those values on the flowfile as attributes.
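
For illustration, a minimal sketch of the counting loop such a processor might 
run per flowfile, built on the record-path classes visible in the quoted diff 
(the helper method and its names are hypothetical, not the PR's actual code):

    // Hypothetical helper: total record count plus per-value counts for each
    // user-defined record path (attribute name -> RecordPath), as listed above.
    static Map<String, String> computeStats(final RecordReader reader,
            final Map<String, RecordPath> paths) throws Exception {
        final Map<String, Integer> counts = new HashMap<>();
        int recordCount = 0;
        Record record;
        while ((record = reader.nextRecord()) != null) {
            recordCount++;
            for (final Map.Entry<String, RecordPath> entry : paths.entrySet()) {
                final RecordPathResult result = entry.getValue().evaluate(record);
                final Optional<FieldValue> first = result.getSelectedFields().findFirst();
                if (first.isPresent() && first.get().getValue() != null) {
                    // e.g. attribute "sport" with value "soccer" -> key "sport.soccer"
                    counts.merge(entry.getKey() + "." + first.get().getValue(), 1, Integer::sum);
                }
            }
        }
        counts.put("record_count", recordCount);
        final Map<String, String> attributes = new HashMap<>();
        counts.forEach((k, v) -> attributes.put(k, String.valueOf(v)));
        return attributes;
    }

Each resulting entry would then be written to the flowfile as an attribute.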



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5231) Record stats processor

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500093#comment-16500093
 ] 

ASF GitHub Bot commented on NIFI-5231:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2737#discussion_r192707116
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/RecordStats.java
 ---
@@ -0,0 +1,176 @@
    [quoted RecordStats.java diff identical to the excerpt shown in full earlier in this thread; elided]

[GitHub] nifi pull request #2723: NIFI-5214 Added REST LookupService

2018-06-04 Thread ottobackwards
Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2723#discussion_r192722395
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-lookup-services-bundle/nifi-lookup-services/src/main/java/org/apache/nifi/lookup/RestLookupService.java
 ---
@@ -277,6 +280,20 @@ private void setProxy(OkHttpClient.Builder builder) {
 }
 }
 
--- End diff --

Should this handle AttributeExpressionLanguageParsingException?  Is there 
any validation that can be done?


---


[jira] [Commented] (NIFI-5214) Add a REST lookup service

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500147#comment-16500147
 ] 

ASF GitHub Bot commented on NIFI-5214:
--

Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2723#discussion_r192722395
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-lookup-services-bundle/nifi-lookup-services/src/main/java/org/apache/nifi/lookup/RestLookupService.java
 ---
@@ -277,6 +280,20 @@ private void setProxy(OkHttpClient.Builder builder) {
 }
 }
 
--- End diff --

Should this handle AttributeExpressionLanguageParsingException?  Is there 
any validation that can be done?


> Add a REST lookup service
> -
>
> Key: NIFI-5214
> URL: https://issues.apache.org/jira/browse/NIFI-5214
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> * Should have reader API support
>  * Should be able to drill down through complex XML and JSON responses to a 
> nested record.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2723: NIFI-5214 Added REST LookupService

2018-06-04 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2723#discussion_r192722942
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-lookup-services-bundle/nifi-lookup-services/src/main/java/org/apache/nifi/lookup/RestLookupService.java
 ---
@@ -277,6 +280,20 @@ private void setProxy(OkHttpClient.Builder builder) {
 }
 }
 
--- End diff --

I don't think so because the EL support is enabled on the fly by the user, 
and any user who is enterprising enough to do that should be responsible for 
validating it. There's no good fallback option short of having them specify yet 
another key, and IMO that's overkill.
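
(For context, the defensive option being discussed would look roughly like the 
sketch below; the variable names are hypothetical, and it assumes 
AttributeExpressionLanguageParsingException is what the EL engine throws for a 
malformed expression.)

    // Hypothetical guard around user-enabled EL evaluation; not the PR's code.
    try {
        final String url = urlTemplate.evaluateAttributeExpressions(flowFile).getValue();
        return doLookup(url); // doLookup: hypothetical helper
    } catch (final AttributeExpressionLanguageParsingException e) {
        getLogger().error("Lookup URL contains invalid Expression Language", e);
        return Optional.empty();
    }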


---


[jira] [Commented] (NIFI-5214) Add a REST lookup service

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500148#comment-16500148
 ] 

ASF GitHub Bot commented on NIFI-5214:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2723#discussion_r192722942
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-lookup-services-bundle/nifi-lookup-services/src/main/java/org/apache/nifi/lookup/RestLookupService.java
 ---
@@ -277,6 +280,20 @@ private void setProxy(OkHttpClient.Builder builder) {
 }
 }
 
--- End diff --

I don't think so because the EL support is enabled on the fly by the user, 
and any user who is enterprising enough to do that should be responsible for 
validating it. There's no good fallback option short of having them specify yet 
another key, and IMO that's overkill.


> Add a REST lookup service
> -
>
> Key: NIFI-5214
> URL: https://issues.apache.org/jira/browse/NIFI-5214
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> * Should have reader API support
>  * Should be able to drill down through complex XML and JSON responses to a 
> nested record.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-523) bootstrap.sh "continue" confirmation prompt does n

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500178#comment-16500178
 ] 

ASF GitHub Bot commented on MINIFICPP-523:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/351


> bootstrap.sh "continue" confirmation prompt does n
> --
>
> Key: MINIFICPP-523
> URL: https://issues.apache.org/jira/browse/MINIFICPP-523
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Minor
>
> In bootstrap.sh, after options are selected and the bootstrap is complete, 
> there is a confirmation prompt that displays the resulting cmake command and 
> asks Y/N whether to continue. Selecting N is ignored.
> It appears this is because the wrong variable is being checked in 
> [bootstrap.sh#L534|https://github.com/apache/nifi-minifi-cpp/blob/e69b20aff3abe44be214d5edaf00e22a48258421/bootstrap.sh#L534]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #351: MINIFICPP-523 Fixes bootstrap continue wi...

2018-06-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/351


---


[jira] [Updated] (MINIFICPP-523) bootstrap.sh "continue" confirmation prompt does n

2018-06-04 Thread marco polo (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo updated MINIFICPP-523:
-
   Resolution: Fixed
Fix Version/s: 0.6.0
   Status: Resolved  (was: Patch Available)

> bootstrap.sh "continue" confirmation prompt does n
> --
>
> Key: MINIFICPP-523
> URL: https://issues.apache.org/jira/browse/MINIFICPP-523
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Minor
> Fix For: 0.6.0
>
>
> In bootstrap.sh, after options are selected and the bootstrap is complete, 
> there is a confirmation prompt that displays the resulting cmake command and 
> asks Y/N whether to continue. Selecting N is ignored.
> It appears this is because the wrong variable is being checked in 
> [bootstrap.sh#L534|https://github.com/apache/nifi-minifi-cpp/blob/e69b20aff3abe44be214d5edaf00e22a48258421/bootstrap.sh#L534]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5243) Add ListKafkaTopics Processor

2018-06-04 Thread Bryan Bende (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500213#comment-16500213
 ] 

Bryan Bende commented on NIFI-5243:
---

I would also add that the current ConsumeKafka* processors create a consumer 
pool for the topic when they are started; because the pool is created at start 
time, they don't support incoming flow files.

> Add ListKafkaTopics Processor
> -
>
> Key: NIFI-5243
> URL: https://issues.apache.org/jira/browse/NIFI-5243
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joe Trite
>Priority: Major
>
> I need to get a list of available Kafka topics which I can filter and pass as 
> input to ConsumeKafka* processors.  This will provide me the ability to 
> ingest Kafka messages using the same List > Fetch pattern that I currently 
> use with files and tables.
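
For reference, enumerating topics is straightforward with the Kafka 
AdminClient, which is roughly what such a processor would wrap. A standalone 
sketch (not NiFi code; the broker address is illustrative):

    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class ListTopics {
        public static void main(String[] args) throws Exception {
            final Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
            try (AdminClient admin = AdminClient.create(props)) {
                // names() returns a KafkaFuture<Set<String>>; get() blocks for the broker reply
                final Set<String> topics = admin.listTopics().names().get();
                topics.forEach(System.out::println); // one topic per line, ready to filter
            }
        }
    }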



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3332) Bug in ListXXX causes matching timestamps to be ignored on later runs

2018-06-04 Thread Bryan Bende (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500216#comment-16500216
 ] 

Bryan Bende commented on NIFI-3332:
---

[~doaks80] what version are you seeing this with?

> Bug in ListXXX causes matching timestamps to be ignored on later runs
> -
>
> Key: NIFI-3332
> URL: https://issues.apache.org/jira/browse/NIFI-3332
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.7.1, 1.1.1
>Reporter: Joe Skora
>Assignee: Koji Kawamura
>Priority: Critical
> Fix For: 1.4.0
>
> Attachments: Test-showing-ListFile-timestamp-bug.log, 
> Test-showing-ListFile-timestamp-bug.patch, listfiles.png
>
>
> The new state implementation for the ListXXX processors based on 
> AbstractListProcessor creates a race condition when processor runs occur 
> while a batch of files is being written with the same timestamp.
> The changes to state management dropped tracking of the files processed for a 
> given timestamp.  Without the record of files processed, the remainder of the 
> batch is ignored on the next processor run since their timestamp is not 
> greater than the one timestamp stored in processor state.  With the file 
> tracking it was possible to process files that matched the timestamp exactly 
> and exclude the previously processed files.
> A basic timeline goes as follows.
>   T0 - system creates or receives batch of files with Tx timestamp where Tx 
> is more than the current timestamp in processor state.
>   T1 - system writes 1st half of Tx batch to the ListFile source directory.
>   T2 - ListFile runs picking up 1st half of Tx batch and stores Tx timestamp 
> in processor state.
>   T3 - system writes 2nd half of Tx batch to ListFile source directory.
>   T4 - ListFile runs ignoring any files with T <= Tx, eliminating 2nd half Tx 
> timestamp batch.
> I've attached a patch[1] for TestListFile.java that adds an instrumented unit 
> test demonstrating the problem, and a log[2] of the output from one such run.  
> The test writes 3 files each in two batches with processor runs after each 
> batch.  Batch 2 writes files with timestamps older than, equal to, and newer 
> than the timestamp stored when batch 1 was processed, but only the newer file 
> is picked up.  The older file is correctly ignored, but the file with the 
> matching timestamp should have been processed.
> [1] Test-showing-ListFile-timestamp-bug.patch
> [2] Test-showing-ListFile-timestamp-bug.log
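
For illustration, the bookkeeping the ticket describes amounts to remembering 
not just the newest timestamp but also which identifiers were already emitted 
at that timestamp, so files arriving later with an equal timestamp are still 
listed. A minimal sketch (names are illustrative; the real fix lives in 
AbstractListProcessor):

    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical listing filter: latest timestamp plus the identifiers
    // already processed at exactly that timestamp.
    class ListingState {
        private long latestTimestamp = -1L;
        private final Set<String> idsAtLatestTimestamp = new HashSet<>();

        boolean shouldList(final String id, final long timestamp) {
            if (timestamp > latestTimestamp) {          // strictly newer: always list
                latestTimestamp = timestamp;
                idsAtLatestTimestamp.clear();
                idsAtLatestTimestamp.add(id);
                return true;
            }
            // Equal timestamps are listed too, unless this exact file was seen
            // before; Set.add returns false for duplicates.
            return timestamp == latestTimestamp && idsAtLatestTimestamp.add(id);
        }
    }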



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2747: NIFI-5249 Dockerfile enhancements

2018-06-04 Thread zenfenan
Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2747#discussion_r192756076
  
--- Diff: nifi-docker/dockermaven/pom.xml ---
@@ -44,7 +44,6 @@
 
 apache/nifi
 ${project.version}
-latest
--- End diff --

Why was this removed? When we do a release, I believe we want to keep that 
release as the latest.


---


[jira] [Commented] (NIFI-5249) Dockerfile enhancements

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500276#comment-16500276
 ] 

ASF GitHub Bot commented on NIFI-5249:
--

Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2747#discussion_r192756076
  
--- Diff: nifi-docker/dockermaven/pom.xml ---
@@ -44,7 +44,6 @@
 
 apache/nifi
 ${project.version}
-latest
--- End diff --

Why was this removed? When we do a release, I believe we want to keep that 
release as the latest.


> Dockerfile enhancements
> ---
>
> Key: NIFI-5249
> URL: https://issues.apache.org/jira/browse/NIFI-5249
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Docker
>Reporter: Peter Wilcsinszky
>Priority: Minor
>
> * make environment variables more explicit
>  * create data and log directories
>  * add procps for process visibility inside the container



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2747: NIFI-5249 Dockerfile enhancements

2018-06-04 Thread jtstorck
Github user jtstorck commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2747#discussion_r192208160
  
--- Diff: nifi-docker/dockerhub/Dockerfile ---
@@ -25,28 +25,37 @@ ARG GID=1000
 ARG NIFI_VERSION=1.7.0
 ARG MIRROR=https://archive.apache.org/dist
 
-ENV NIFI_BASE_DIR /opt/nifi 
+ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME=${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} \
 NIFI_BINARY_URL=/nifi/${NIFI_VERSION}/nifi-${NIFI_VERSION}-bin.tar.gz
+ENV NIFI_PID_DIR=${NIFI_HOME}/run
+ENV NIFI_LOG_DIR=${NIFI_HOME}/logs
 
 ADD sh/ /opt/nifi/scripts/
 
-# Setup NiFi user
+# Setup NiFi user and create necessary directories
 RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut -d: -f1` \
 && useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi \
 && mkdir -p ${NIFI_HOME}/conf/templates \
+&& mkdir -p $NIFI_BASE_DIR/data \
+&& mkdir -p $NIFI_BASE_DIR/flowfile_repository \
+&& mkdir -p $NIFI_BASE_DIR/content_repository \
+&& mkdir -p $NIFI_BASE_DIR/provenance_repository \
+&& mkdir -p $NIFI_LOG_DIR \
 && chown -R nifi:nifi ${NIFI_BASE_DIR} \
 && apt-get update \
-&& apt-get install -y jq xmlstarlet
+&& apt-get install -y jq xmlstarlet procps
 
 USER nifi
 
 # Download, validate, and expand Apache NiFi binary.
 RUN curl -fSL ${MIRROR}/${NIFI_BINARY_URL} -o ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz \
-&& echo "$(curl https://archive.apache.org/dist/${NIFI_BINARY_URL}.sha256) *${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \
+&& echo "$(curl ${MIRROR}/${NIFI_BINARY_URL}.sha256) *${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \
--- End diff --

@pepov I would suggest removing the MIRROR build arg from this line and 
reverting back to the Apache archive, since from what @apiri has told me, only 
the Apache archive will host the SHA files needed to verify the download; a 
mirror will not contain those.

Also, there's a caveat with using a mirror: if the version you're building no 
longer exists on the mirror (mirrors typically host only the current and 
current-1 releases), the build will fail because that version has been 
removed/rolled off from the mirror.


---


[jira] [Commented] (NIFI-5249) Dockerfile enhancements

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500290#comment-16500290
 ] 

ASF GitHub Bot commented on NIFI-5249:
--

Github user jtstorck commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2747#discussion_r192208160
  
--- Diff: nifi-docker/dockerhub/Dockerfile ---
@@ -25,28 +25,37 @@ ARG GID=1000
 ARG NIFI_VERSION=1.7.0
 ARG MIRROR=https://archive.apache.org/dist
 
-ENV NIFI_BASE_DIR /opt/nifi 
+ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME=${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} \
 NIFI_BINARY_URL=/nifi/${NIFI_VERSION}/nifi-${NIFI_VERSION}-bin.tar.gz
+ENV NIFI_PID_DIR=${NIFI_HOME}/run
+ENV NIFI_LOG_DIR=${NIFI_HOME}/logs
 
 ADD sh/ /opt/nifi/scripts/
 
-# Setup NiFi user
+# Setup NiFi user and create necessary directories
 RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut 
-d: -f1` \
 && useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi \
 && mkdir -p ${NIFI_HOME}/conf/templates \
+&& mkdir -p $NIFI_BASE_DIR/data \
+&& mkdir -p $NIFI_BASE_DIR/flowfile_repository \
+&& mkdir -p $NIFI_BASE_DIR/content_repository \
+&& mkdir -p $NIFI_BASE_DIR/provenance_repository \
+&& mkdir -p $NIFI_LOG_DIR \
 && chown -R nifi:nifi ${NIFI_BASE_DIR} \
 && apt-get update \
-&& apt-get install -y jq xmlstarlet
+&& apt-get install -y jq xmlstarlet procps
 
 USER nifi
 
 # Download, validate, and expand Apache NiFi binary.
 RUN curl -fSL ${MIRROR}/${NIFI_BINARY_URL} -o 
${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz \
-&& echo "$(curl 
https://archive.apache.org/dist/${NIFI_BINARY_URL}.sha256) 
*${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \
+&& echo "$(curl ${MIRROR}/${NIFI_BINARY_URL}.sha256) 
*${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \
--- End diff --

@pepov I would suggest removing the MIRROR build arg from this line and 
reverting to the Apache archive since, from what @apiri has told me, only 
the Apache archive will host the SHA files to verify the archive.  A mirror 
will not contain those.

Also, there's a caveat with using a mirror: the build will fail if the 
version you're building has been removed/rolled off from the mirror (which 
should only host current and current-1).


> Dockerfile enhancements
> ---
>
> Key: NIFI-5249
> URL: https://issues.apache.org/jira/browse/NIFI-5249
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Docker
>Reporter: Peter Wilcsinszky
>Priority: Minor
>
> * make environment variables more explicit
>  * create data and log directories
>  * add procps for process visibility inside the container



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2755: NIFI-4963: Added Hive3 bundle

2018-06-04 Thread mattyb149
GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/2755

NIFI-4963: Added Hive3 bundle

You'll need to activate the include-hive3 profile when building the 
assembly; it is currently excluded by default due to its size (~200 MB).

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [x] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [x] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [x] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-4963

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2755.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2755


commit 417bc821d277a0556842f5aa734d854ca225147b
Author: Matthew Burgess 
Date:   2018-06-04T14:29:08Z

NIFI-4963: Added Hive3 bundle




---


[jira] [Commented] (NIFI-4963) Add support for Hive 3.0 processors

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500295#comment-16500295
 ] 

ASF GitHub Bot commented on NIFI-4963:
--

GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/2755

NIFI-4963: Added Hive3 bundle

You'll need to activate the include-hive3 profile when building the 
assembly; it is currently excluded by default due to its size (~200 MB).

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [x] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [x] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [x] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-4963

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2755.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2755


commit 417bc821d277a0556842f5aa734d854ca225147b
Author: Matthew Burgess 
Date:   2018-06-04T14:29:08Z

NIFI-4963: Added Hive3 bundle




> Add support for Hive 3.0 processors
> ---
>
> Key: NIFI-4963
> URL: https://issues.apache.org/jira/browse/NIFI-4963
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Apache Hive is working on Hive 3.0, this Jira is to add a bundle of 
> components (much like the current Hive bundle) that supports Hive 3.0 (and 
> Apache ORC if necessary).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2755: NIFI-4963: Added Hive3 bundle

2018-06-04 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2755#discussion_r192763045
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive3-processors/src/main/java/org/apache/hive/streaming/HiveRecordWriter.java
 ---
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hive.streaming;
+
+import com.google.common.base.Joiner;
+import org.apache.hadoop.hive.serde.serdeConstants;
+import org.apache.hadoop.hive.serde2.AbstractSerDe;
+import org.apache.hadoop.hive.serde2.SerDeException;
+import org.apache.hadoop.hive.serde2.SerDeUtils;
+import org.apache.hadoop.io.ObjectWritable;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.record.Record;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.List;
+import java.util.Properties;
+
+public class HiveRecordWriter extends AbstractRecordWriter {
--- End diff --

@prasanthj Do you mind taking a look at HiveRecordWriter and 
NiFiRecordSerDe (and PutHive3Streaming which uses them when creating the 
connection and passing in options)? Those are the custom impls for the new Hive 
Streaming API classes, hoping for suggestions on improving performance, etc. 
Thanks in advance!
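
For reviewers less familiar with the new API, here is a minimal sketch of how 
the Hive 3 streaming classes are typically driven with a custom writer, 
assuming the hive-streaming 3.x HiveStreamingConnection builder; the 
database/table names and the writer instance are illustrative placeholders, 
not the bundle's actual wiring.

```
// Minimal sketch, not the PR's code: drive the Hive 3 streaming API with a
// custom RecordWriter. Assumes hive-streaming 3.x on the classpath.
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hive.streaming.HiveStreamingConnection;
import org.apache.hive.streaming.RecordWriter;
import org.apache.hive.streaming.StreamingConnection;
import org.apache.hive.streaming.StreamingException;

public class StreamingSketch {
    public static void writeBatch(final HiveConf conf, final RecordWriter writer,
                                  final byte[][] rows) throws StreamingException {
        final StreamingConnection connection = HiveStreamingConnection.newBuilder()
                .withDatabase("default")        // hypothetical target database
                .withTable("my_table")          // hypothetical target table
                .withAgentInfo("sketch-agent")  // recorded in Hive's transaction metadata
                .withRecordWriter(writer)       // e.g. a custom AbstractRecordWriter impl
                .withHiveConf(conf)
                .connect();
        try {
            connection.beginTransaction();
            for (final byte[] row : rows) {
                connection.write(row);          // the writer encodes each row for the table
            }
            connection.commitTransaction();
        } finally {
            connection.close();
        }
    }
}
```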


---


[jira] [Commented] (NIFI-4963) Add support for Hive 3.0 processors

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500306#comment-16500306
 ] 

ASF GitHub Bot commented on NIFI-4963:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2755#discussion_r192763045
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive3-processors/src/main/java/org/apache/hive/streaming/HiveRecordWriter.java
 ---
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hive.streaming;
+
+import com.google.common.base.Joiner;
+import org.apache.hadoop.hive.serde.serdeConstants;
+import org.apache.hadoop.hive.serde2.AbstractSerDe;
+import org.apache.hadoop.hive.serde2.SerDeException;
+import org.apache.hadoop.hive.serde2.SerDeUtils;
+import org.apache.hadoop.io.ObjectWritable;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.record.Record;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.List;
+import java.util.Properties;
+
+public class HiveRecordWriter extends AbstractRecordWriter {
--- End diff --

@prasanthj Do you mind taking a look at HiveRecordWriter and 
NiFiRecordSerDe (and PutHive3Streaming which uses them when creating the 
connection and passing in options)? Those are the custom impls for the new Hive 
Streaming API classes, hoping for suggestions on improving performance, etc. 
Thanks in advance!


> Add support for Hive 3.0 processors
> ---
>
> Key: NIFI-4963
> URL: https://issues.apache.org/jira/browse/NIFI-4963
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Apache Hive is working on Hive 3.0, this Jira is to add a bundle of 
> components (much like the current Hive bundle) that supports Hive 3.0 (and 
> Apache ORC if necessary).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500336#comment-16500336
 ] 

ASF GitHub Bot commented on NIFI-5059:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2619
  
Reviewing...


> MongoDBLookupService should be able to determine a schema or have one provided
> --
>
> Key: NIFI-5059
> URL: https://issues.apache.org/jira/browse/NIFI-5059
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> MongoDBLookupService should have two schema handling modes:
>  # Where a schema is provided as a configuration parameter to be applied to 
> the Record object generated from the result document.
>  # A schema will be generated by examining the result object and building one 
> that roughly translates from BSON into the Record API.
> In both cases, the schema will be applied to the Mongo result Document object 
> that is returned if one comes back.
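
As a rough illustration of the second mode, the sketch below walks a Mongo 
Document and maps common BSON value types onto NiFi record types, assuming the 
NiFi record API and the Mongo Java driver; the exact type mapping in the 
merged service may differ.

```
// Illustrative sketch only: infer a RecordSchema from a Mongo result Document.
import java.util.ArrayList;
import java.util.List;
import org.apache.nifi.serialization.SimpleRecordSchema;
import org.apache.nifi.serialization.record.RecordField;
import org.apache.nifi.serialization.record.RecordFieldType;
import org.apache.nifi.serialization.record.RecordSchema;
import org.bson.Document;

public class BsonSchemaSketch {
    public static RecordSchema convertSchema(final Document result) {
        final List<RecordField> fields = new ArrayList<>();
        for (final String key : result.keySet()) {
            if ("_id".equals(key)) {
                continue; // the _id field is excluded from the record
            }
            final Object value = result.get(key);
            final RecordFieldType type;
            if (value instanceof Integer) {
                type = RecordFieldType.INT;
            } else if (value instanceof Long) {
                type = RecordFieldType.LONG;
            } else if (value instanceof Double) {
                type = RecordFieldType.DOUBLE;
            } else if (value instanceof Boolean) {
                type = RecordFieldType.BOOLEAN;
            } else {
                type = RecordFieldType.STRING; // fallback for everything else
            }
            fields.add(new RecordField(key, type.getDataType()));
        }
        return new SimpleRecordSchema(fields);
    }
}
```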



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2619: NIFI-5059 Updated MongoDBLookupService to be able to detec...

2018-06-04 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2619
  
Reviewing...


---


[GitHub] nifi pull request #2747: NIFI-5249 Dockerfile enhancements

2018-06-04 Thread pepov
Github user pepov commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2747#discussion_r192781212
  
--- Diff: nifi-docker/dockerhub/Dockerfile ---
@@ -25,28 +25,37 @@ ARG GID=1000
 ARG NIFI_VERSION=1.7.0
 ARG MIRROR=https://archive.apache.org/dist
 
-ENV NIFI_BASE_DIR /opt/nifi 
+ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME=${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} \
 NIFI_BINARY_URL=/nifi/${NIFI_VERSION}/nifi-${NIFI_VERSION}-bin.tar.gz
+ENV NIFI_PID_DIR=${NIFI_HOME}/run
+ENV NIFI_LOG_DIR=${NIFI_HOME}/logs
 
 ADD sh/ /opt/nifi/scripts/
 
-# Setup NiFi user
+# Setup NiFi user and create necessary directories
 RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut 
-d: -f1` \
 && useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi \
 && mkdir -p ${NIFI_HOME}/conf/templates \
+&& mkdir -p $NIFI_BASE_DIR/data \
+&& mkdir -p $NIFI_BASE_DIR/flowfile_repository \
+&& mkdir -p $NIFI_BASE_DIR/content_repository \
+&& mkdir -p $NIFI_BASE_DIR/provenance_repository \
+&& mkdir -p $NIFI_LOG_DIR \
 && chown -R nifi:nifi ${NIFI_BASE_DIR} \
 && apt-get update \
-&& apt-get install -y jq xmlstarlet
+&& apt-get install -y jq xmlstarlet procps
 
 USER nifi
 
 # Download, validate, and expand Apache NiFi binary.
 RUN curl -fSL ${MIRROR}/${NIFI_BINARY_URL} -o 
${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz \
-&& echo "$(curl 
https://archive.apache.org/dist/${NIFI_BINARY_URL}.sha256) 
*${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \
+&& echo "$(curl ${MIRROR}/${NIFI_BINARY_URL}.sha256) 
*${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \
--- End diff --

Ok, I just borrowed it from @markap14 and it made sense since the value is 
the same. Where can we see the build config of the public docker image?


---


[jira] [Commented] (NIFI-5249) Dockerfile enhancements

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500369#comment-16500369
 ] 

ASF GitHub Bot commented on NIFI-5249:
--

Github user pepov commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2747#discussion_r192781212
  
--- Diff: nifi-docker/dockerhub/Dockerfile ---
@@ -25,28 +25,37 @@ ARG GID=1000
 ARG NIFI_VERSION=1.7.0
 ARG MIRROR=https://archive.apache.org/dist
 
-ENV NIFI_BASE_DIR /opt/nifi 
+ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME=${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} \
 NIFI_BINARY_URL=/nifi/${NIFI_VERSION}/nifi-${NIFI_VERSION}-bin.tar.gz
+ENV NIFI_PID_DIR=${NIFI_HOME}/run
+ENV NIFI_LOG_DIR=${NIFI_HOME}/logs
 
 ADD sh/ /opt/nifi/scripts/
 
-# Setup NiFi user
+# Setup NiFi user and create necessary directories
 RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut 
-d: -f1` \
 && useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi \
 && mkdir -p ${NIFI_HOME}/conf/templates \
+&& mkdir -p $NIFI_BASE_DIR/data \
+&& mkdir -p $NIFI_BASE_DIR/flowfile_repository \
+&& mkdir -p $NIFI_BASE_DIR/content_repository \
+&& mkdir -p $NIFI_BASE_DIR/provenance_repository \
+&& mkdir -p $NIFI_LOG_DIR \
 && chown -R nifi:nifi ${NIFI_BASE_DIR} \
 && apt-get update \
-&& apt-get install -y jq xmlstarlet
+&& apt-get install -y jq xmlstarlet procps
 
 USER nifi
 
 # Download, validate, and expand Apache NiFi binary.
 RUN curl -fSL ${MIRROR}/${NIFI_BINARY_URL} -o 
${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz \
-&& echo "$(curl 
https://archive.apache.org/dist/${NIFI_BINARY_URL}.sha256) 
*${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \
+&& echo "$(curl ${MIRROR}/${NIFI_BINARY_URL}.sha256) 
*${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \
--- End diff --

Ok, I just borrowed it from @markap14 and it made sense since the value is 
the same. Where can we see the build config of the public docker image?


> Dockerfile enhancements
> ---
>
> Key: NIFI-5249
> URL: https://issues.apache.org/jira/browse/NIFI-5249
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Docker
>Reporter: Peter Wilcsinszky
>Priority: Minor
>
> * make environment variables more explicit
>  * create data and log directories
>  * add procps for process visibility inside the container



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #121: NIFIREG-173 Refactor metadata DB to be inde...

2018-06-04 Thread kevdoran
Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/121#discussion_r192781546
  
--- Diff: 
nifi-registry-framework/src/main/java/org/apache/nifi/registry/db/CustomFlywayMigrationStrategy.java
 ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry.db;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.registry.db.migration.BucketEntityV1;
+import org.apache.nifi.registry.db.migration.FlowEntityV1;
+import org.apache.nifi.registry.db.migration.FlowSnapshotEntityV1;
+import org.apache.nifi.registry.db.migration.LegacyDataSourceFactory;
+import org.apache.nifi.registry.db.migration.LegacyDatabaseService;
+import org.apache.nifi.registry.db.migration.LegacyEntityMapper;
+import org.apache.nifi.registry.properties.NiFiRegistryProperties;
+import org.apache.nifi.registry.service.MetadataService;
+import org.flywaydb.core.Flyway;
+import org.flywaydb.core.api.FlywayException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.beans.factory.annotation.Autowired;
+import 
org.springframework.boot.autoconfigure.flyway.FlywayMigrationStrategy;
+import org.springframework.jdbc.core.JdbcTemplate;
+import org.springframework.stereotype.Component;
+
+import javax.sql.DataSource;
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.List;
+
+/**
+ * Custom Flyway migration strategy that lets us perform data migration 
from the original database used in the
+ * 0.1.0 release, to the new database. The data migration will be 
triggered when it is determined that the new database
+ * is brand new AND the legacy DB properties are specified. If the primary 
database already contains the 'BUCKET' table,
+ * or if the legacy database properties are not specified, then no data 
migration is performed.
+ */
+@Component
+public class CustomFlywayMigrationStrategy implements 
FlywayMigrationStrategy {
--- End diff --

This is a nice solution to determining when to migrate databases. Nice work!
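
For context, the pattern amounts to roughly the following sketch, assuming 
Flyway and plain JDBC metadata; the BUCKET table name comes from the class 
comment above, and everything else here is illustrative.

```
// Illustrative sketch of the check-then-migrate pattern, not the PR's code.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.flywaydb.core.Flyway;

public class CheckThenMigrateSketch {
    public static void migrate(final Flyway flyway, final DataSource dataSource,
                               final boolean legacyDbConfigured) throws SQLException {
        final boolean brandNewDatabase;
        try (final Connection conn = dataSource.getConnection();
             final ResultSet tables = conn.getMetaData()
                     .getTables(null, null, "BUCKET", null)) {
            brandNewDatabase = !tables.next(); // no BUCKET table yet => fresh database
        }
        flyway.migrate(); // always apply the schema migrations
        if (brandNewDatabase && legacyDbConfigured) {
            // transfer buckets, flows, and snapshots from the legacy H2 database here
        }
    }
}
```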


---


[jira] [Commented] (NIFIREG-173) Allow metadata DB to use other DBs besides H2

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500370#comment-16500370
 ] 

ASF GitHub Bot commented on NIFIREG-173:


Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/121#discussion_r192781546
  
--- Diff: 
nifi-registry-framework/src/main/java/org/apache/nifi/registry/db/CustomFlywayMigrationStrategy.java
 ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry.db;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.registry.db.migration.BucketEntityV1;
+import org.apache.nifi.registry.db.migration.FlowEntityV1;
+import org.apache.nifi.registry.db.migration.FlowSnapshotEntityV1;
+import org.apache.nifi.registry.db.migration.LegacyDataSourceFactory;
+import org.apache.nifi.registry.db.migration.LegacyDatabaseService;
+import org.apache.nifi.registry.db.migration.LegacyEntityMapper;
+import org.apache.nifi.registry.properties.NiFiRegistryProperties;
+import org.apache.nifi.registry.service.MetadataService;
+import org.flywaydb.core.Flyway;
+import org.flywaydb.core.api.FlywayException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.beans.factory.annotation.Autowired;
+import 
org.springframework.boot.autoconfigure.flyway.FlywayMigrationStrategy;
+import org.springframework.jdbc.core.JdbcTemplate;
+import org.springframework.stereotype.Component;
+
+import javax.sql.DataSource;
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.List;
+
+/**
+ * Custom Flyway migration strategy that lets us perform data migration 
from the original database used in the
+ * 0.1.0 release, to the new database. The data migration will be 
triggered when it is determined that the new database
+ * is brand new AND the legacy DB properties are specified. If the primary 
database already contains the 'BUCKET' table,
+ * or if the legacy database properties are not specified, then no data 
migration is performed.
+ */
+@Component
+public class CustomFlywayMigrationStrategy implements 
FlywayMigrationStrategy {
--- End diff --

This is a nice solution to determining when to migrate databases. Nice work!


> Allow metadata DB to use other DBs besides H2
> -
>
> Key: NIFIREG-173
> URL: https://issues.apache.org/jira/browse/NIFIREG-173
> Project: NiFi Registry
>  Issue Type: Improvement
>Affects Versions: 0.1.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>
> Now that we have the Git provider for flow storage which can be used to push 
> flows to a remote location, it would be nice to be able to leverage an 
> external DB for the metadata database.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2619: NIFI-5059 Updated MongoDBLookupService to be able t...

2018-06-04 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r192783725
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/MongoDBLookupService.java
 ---
@@ -52,68 +54,125 @@
 "The query is limited to the first result (findOne in the Mongo 
documentation). If no \"Lookup Value Field\" is specified " +
 "then the entire MongoDB result document minus the _id field will be 
returned as a record."
 )
-public class MongoDBLookupService extends MongoDBControllerService 
implements LookupService<Object> {
+public class MongoDBLookupService extends SchemaRegistryService implements 
LookupService<Object> {
+public static final PropertyDescriptor CONTROLLER_SERVICE = new 
PropertyDescriptor.Builder()
+.name("mongo-lookup-client-service")
+.displayName("Client Service")
+.description("A MongoDB controller service to use with this lookup 
service.")
+.required(true)
+.identifiesControllerService(MongoDBControllerService.class)
--- End diff --

I believe this is supposed to be an interface not the impl class (see my 
other comment below), so I think you want `MongoDBClientService` here.
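
For illustration, the suggested descriptor would look roughly like this; a 
sketch rather than the final patch, assuming the MongoDBClientService 
interface named above.

```
// Sketch only: reference the service API interface rather than the impl class,
// so any implementation of the client service can be selected in the UI.
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.mongodb.MongoDBClientService;

// ...inside MongoDBLookupService...
public static final PropertyDescriptor CONTROLLER_SERVICE = new PropertyDescriptor.Builder()
        .name("mongo-lookup-client-service")
        .displayName("Client Service")
        .description("A MongoDB controller service to use with this lookup service.")
        .required(true)
        .identifiesControllerService(MongoDBClientService.class) // interface, not impl
        .build();
```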


---


[GitHub] nifi pull request #2619: NIFI-5059 Updated MongoDBLookupService to be able t...

2018-06-04 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r192783531
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/MongoDBLookupService.java
 ---
@@ -52,68 +54,125 @@
 "The query is limited to the first result (findOne in the Mongo 
documentation). If no \"Lookup Value Field\" is specified " +
 "then the entire MongoDB result document minus the _id field will be 
returned as a record."
 )
-public class MongoDBLookupService extends MongoDBControllerService 
implements LookupService<Object> {
+public class MongoDBLookupService extends SchemaRegistryService implements 
LookupService<Object> {
+public static final PropertyDescriptor CONTROLLER_SERVICE = new 
PropertyDescriptor.Builder()
--- End diff --

AFAICT this property is never added to the list of supported property 
descriptors, so I couldn't set it in the UI, which causes an NPE when lookup() 
is called. It seems odd that setting a required property that is not supported 
(in tests) would not cause a complaint. I haven't run the integration tests 
yet, just put the NARs into a live NiFi to try it out. 
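
To make the failure mode concrete, here is a hedged sketch of the missing 
wiring; NiFi only renders and validates the properties returned from 
getSupportedPropertyDescriptors(), and a real fix would also merge in the 
schema-access descriptors inherited from SchemaRegistryService.

```
// Sketch only: register the descriptors so the UI can offer them and the
// framework can validate them before lookup() is ever called.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.nifi.components.PropertyDescriptor;

// ...inside MongoDBLookupService...
private static final List<PropertyDescriptor> DESCRIPTORS;
static {
    final List<PropertyDescriptor> descriptors = new ArrayList<>();
    descriptors.add(CONTROLLER_SERVICE); // without this, the property never appears
    descriptors.add(LOOKUP_VALUE_FIELD);
    descriptors.add(PROJECTION);
    DESCRIPTORS = Collections.unmodifiableList(descriptors);
}

@Override
protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
    return DESCRIPTORS; // a real fix would also include the superclass descriptors
}
```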


---


[GitHub] nifi pull request #2619: NIFI-5059 Updated MongoDBLookupService to be able t...

2018-06-04 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r192784028
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/MongoDBLookupService.java
 ---
@@ -52,68 +54,125 @@
 "The query is limited to the first result (findOne in the Mongo 
documentation). If no \"Lookup Value Field\" is specified " +
 "then the entire MongoDB result document minus the _id field will be 
returned as a record."
 )
-public class MongoDBLookupService extends MongoDBControllerService 
implements LookupService<Object> {
+public class MongoDBLookupService extends SchemaRegistryService implements 
LookupService<Object> {
+public static final PropertyDescriptor CONTROLLER_SERVICE = new 
PropertyDescriptor.Builder()
+.name("mongo-lookup-client-service")
+.displayName("Client Service")
+.description("A MongoDB controller service to use with this lookup 
service.")
+.required(true)
+.identifiesControllerService(MongoDBControllerService.class)
+.build();
 
 public static final PropertyDescriptor LOOKUP_VALUE_FIELD = new 
PropertyDescriptor.Builder()
-.name("mongo-lookup-value-field")
-.displayName("Lookup Value Field")
-.description("The field whose value will be returned when the 
lookup key(s) match a record. If not specified then the entire " +
-"MongoDB result document minus the _id field will be 
returned as a record.")
-.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
-.required(false)
-.build();
+.name("mongo-lookup-value-field")
+.displayName("Lookup Value Field")
+.description("The field whose value will be returned when the 
lookup key(s) match a record. If not specified then the entire " +
+"MongoDB result document minus the _id field will be 
returned as a record.")
+.addValidator(Validator.VALID)
+.required(false)
+.build();
+public static final PropertyDescriptor PROJECTION = new 
PropertyDescriptor.Builder()
+.name("mongo-lookup-projection")
+.displayName("Projection")
+.description("Specifies a projection for limiting which fields 
will be returned.")
+.required(false)
+.build();
 
 private String lookupValueField;
 
-private static final List<PropertyDescriptor> lookupDescriptors;
-
-static {
-lookupDescriptors = new ArrayList<>();
-lookupDescriptors.addAll(descriptors);
-lookupDescriptors.add(LOOKUP_VALUE_FIELD);
-}
-
 @Override
 public Optional<Object> lookup(Map<String, Object> coordinates) throws 
LookupFailureException {
-Map<String, Object> clean = new HashMap<>();
-clean.putAll(coordinates);
+Map<String, Object> clean = coordinates.entrySet().stream()
+.filter(e -> !schemaNameProperty.equals(String.format("${%s}", 
e.getKey(
+.collect(Collectors.toMap(
+e -> e.getKey(),
+e -> e.getValue()
+));
 Document query = new Document(clean);
 
 if (coordinates.size() == 0) {
 throw new LookupFailureException("No keys were configured. 
Mongo query would return random documents.");
 }
 
 try {
-Document result = this.findOne(query);
+Document result = projection != null ? 
controllerService.findOne(query, projection) : controllerService.findOne(query);
 
 if(result == null) {
 return Optional.empty();
 } else if (!StringUtils.isEmpty(lookupValueField)) {
 return Optional.ofNullable(result.get(lookupValueField));
 } else {
-final List<RecordField> fields = new ArrayList<>();
+RecordSchema schema = loadSchema(coordinates);
 
-for (String key : result.keySet()) {
-if (key.equals("_id")) {
-continue;
-}
-fields.add(new RecordField(key, 
RecordFieldType.STRING.getDataType()));
-}
-
-final RecordSchema schema = new SimpleRecordSchema(fields);
-return Optional.ofNullable(new MapRecord(schema, result));
+RecordSchema toUse = schema != null ? schema : 
convertSchema(result);
+return Optional.ofNullable(new MapRecord(toUse, result));
 }
 } catch (Exception ex) {
 getLogger().error("Error during lookup {}", new Object[]{ 
query.toJson() }, ex);
 throw new LookupFailureException

[jira] [Commented] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500381#comment-16500381
 ] 

ASF GitHub Bot commented on NIFI-5059:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r192783725
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/MongoDBLookupService.java
 ---
@@ -52,68 +54,125 @@
 "The query is limited to the first result (findOne in the Mongo 
documentation). If no \"Lookup Value Field\" is specified " +
 "then the entire MongoDB result document minus the _id field will be 
returned as a record."
 )
-public class MongoDBLookupService extends MongoDBControllerService 
implements LookupService<Object> {
+public class MongoDBLookupService extends SchemaRegistryService implements 
LookupService<Object> {
+public static final PropertyDescriptor CONTROLLER_SERVICE = new 
PropertyDescriptor.Builder()
+.name("mongo-lookup-client-service")
+.displayName("Client Service")
+.description("A MongoDB controller service to use with this lookup 
service.")
+.required(true)
+.identifiesControllerService(MongoDBControllerService.class)
--- End diff --

I believe this is supposed to be an interface not the impl class (see my 
other comment below), so I think you want `MongoDBClientService` here.


> MongoDBLookupService should be able to determine a schema or have one provided
> --
>
> Key: NIFI-5059
> URL: https://issues.apache.org/jira/browse/NIFI-5059
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> MongoDBLookupService should have two schema handling modes:
>  # Where a schema is provided as a configuration parameter to be applied to 
> the Record object generated from the result document.
>  # A schema will be generated by examining the result object and building one 
> that roughly translates from BSON into the Record API.
> In both cases, the schema will be applied to the Mongo result Document object 
> that is returned if one comes back.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500382#comment-16500382
 ] 

ASF GitHub Bot commented on NIFI-5059:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r192784028
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/MongoDBLookupService.java
 ---
@@ -52,68 +54,125 @@
 "The query is limited to the first result (findOne in the Mongo 
documentation). If no \"Lookup Value Field\" is specified " +
 "then the entire MongoDB result document minus the _id field will be 
returned as a record."
 )
-public class MongoDBLookupService extends MongoDBControllerService 
implements LookupService<Object> {
+public class MongoDBLookupService extends SchemaRegistryService implements 
LookupService<Object> {
+public static final PropertyDescriptor CONTROLLER_SERVICE = new 
PropertyDescriptor.Builder()
+.name("mongo-lookup-client-service")
+.displayName("Client Service")
+.description("A MongoDB controller service to use with this lookup 
service.")
+.required(true)
+.identifiesControllerService(MongoDBControllerService.class)
+.build();
 
 public static final PropertyDescriptor LOOKUP_VALUE_FIELD = new 
PropertyDescriptor.Builder()
-.name("mongo-lookup-value-field")
-.displayName("Lookup Value Field")
-.description("The field whose value will be returned when the 
lookup key(s) match a record. If not specified then the entire " +
-"MongoDB result document minus the _id field will be 
returned as a record.")
-.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
-.required(false)
-.build();
+.name("mongo-lookup-value-field")
+.displayName("Lookup Value Field")
+.description("The field whose value will be returned when the 
lookup key(s) match a record. If not specified then the entire " +
+"MongoDB result document minus the _id field will be 
returned as a record.")
+.addValidator(Validator.VALID)
+.required(false)
+.build();
+public static final PropertyDescriptor PROJECTION = new 
PropertyDescriptor.Builder()
+.name("mongo-lookup-projection")
+.displayName("Projection")
+.description("Specifies a projection for limiting which fields 
will be returned.")
+.required(false)
+.build();
 
 private String lookupValueField;
 
-private static final List<PropertyDescriptor> lookupDescriptors;
-
-static {
-lookupDescriptors = new ArrayList<>();
-lookupDescriptors.addAll(descriptors);
-lookupDescriptors.add(LOOKUP_VALUE_FIELD);
-}
-
 @Override
 public Optional<Object> lookup(Map<String, Object> coordinates) throws 
LookupFailureException {
-Map<String, Object> clean = new HashMap<>();
-clean.putAll(coordinates);
+Map<String, Object> clean = coordinates.entrySet().stream()
+.filter(e -> !schemaNameProperty.equals(String.format("${%s}", 
e.getKey(
+.collect(Collectors.toMap(
+e -> e.getKey(),
+e -> e.getValue()
+));
 Document query = new Document(clean);
 
 if (coordinates.size() == 0) {
 throw new LookupFailureException("No keys were configured. 
Mongo query would return random documents.");
 }
 
 try {
-Document result = this.findOne(query);
+Document result = projection != null ? 
controllerService.findOne(query, projection) : controllerService.findOne(query);
 
 if(result == null) {
 return Optional.empty();
 } else if (!StringUtils.isEmpty(lookupValueField)) {
 return Optional.ofNullable(result.get(lookupValueField));
 } else {
-final List<RecordField> fields = new ArrayList<>();
+RecordSchema schema = loadSchema(coordinates);
 
-for (String key : result.keySet()) {
-if (key.equals("_id")) {
-continue;
-}
-fields.add(new RecordField(key, 
RecordFieldType.STRING.getDataType()));
-}
-
-final RecordSchema schema = new SimpleRecordSchema(fields);
-return Optional.ofNullable(new MapRecord(schema, result));
+RecordSchema toUse = schema != null ? schema : 
convertSchema(result);
+return Op

[jira] [Commented] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500380#comment-16500380
 ] 

ASF GitHub Bot commented on NIFI-5059:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r192783531
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/MongoDBLookupService.java
 ---
@@ -52,68 +54,125 @@
 "The query is limited to the first result (findOne in the Mongo 
documentation). If no \"Lookup Value Field\" is specified " +
 "then the entire MongoDB result document minus the _id field will be 
returned as a record."
 )
-public class MongoDBLookupService extends MongoDBControllerService 
implements LookupService<Object> {
+public class MongoDBLookupService extends SchemaRegistryService implements 
LookupService<Object> {
+public static final PropertyDescriptor CONTROLLER_SERVICE = new 
PropertyDescriptor.Builder()
--- End diff --

AFAICT this property is never added to the list of supported property 
descriptors, so I couldn't set it in the UI, which causes an NPE when lookup() 
is called. It seems odd that setting a required property that is not supported 
(in tests) would not cause a complaint. I haven't run the integration tests 
yet, just put the NARs into a live NiFi to try it out. 


> MongoDBLookupService should be able to determine a schema or have one provided
> --
>
> Key: NIFI-5059
> URL: https://issues.apache.org/jira/browse/NIFI-5059
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> MongoDBLookupService should have two schema handling modes:
>  # Where a schema is provided as a configuration parameter to be applied to 
> the Record object generated from the result document.
>  # A schema will be generated by examining the result object and building one 
> that roughly translates from BSON into the Record API.
> In both cases, the schema will be applied to the Mongo result Document object 
> that is returned if one comes back.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2747: NIFI-5249 Dockerfile enhancements

2018-06-04 Thread pepov
Github user pepov commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2747#discussion_r192787369
  
--- Diff: nifi-docker/dockermaven/pom.xml ---
@@ -44,7 +44,6 @@
 
 apache/nifi
 ${project.version}
-latest
--- End diff --

I removed it because it did not build both tags, just the last one, so I 
had to choose one. Also, this is only a dev/test config and does not affect the 
public image.


---


[jira] [Commented] (NIFI-5249) Dockerfile enhancements

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500395#comment-16500395
 ] 

ASF GitHub Bot commented on NIFI-5249:
--

Github user pepov commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2747#discussion_r192787369
  
--- Diff: nifi-docker/dockermaven/pom.xml ---
@@ -44,7 +44,6 @@
 
 apache/nifi
 ${project.version}
-latest
--- End diff --

I removed it because it did not build both tags, just the last one, so I 
had to choose one. Also, this is only a dev/test config and does not affect the 
public image.


> Dockerfile enhancements
> ---
>
> Key: NIFI-5249
> URL: https://issues.apache.org/jira/browse/NIFI-5249
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Docker
>Reporter: Peter Wilcsinszky
>Priority: Minor
>
> * make environment variables more explicit
>  * create data and log directories
>  * add procps for process visibility inside the container



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #121: NIFIREG-173 Refactor metadata DB to be inde...

2018-06-04 Thread kevdoran
Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/121#discussion_r192790400
  
--- Diff: 
nifi-registry-framework/src/main/resources/db/migration/V2__Initial.sql ---
@@ -0,0 +1,58 @@
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements.  See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License.  You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
--- End diff --

I tried this schema in H2, Postgres, and MySQL. For the most part it looks 
good. I did get this error in MySQL:

```
Column length too big for column 'DESCRIPTION' (max = 21845); use BLOB or 
TEXT instead
```

I did a bit of research and this was the best explanation I found:
https://dev.mysql.com/doc/refman/5.7/en/column-count-limit.html

Seems like 65535 should be fine so long as the total row size stays below 
that limit. Maybe the 21845 value in the error message is due to a text 
encoding type (did not dig into it past that). In any case, to maintain MySQL 
compatibility, you may want to lower those max length sizes or change the 
description fields to TEXT, as I don't anticipate we will need to search on 
those.

Aside from that, this schema looks good to me as a clean starting point for 
the new database. Nice work.


---


[jira] [Commented] (NIFIREG-173) Allow metadata DB to use other DBs besides H2

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500400#comment-16500400
 ] 

ASF GitHub Bot commented on NIFIREG-173:


Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/121#discussion_r192790400
  
--- Diff: 
nifi-registry-framework/src/main/resources/db/migration/V2__Initial.sql ---
@@ -0,0 +1,58 @@
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements.  See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License.  You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
--- End diff --

I tried this schema in H2, Postgres, and MySQL. For the most part it looks 
good. I did get this error in MySQL:

```
Column length too big for column 'DESCRIPTION' (max = 21845); use BLOB or 
TEXT instead
```

I did a bit of research and this was the best explanation I found:
https://dev.mysql.com/doc/refman/5.7/en/column-count-limit.html

Seems like 65535 should be fine so long as the total row size stays below 
that limit. Maybe the 21845 value in the error message is due to a text 
encoding type (did not dig into it past that). In any case, to maintain MySQL 
compatibility, you may want to lower those max length sizes or change the 
description fields to TEXT, as I don't anticipate we will need to search on 
those.

Aside from that, this schema looks good to me as a clean starting point for 
the new database. Nice work.


> Allow metadata DB to use other DBs besides H2
> -
>
> Key: NIFIREG-173
> URL: https://issues.apache.org/jira/browse/NIFIREG-173
> Project: NiFi Registry
>  Issue Type: Improvement
>Affects Versions: 0.1.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>
> Now that we have the Git provider for flow storage which can be used to push 
> flows to a remote location, it would be nice to be able to leverage an 
> external DB for the metadata database.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #121: NIFIREG-173 Refactor metadata DB to be inde...

2018-06-04 Thread kevdoran
Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/121#discussion_r192790706
  
--- Diff: nifi-registry-docs/src/main/asciidoc/administration-guide.adoc ---
@@ -867,12 +867,32 @@ content of the flows saved to the registry. For 
further details on persistence p
 
 These properties define the settings for the Registry database, which 
keeps track of metadata about buckets and all items stored in buckets.
 
+The 0.1.0 release leveraged an embedded H2 database that was configured 
via the following properties:
+
 |
 |*Property*|*Description*
 |nifi.registry.db.directory|The location of the Registry database 
directory. The default value is `./database`.
 |nifi.registry.db.url.append|This property specifies additional arguments 
to add to the connection string for the Registry database. The default value 
should be used and should not be changed. It is: 
`;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE`.
 |
 
+The 0.2.0 release introduced a more flexible approach which allows 
leveraging an external database. This new approach
+is configured via the following properties:
--- End diff --

Good writeup. Clear and concise.


---


[jira] [Commented] (NIFIREG-173) Allow metadata DB to use other DBs besides H2

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500402#comment-16500402
 ] 

ASF GitHub Bot commented on NIFIREG-173:


Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/121#discussion_r192790706
  
--- Diff: nifi-registry-docs/src/main/asciidoc/administration-guide.adoc ---
@@ -867,12 +867,32 @@ content of the flows saved to the registry. For 
further details on persistence p
 
 These properties define the settings for the Registry database, which 
keeps track of metadata about buckets and all items stored in buckets.
 
+The 0.1.0 release leveraged an embedded H2 database that was configured 
via the following properties:
+
 |
 |*Property*|*Description*
 |nifi.registry.db.directory|The location of the Registry database 
directory. The default value is `./database`.
 |nifi.registry.db.url.append|This property specifies additional arguments 
to add to the connection string for the Registry database. The default value 
should be used and should not be changed. It is: 
`;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE`.
 |
 
+The 0.2.0 release introduced a more flexible approach which allows 
leveraging an external database. This new approach
+is configured via the following properties:
--- End diff --

Good writeup. Clear and concise.


> Allow metadata DB to use other DBs besides H2
> -
>
> Key: NIFIREG-173
> URL: https://issues.apache.org/jira/browse/NIFIREG-173
> Project: NiFi Registry
>  Issue Type: Improvement
>Affects Versions: 0.1.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>
> Now that we have the Git provider for flow storage which can be used to push 
> flows to a remote location, it would be nice to be able to leverage an 
> external DB for the metadata database.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5192) Allow expression language in 'Schema File' property for ValidateXML processor

2018-06-04 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5192:
---
Status: Patch Available  (was: Open)

> Allow expression language in 'Schema File' property for ValidateXML processor 
> --
>
> Key: NIFI-5192
> URL: https://issues.apache.org/jira/browse/NIFI-5192
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Bartłomiej Tartanus
>Priority: Minor
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> Is there any reason that the ValidateXML processor doesn't allow expression 
> language in the 'Schema File' property? If not, it would be useful to allow this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5263) Address auditing of Controller Service Referencing Components

2018-06-04 Thread Matt Gilman (JIRA)
Matt Gilman created NIFI-5263:
-

 Summary: Address auditing of Controller Service Referencing 
Components
 Key: NIFI-5263
 URL: https://issues.apache.org/jira/browse/NIFI-5263
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman
Assignee: Matt Gilman


When enabling/disabling Controller Services, the resulting action is recorded 
in the Flow Configuration History. However, the changes to the referencing 
components are not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (NIFI-5208) Editing a flow on a Disconnected Node

2018-06-04 Thread Matt Gilman (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reopened NIFI-5208:
---

Reopening due to a minor JavaScript bug that was introduced with this Jira and 
is preventing connections from being moved between components.

> Editing a flow on a Disconnected Node
> -
>
> Key: NIFI-5208
> URL: https://issues.apache.org/jira/browse/NIFI-5208
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.7.0
>
>
> Currently, editing a flow on a disconnected node is allowed. This feature 
> is useful when needing to debug a node-specific environmental issue prior to 
> re-joining the cluster. However, this can also lead to issues when the user 
> doesn't realize that the node they are viewing is disconnected. This was 
> never an issue in our 0.x baseline as viewing the disconnected node required 
> the user to manually direct their browser away from the NCM and towards the 
> disconnected node.
> In 1.x, this can happen transparently without the need for the user to 
> redirect their browser. There is a label at the top to indicate that the node 
> is disconnected but this is not sufficient. If the user continues with their 
> edits, it will make it difficult to re-join the cluster without manual 
> interventions to retain their changes.
> There is a dialog that should inform the user that the cluster connection 
> state has changed. However, there appears to be a regression that is 
> preventing that dialog from showing. We should restore this dialog and make 
> it confirm the user's intent to make changes in a disconnected state. 
> Furthermore, changes should be prevented without this confirmation. 
> Confirmation should happen anytime the cluster connected state changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5192) Allow expression language in 'Schema File' property for ValidateXML processor

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500447#comment-16500447
 ] 

ASF GitHub Bot commented on NIFI-5192:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2699
  
I took the liberty of adding the missing evaluateAttributeExpressions() 
call; the unit tests pass at that point, but if Expression Language is present, 
we can't count on the File Exists Validator to catch missing schema files. 
I added a check for file.exists() and unit tests for valid and invalid (with 
EL). Feel free to cherry-pick 
https://github.com/mattyb149/nifi/commit/a452382bed64cd4ddda69711e7382829726503ee
 into your branch and push back up here for review, or @MikeThomsen if you're 
cool with it you could just cherry pick his and my commits from 
https://github.com/mattyb149/nifi/tree/NIFI-5192 before merge.


> Allow expression language in 'Schema File' property for ValidateXML processor 
> --
>
> Key: NIFI-5192
> URL: https://issues.apache.org/jira/browse/NIFI-5192
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Bartłomiej Tartanus
>Priority: Minor
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> Is there any reason that the ValidateXML processor doesn't allow expression 
> language in the 'Schema File' property? If not, it would be useful to allow this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2699: [NIFI-5192] allow expression language in Schema File prope...

2018-06-04 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2699
  
I took the liberty of adding the missing evaluateAttributeExpressions() 
call; the unit tests pass at that point, but if Expression Language is present, 
we can't count on the File Exists Validator to catch missing schema files. 
I added a check for file.exists() and unit tests for valid and invalid (with 
EL). Feel free to cherry-pick 
https://github.com/mattyb149/nifi/commit/a452382bed64cd4ddda69711e7382829726503ee
 into your branch and push back up here for review, or @MikeThomsen if you're 
cool with it you could just cherry pick his and my commits from 
https://github.com/mattyb149/nifi/tree/NIFI-5192 before merge.


---


[GitHub] nifi pull request #2756: NIFI-5263: Fixing advice for auditing controller se...

2018-06-04 Thread mcgilman
GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2756

NIFI-5263: Fixing advice for auditing controller service actions

NIFI-5263:
- Fixing the advice that audits the method for updating controller service 
referencing components.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-5263

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2756.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2756


commit a55b2f76d26326fe5e431dddc8231022be456615
Author: Matt Gilman 
Date:   2018-06-04T16:21:17Z

NIFI-5263:
- Fixing the advice that audits the method for updating controller service 
referencing components.




---


[jira] [Commented] (NIFI-5263) Address auditing of Controller Service Referencing Components

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500455#comment-16500455
 ] 

ASF GitHub Bot commented on NIFI-5263:
--

GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2756

NIFI-5263: Fixing advice for auditing controller service actions

NIFI-5263:
- Fixing the advice that audits the method for updating controller service 
referencing components.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-5263

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2756.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2756


commit a55b2f76d26326fe5e431dddc8231022be456615
Author: Matt Gilman 
Date:   2018-06-04T16:21:17Z

NIFI-5263:
- Fixing the advice that audits the method for updating controller service 
referencing components.




> Address auditing of Controller Service Referencing Components
> -
>
> Key: NIFI-5263
> URL: https://issues.apache.org/jira/browse/NIFI-5263
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Minor
>
> When enabling/disabling Controller Services, the resulting action is recorded 
> in the Flow Configuration History. However, the changes to the referencing 
> components are not.
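
For context, a hedged sketch of how a Spring AOP audit advice around such a 
method can record the referencing-component changes; the pointcut, method 
names, and helper calls here are assumptions, not NiFi's actual auditor code:

{code:java}
// Hedged sketch only: the pointcut and the saveActions()/generateAuditRecords()
// helpers are hypothetical, not NiFi's auditor implementation.
@Around("execution(* updateControllerServiceReferencingComponents(..))")
public Object auditReferencingComponents(final ProceedingJoinPoint joinPoint) throws Throwable {
    final Object updatedReferences = joinPoint.proceed();
    // Record one action per referencing component so the change shows up
    // in the Flow Configuration History alongside the enable/disable action.
    saveActions(generateAuditRecords(updatedReferences));
    return updatedReferences;
}
{code}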



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2619: NIFI-5059 Updated MongoDBLookupService to be able t...

2018-06-04 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r192802068
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/MongoDBLookupService.java
 ---
@@ -52,68 +54,125 @@
 "The query is limited to the first result (findOne in the Mongo 
documentation). If no \"Lookup Value Field\" is specified " +
 "then the entire MongoDB result document minus the _id field will be 
returned as a record."
 )
-public class MongoDBLookupService extends MongoDBControllerService 
implements LookupService {
+public class MongoDBLookupService extends SchemaRegistryService implements 
LookupService {
+public static final PropertyDescriptor CONTROLLER_SERVICE = new 
PropertyDescriptor.Builder()
+.name("mongo-lookup-client-service")
+.displayName("Client Service")
+.description("A MongoDB controller service to use with this lookup 
service.")
+.required(true)
+.identifiesControllerService(MongoDBControllerService.class)
+.build();
 
 public static final PropertyDescriptor LOOKUP_VALUE_FIELD = new 
PropertyDescriptor.Builder()
-.name("mongo-lookup-value-field")
-.displayName("Lookup Value Field")
-.description("The field whose value will be returned when the 
lookup key(s) match a record. If not specified then the entire " +
-"MongoDB result document minus the _id field will be 
returned as a record.")
-.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
-.required(false)
-.build();
+.name("mongo-lookup-value-field")
+.displayName("Lookup Value Field")
+.description("The field whose value will be returned when the 
lookup key(s) match a record. If not specified then the entire " +
+"MongoDB result document minus the _id field will be 
returned as a record.")
+.addValidator(Validator.VALID)
+.required(false)
+.build();
+public static final PropertyDescriptor PROJECTION = new 
PropertyDescriptor.Builder()
+.name("mongo-lookup-projection")
+.displayName("Projection")
+.description("Specifies a projection for limiting which fields 
will be returned.")
+.required(false)
+.build();
 
 private String lookupValueField;
 
-private static final List lookupDescriptors;
-
-static {
-lookupDescriptors = new ArrayList<>();
-lookupDescriptors.addAll(descriptors);
-lookupDescriptors.add(LOOKUP_VALUE_FIELD);
-}
-
 @Override
 public Optional lookup(Map coordinates) throws 
LookupFailureException {
-Map clean = new HashMap<>();
-clean.putAll(coordinates);
+Map clean = coordinates.entrySet().stream()
+.filter(e -> !schemaNameProperty.equals(String.format("${%s}", 
e.getKey(
+.collect(Collectors.toMap(
+e -> e.getKey(),
+e -> e.getValue()
+));
 Document query = new Document(clean);
 
 if (coordinates.size() == 0) {
 throw new LookupFailureException("No keys were configured. 
Mongo query would return random documents.");
 }
 
 try {
-Document result = this.findOne(query);
+Document result = projection != null ? 
controllerService.findOne(query, projection) : controllerService.findOne(query);
 
 if(result == null) {
 return Optional.empty();
 } else if (!StringUtils.isEmpty(lookupValueField)) {
 return Optional.ofNullable(result.get(lookupValueField));
 } else {
-final List fields = new ArrayList<>();
+RecordSchema schema = loadSchema(coordinates);
 
-for (String key : result.keySet()) {
-if (key.equals("_id")) {
-continue;
-}
-fields.add(new RecordField(key, 
RecordFieldType.STRING.getDataType()));
-}
-
-final RecordSchema schema = new SimpleRecordSchema(fields);
-return Optional.ofNullable(new MapRecord(schema, result));
+RecordSchema toUse = schema != null ? schema : 
convertSchema(result);
+return Optional.ofNullable(new MapRecord(toUse, result));
 }
 } catch (Exception ex) {
 getLogger().error("Error during lookup {}", new Object[]{ 
query.toJson() }, ex);
 throw new LookupFailureExcepti

[GitHub] nifi pull request #2619: NIFI-5059 Updated MongoDBLookupService to be able t...

2018-06-04 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r192802039
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/MongoDBLookupService.java
 ---
@@ -52,68 +54,125 @@
 "The query is limited to the first result (findOne in the Mongo 
documentation). If no \"Lookup Value Field\" is specified " +
 "then the entire MongoDB result document minus the _id field will be 
returned as a record."
 )
-public class MongoDBLookupService extends MongoDBControllerService 
implements LookupService {
+public class MongoDBLookupService extends SchemaRegistryService implements 
LookupService {
+public static final PropertyDescriptor CONTROLLER_SERVICE = new 
PropertyDescriptor.Builder()
--- End diff --

I added to the property list.


---


[GitHub] nifi issue #2679: NIFI-5141: Updated regex for doubles to allow for numbers ...

2018-06-04 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2679
  
+1 LGTM, all tests pass. Merging to master based on Ed's +1. Thanks for the 
improvement (and to Ed for the review!)


---


[jira] [Commented] (NIFI-5141) ValidateRecord considers a record invalid if it has an integer value and schema says double, even if strict type checking is disabled

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500460#comment-16500460
 ] 

ASF GitHub Bot commented on NIFI-5141:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2679
  
+1 LGTM, all tests pass. Merging to master based on Ed's +1. Thanks for the 
improvement (and to Ed for the review!)


> ValidateRecord considers a record invalid if it has an integer value and 
> schema says double, even if strict type checking is disabled
> -
>
> Key: NIFI-5141
> URL: https://issues.apache.org/jira/browse/NIFI-5141
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>  Labels: Record, beginner, newbie, validation
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2619: NIFI-5059 Updated MongoDBLookupService to be able to detec...

2018-06-04 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2619
  
Accidentally rebased it a while ago and so I had to force push. Sorry about 
that.


---


[jira] [Commented] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500458#comment-16500458
 ] 

ASF GitHub Bot commented on NIFI-5059:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r192802039
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/MongoDBLookupService.java
 ---
@@ -52,68 +54,125 @@
 "The query is limited to the first result (findOne in the Mongo 
documentation). If no \"Lookup Value Field\" is specified " +
 "then the entire MongoDB result document minus the _id field will be 
returned as a record."
 )
-public class MongoDBLookupService extends MongoDBControllerService 
implements LookupService {
+public class MongoDBLookupService extends SchemaRegistryService implements 
LookupService {
+public static final PropertyDescriptor CONTROLLER_SERVICE = new 
PropertyDescriptor.Builder()
--- End diff --

I added to the property list.


> MongoDBLookupService should be able to determine a schema or have one provided
> --
>
> Key: NIFI-5059
> URL: https://issues.apache.org/jira/browse/NIFI-5059
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> MongoDBLookupService should have two schema handling modes:
>  # Where a schema is provided as a configuration parameter to be applied to 
> the Record object generated from the result document.
>  # A schema will be generated by examining the result object and building one 
> that roughly translates from BSON into the Record API.
> In both cases, the schema will be applied to the Mongo result Document object 
> that is returned if one comes back.
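
For mode 2, a minimal sketch of schema inference, loosely following the 
replaced code visible in the diffs above (the committed convertSchema may 
differ):

{code:java}
// Hedged sketch: infer a RecordSchema from a Mongo result Document by
// mapping every non-_id field to a STRING record field as a baseline.
private RecordSchema convertSchema(final Document result) {
    final List<RecordField> fields = new ArrayList<>();
    for (final String key : result.keySet()) {
        if (key.equals("_id")) {
            continue; // the _id field is excluded from the returned record
        }
        fields.add(new RecordField(key, RecordFieldType.STRING.getDataType()));
    }
    return new SimpleRecordSchema(fields);
}
{code}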



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500459#comment-16500459
 ] 

ASF GitHub Bot commented on NIFI-5059:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r192802068
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/MongoDBLookupService.java
 ---
@@ -52,68 +54,125 @@
 "The query is limited to the first result (findOne in the Mongo 
documentation). If no \"Lookup Value Field\" is specified " +
 "then the entire MongoDB result document minus the _id field will be 
returned as a record."
 )
-public class MongoDBLookupService extends MongoDBControllerService 
implements LookupService {
+public class MongoDBLookupService extends SchemaRegistryService implements 
LookupService {
+public static final PropertyDescriptor CONTROLLER_SERVICE = new 
PropertyDescriptor.Builder()
+.name("mongo-lookup-client-service")
+.displayName("Client Service")
+.description("A MongoDB controller service to use with this lookup 
service.")
+.required(true)
+.identifiesControllerService(MongoDBControllerService.class)
+.build();
 
 public static final PropertyDescriptor LOOKUP_VALUE_FIELD = new 
PropertyDescriptor.Builder()
-.name("mongo-lookup-value-field")
-.displayName("Lookup Value Field")
-.description("The field whose value will be returned when the 
lookup key(s) match a record. If not specified then the entire " +
-"MongoDB result document minus the _id field will be 
returned as a record.")
-.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
-.required(false)
-.build();
+.name("mongo-lookup-value-field")
+.displayName("Lookup Value Field")
+.description("The field whose value will be returned when the 
lookup key(s) match a record. If not specified then the entire " +
+"MongoDB result document minus the _id field will be 
returned as a record.")
+.addValidator(Validator.VALID)
+.required(false)
+.build();
+public static final PropertyDescriptor PROJECTION = new 
PropertyDescriptor.Builder()
+.name("mongo-lookup-projection")
+.displayName("Projection")
+.description("Specifies a projection for limiting which fields 
will be returned.")
+.required(false)
+.build();
 
 private String lookupValueField;
 
-private static final List lookupDescriptors;
-
-static {
-lookupDescriptors = new ArrayList<>();
-lookupDescriptors.addAll(descriptors);
-lookupDescriptors.add(LOOKUP_VALUE_FIELD);
-}
-
 @Override
 public Optional lookup(Map coordinates) throws 
LookupFailureException {
-Map clean = new HashMap<>();
-clean.putAll(coordinates);
+Map clean = coordinates.entrySet().stream()
+.filter(e -> !schemaNameProperty.equals(String.format("${%s}", 
e.getKey(
+.collect(Collectors.toMap(
+e -> e.getKey(),
+e -> e.getValue()
+));
 Document query = new Document(clean);
 
 if (coordinates.size() == 0) {
 throw new LookupFailureException("No keys were configured. 
Mongo query would return random documents.");
 }
 
 try {
-Document result = this.findOne(query);
+Document result = projection != null ? 
controllerService.findOne(query, projection) : controllerService.findOne(query);
 
 if(result == null) {
 return Optional.empty();
 } else if (!StringUtils.isEmpty(lookupValueField)) {
 return Optional.ofNullable(result.get(lookupValueField));
 } else {
-final List fields = new ArrayList<>();
+RecordSchema schema = loadSchema(coordinates);
 
-for (String key : result.keySet()) {
-if (key.equals("_id")) {
-continue;
-}
-fields.add(new RecordField(key, 
RecordFieldType.STRING.getDataType()));
-}
-
-final RecordSchema schema = new SimpleRecordSchema(fields);
-return Optional.ofNullable(new MapRecord(schema, result));
+RecordSchema toUse = schema != null ? schema : 
convertSchema(result);
+return 

[GitHub] nifi pull request #2679: NIFI-5141: Updated regex for doubles to allow for n...

2018-06-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2679


---


[jira] [Commented] (NIFI-5141) ValidateRecord considers a record invalid if it has an integer value and schema says double, even if strict type checking is disabled

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500461#comment-16500461
 ] 

ASF subversion and git services commented on NIFI-5141:
---

Commit 06d1276f0948fd44975a6c8e5758fd39148e5506 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=06d1276 ]

NIFI-5141: Updated regex for doubles to allow for numbers that have no decimal

NIFI-5141: Loosened regex for floating-point numbers to account for a decimal 
point followed by 0 digits, such as '13.', and also added unit tests

Signed-off-by: Matthew Burgess 

This closes #2679
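
As a rough illustration of the behavior described in the commit message (the 
exact pattern committed for NIFI-5141 may differ), a double-matching regex 
that tolerates a trailing decimal point:

{code:java}
// Hedged sketch; DOUBLE_PATTERN here is illustrative, not the committed regex.
import java.util.regex.Pattern;

public class DoubleRegexExample {
    // Optional sign, digits, then an optional '.' followed by zero or more
    // digits, so "13", "13." and "13.5" all match.
    private static final Pattern DOUBLE_PATTERN = Pattern.compile("-?\\d+(\\.\\d*)?");

    public static void main(String[] args) {
        System.out.println(DOUBLE_PATTERN.matcher("13").matches());   // true
        System.out.println(DOUBLE_PATTERN.matcher("13.").matches());  // true
        System.out.println(DOUBLE_PATTERN.matcher("13.5").matches()); // true
        System.out.println(DOUBLE_PATTERN.matcher("abc").matches());  // false
    }
}
{code}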


> ValidateRecord considers a record invalid if it has an integer value and 
> schema says double, even if strict type checking is disabled
> -
>
> Key: NIFI-5141
> URL: https://issues.apache.org/jira/browse/NIFI-5141
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>  Labels: Record, beginner, newbie, validation
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5141) ValidateRecord considers a record invalid if it has an integer value and schema says double, even if strict type checking is disabled

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500462#comment-16500462
 ] 

ASF subversion and git services commented on NIFI-5141:
---

Commit 06d1276f0948fd44975a6c8e5758fd39148e5506 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=06d1276 ]

NIFI-5141: Updated regex for doubles to allow for numbers that have no decimal

NIFI-5141: Loosened regex for floating-point numbers to account for a decimal 
point followed by 0 digits, such as '13.', and also added unit tests

Signed-off-by: Matthew Burgess 

This closes #2679


> ValidateRecord considers a record invalid if it has an integer value and 
> schema says double, even if strict type checking is disabled
> -
>
> Key: NIFI-5141
> URL: https://issues.apache.org/jira/browse/NIFI-5141
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>  Labels: Record, beginner, newbie, validation
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500463#comment-16500463
 ] 

ASF GitHub Bot commented on NIFI-5059:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2619
  
Accidentally rebased it a while ago and so I had to force push. Sorry about 
that.


> MongoDBLookupService should be able to determine a schema or have one provided
> --
>
> Key: NIFI-5059
> URL: https://issues.apache.org/jira/browse/NIFI-5059
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> MongoDBLookupService should have two schema handling modes:
>  # Where a schema is provided as a configuration parameter to be applied to 
> the Record object generated from the result document.
>  # A schema will be generated by examining the result object and building one 
> that roughly translates from BSON into the Record API.
> In both cases, the schema will be applied to the Mongo result Document object 
> that is returned if one comes back.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5141) ValidateRecord considers a record invalid if it has an integer value and schema says double, even if strict type checking is disabled

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500465#comment-16500465
 ] 

ASF GitHub Bot commented on NIFI-5141:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2679


> ValidateRecord considers a record invalid if it has an integer value and 
> schema says double, even if strict type checking is disabled
> -
>
> Key: NIFI-5141
> URL: https://issues.apache.org/jira/browse/NIFI-5141
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>  Labels: Record, beginner, newbie, validation
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5141) ValidateRecord considers a record invalid if it has an integer value and schema says double, even if strict type checking is disabled

2018-06-04 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5141:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ValidateRecord considers a record invalid if it has an integer value and 
> schema says double, even if strict type checking is disabled
> -
>
> Key: NIFI-5141
> URL: https://issues.apache.org/jira/browse/NIFI-5141
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>  Labels: Record, beginner, newbie, validation
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2619: NIFI-5059 Updated MongoDBLookupService to be able to detec...

2018-06-04 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2619
  
@mattyb149 once this and the ES one are merged, it would probably be a good 
time to discuss extracting the schema builder code into a utility class.


---


[GitHub] nifi pull request #2747: NIFI-5249 Dockerfile enhancements

2018-06-04 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2747#discussion_r192803756
  
--- Diff: nifi-docker/dockerhub/Dockerfile ---
@@ -25,28 +25,37 @@ ARG GID=1000
 ARG NIFI_VERSION=1.7.0
 ARG MIRROR=https://archive.apache.org/dist
 
-ENV NIFI_BASE_DIR /opt/nifi 
+ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME=${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} \
 NIFI_BINARY_URL=/nifi/${NIFI_VERSION}/nifi-${NIFI_VERSION}-bin.tar.gz
+ENV NIFI_PID_DIR=${NIFI_HOME}/run
+ENV NIFI_LOG_DIR=${NIFI_HOME}/logs
 
 ADD sh/ /opt/nifi/scripts/
 
-# Setup NiFi user
+# Setup NiFi user and create necessary directories
 RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut 
-d: -f1` \
 && useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi \
 && mkdir -p ${NIFI_HOME}/conf/templates \
+&& mkdir -p $NIFI_BASE_DIR/data \
+&& mkdir -p $NIFI_BASE_DIR/flowfile_repository \
+&& mkdir -p $NIFI_BASE_DIR/content_repository \
+&& mkdir -p $NIFI_BASE_DIR/provenance_repository \
+&& mkdir -p $NIFI_LOG_DIR \
 && chown -R nifi:nifi ${NIFI_BASE_DIR} \
 && apt-get update \
-&& apt-get install -y jq xmlstarlet
+&& apt-get install -y jq xmlstarlet procps
 
 USER nifi
 
 # Download, validate, and expand Apache NiFi binary.
 RUN curl -fSL ${MIRROR}/${NIFI_BINARY_URL} -o 
${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz \
-&& echo "$(curl 
https://archive.apache.org/dist/${NIFI_BINARY_URL}.sha256) 
*${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \
+&& echo "$(curl ${MIRROR}/${NIFI_BINARY_URL}.sha256) 
*${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \
--- End diff --

@pepov It is an automated build in Docker Hub.  The Dockerfile is used as 
shown without any external arguments when a rel/nifi- tag is created


---


[jira] [Commented] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500471#comment-16500471
 ] 

ASF GitHub Bot commented on NIFI-5059:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2619
  
@mattyb149 once this and the ES one are merged, it would probably be a good 
time to discuss extracting the schema builder code into a utility class.


> MongoDBLookupService should be able to determine a schema or have one provided
> --
>
> Key: NIFI-5059
> URL: https://issues.apache.org/jira/browse/NIFI-5059
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> MongoDBLookupService should have two schema handling modes:
>  # Where a schema is provided as a configuration parameter to be applied to 
> the Record object generated from the result document.
>  # A schema will be generated by examining the result object and building one 
> that roughly translates from BSON into the Record API.
> In both cases, the schema will be applied to the Mongo result Document object 
> that is returned if one comes back.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5249) Dockerfile enhancements

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500474#comment-16500474
 ] 

ASF GitHub Bot commented on NIFI-5249:
--

Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2747#discussion_r192803756
  
--- Diff: nifi-docker/dockerhub/Dockerfile ---
@@ -25,28 +25,37 @@ ARG GID=1000
 ARG NIFI_VERSION=1.7.0
 ARG MIRROR=https://archive.apache.org/dist
 
-ENV NIFI_BASE_DIR /opt/nifi 
+ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME=${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} \
 NIFI_BINARY_URL=/nifi/${NIFI_VERSION}/nifi-${NIFI_VERSION}-bin.tar.gz
+ENV NIFI_PID_DIR=${NIFI_HOME}/run
+ENV NIFI_LOG_DIR=${NIFI_HOME}/logs
 
 ADD sh/ /opt/nifi/scripts/
 
-# Setup NiFi user
+# Setup NiFi user and create necessary directories
 RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut 
-d: -f1` \
 && useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi \
 && mkdir -p ${NIFI_HOME}/conf/templates \
+&& mkdir -p $NIFI_BASE_DIR/data \
+&& mkdir -p $NIFI_BASE_DIR/flowfile_repository \
+&& mkdir -p $NIFI_BASE_DIR/content_repository \
+&& mkdir -p $NIFI_BASE_DIR/provenance_repository \
+&& mkdir -p $NIFI_LOG_DIR \
 && chown -R nifi:nifi ${NIFI_BASE_DIR} \
 && apt-get update \
-&& apt-get install -y jq xmlstarlet
+&& apt-get install -y jq xmlstarlet procps
 
 USER nifi
 
 # Download, validate, and expand Apache NiFi binary.
 RUN curl -fSL ${MIRROR}/${NIFI_BINARY_URL} -o 
${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz \
-&& echo "$(curl 
https://archive.apache.org/dist/${NIFI_BINARY_URL}.sha256) 
*${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \
+&& echo "$(curl ${MIRROR}/${NIFI_BINARY_URL}.sha256) 
*${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \
--- End diff --

@pepov It is an automated build in Docker Hub.  The Dockerfile is used as 
shown without any external arguments when a rel/nifi- tag is created


> Dockerfile enhancements
> ---
>
> Key: NIFI-5249
> URL: https://issues.apache.org/jira/browse/NIFI-5249
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Docker
>Reporter: Peter Wilcsinszky
>Priority: Minor
>
> * make environment variables more explicit
>  * create data and log directories
>  * add procps for process visibility inside the container



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2619: NIFI-5059 Updated MongoDBLookupService to be able to detec...

2018-06-04 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2619
  
Agreed, I'll try to get this one in today then take a look at the ES one.


---


[jira] [Commented] (NIFI-5059) MongoDBLookupService should be able to determine a schema or have one provided

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500481#comment-16500481
 ] 

ASF GitHub Bot commented on NIFI-5059:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2619
  
Agreed, I'll try to get this one in today then take a look at the ES one.


> MongoDBLookupService should be able to determine a schema or have one provided
> --
>
> Key: NIFI-5059
> URL: https://issues.apache.org/jira/browse/NIFI-5059
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> MongoDBLookupService should have two schema handling modes:
>  # Where a schema is provided as a configuration parameter to be applied to 
> the Record object generated from the result document.
>  # A schema will be generated by examining the result object and building one 
> that roughly translates from BSON into the Record API.
> In both cases, the schema will be applied to the Mongo result Document object 
> that is returned if one comes back.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2757: NIFI-5208: Fixing issue preventing connections from...

2018-06-04 Thread mcgilman
GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2757

NIFI-5208: Fixing issue preventing connections from being moved.

NIFI-5208:
- Ensuring nf-storage is injected where necessary.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-5208

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2757.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2757


commit af06c879638c194a649b9e28af64e80ab23ef9f5
Author: Matt Gilman 
Date:   2018-06-04T16:37:16Z

NIFI-5208:
- Ensuring nf-storage is injected where necessary.




---


[jira] [Commented] (NIFI-5208) Editing a flow on a Disconnected Node

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500491#comment-16500491
 ] 

ASF GitHub Bot commented on NIFI-5208:
--

GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2757

NIFI-5208: Fixing issue preventing connections from being moved.

NIFI-5208:
- Ensuring nf-storage is injected where necessary.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-5208

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2757.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2757


commit af06c879638c194a649b9e28af64e80ab23ef9f5
Author: Matt Gilman 
Date:   2018-06-04T16:37:16Z

NIFI-5208:
- Ensuring nf-storage is injected where necessary.




> Editing a flow on a Disconnected Node
> -
>
> Key: NIFI-5208
> URL: https://issues.apache.org/jira/browse/NIFI-5208
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.7.0
>
>
> Currently, editing a flow on a disconnected node is allowed. This feature 
> is useful when needing to debug a node-specific environmental issue prior to 
> re-joining the cluster. However, this can also lead to issues when the user 
> doesn't realize that the node they are viewing is disconnected. This was 
> never an issue in our 0.x baseline, as viewing the disconnected node required 
> the user to manually direct their browser away from the NCM and towards the 
> disconnected node.
> In 1.x, this can happen transparently without the need for the user to 
> redirect their browser. There is a label at the top to indicate that the node 
> is disconnected, but this is not sufficient. If the user continues with their 
> edits, it will make it difficult to re-join the cluster without manual 
> interventions to retain their changes.
> There is a dialog that should inform the user that the cluster connection 
> state has changed. However, there appears to be a regression that is 
> preventing that dialog from showing. We should restore this dialog and make 
> it confirm the user's intent to make changes in a disconnected state. 
> Furthermore, changes should be prevented without this confirmation. 
> Confirmation should happen any time the cluster connection state changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5263) Address auditing of Controller Service Referencing Components

2018-06-04 Thread Matt Gilman (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-5263:
--
Status: Patch Available  (was: Open)

> Address auditing of Controller Service Referencing Components
> -
>
> Key: NIFI-5263
> URL: https://issues.apache.org/jira/browse/NIFI-5263
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Minor
>
> When enabling/disabling Controller Services, the resulting action is recorded 
> in the Flow Configuration History. However, the changes to the referencing 
> components are not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5208) Editing a flow on a Disconnected Node

2018-06-04 Thread Matt Gilman (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-5208:
--
Status: Patch Available  (was: Reopened)

> Editing a flow on a Disconnected Node
> -
>
> Key: NIFI-5208
> URL: https://issues.apache.org/jira/browse/NIFI-5208
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.7.0
>
>
> Currently, editing a flow on a disconnected node is allowed. This feature 
> is useful when needing to debug a node-specific environmental issue prior to 
> re-joining the cluster. However, this can also lead to issues when the user 
> doesn't realize that the node they are viewing is disconnected. This was 
> never an issue in our 0.x baseline, as viewing the disconnected node required 
> the user to manually direct their browser away from the NCM and towards the 
> disconnected node.
> In 1.x, this can happen transparently without the need for the user to 
> redirect their browser. There is a label at the top to indicate that the node 
> is disconnected, but this is not sufficient. If the user continues with their 
> edits, it will make it difficult to re-join the cluster without manual 
> interventions to retain their changes.
> There is a dialog that should inform the user that the cluster connection 
> state has changed. However, there appears to be a regression that is 
> preventing that dialog from showing. We should restore this dialog and make 
> it confirm the user's intent to make changes in a disconnected state. 
> Furthermore, changes should be prevented without this confirmation. 
> Confirmation should happen any time the cluster connection state changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2758: NIFI-5261: Added JSON_VALIDATOR to StandardValidato...

2018-06-04 Thread zenfenan
GitHub user zenfenan opened a pull request:

https://github.com/apache/nifi/pull/2758

NIFI-5261: Added JSON_VALIDATOR to StandardValidators

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zenfenan/nifi NIFI-5261

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2758.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2758


commit 8fca7bca325d1bcabfdeb1a3903ed893f6dfcb50
Author: zenfenan 
Date:   2018-06-04T17:18:38Z

NIFI-5261: Added JSON_VALIDATOR to StandardValidators




---


[jira] [Commented] (NIFI-5261) Create a JSON validator

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500568#comment-16500568
 ] 

ASF GitHub Bot commented on NIFI-5261:
--

GitHub user zenfenan opened a pull request:

https://github.com/apache/nifi/pull/2758

NIFI-5261: Added JSON_VALIDATOR to StandardValidators

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zenfenan/nifi NIFI-5261

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2758.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2758


commit 8fca7bca325d1bcabfdeb1a3903ed893f6dfcb50
Author: zenfenan 
Date:   2018-06-04T17:18:38Z

NIFI-5261: Added JSON_VALIDATOR to StandardValidators




> Create a JSON validator
> ---
>
> Key: NIFI-5261
> URL: https://issues.apache.org/jira/browse/NIFI-5261
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Minor
>
> Create a StandardValidator that validates PropertyDescriptors that take a 
> JSON input.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #348: MINIFICPP-517: Add RTIMULib and create ba...

2018-06-04 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/348#discussion_r192816075
  
--- Diff: extensions/sensors/GetEnvironmentalSensors.cpp ---
@@ -0,0 +1,156 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "utils/ByteArrayCallback.h"
+#include "core/FlowFile.h"
+#include "core/logging/Logger.h"
+#include "core/ProcessContext.h"
+#include "core/Relationship.h"
+#include "GetEnvironmentalSensors.h"
+#include "io/DataStream.h"
+#include "io/StreamFactory.h"
+#include "ResourceClaim.h"
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+std::shared_ptr GetEnvironmentalSensors::id_generator_ 
= utils::IdGenerator::getIdGenerator();
+
+const char *GetEnvironmentalSensors::ProcessorName = 
"EnvironmentalSensors";
+
+core::Relationship GetEnvironmentalSensors::Success("success", "All files 
are routed to success");
+
+void GetEnvironmentalSensors::initialize() {
+  logger_->log_trace("Initializing EnvironmentalSensors");
+  // Set the supported properties
+  std::set properties;
+
+  setSupportedProperties(properties);
+  // Set the supported relationships
+  std::set relationships;
+  relationships.insert(Success);
+  setSupportedRelationships(relationships);
+}
+
+void GetEnvironmentalSensors::onSchedule(const 
std::shared_ptr &context, const 
std::shared_ptr &sessionFactory) {
+
+  imu = RTIMU::createIMU(&settings);
+  if (imu) {
+imu->IMUInit();
+imu->setGyroEnable(true);
+imu->setAccelEnable(true);
+  } else {
+throw std::runtime_error("RTIMU could not be initialized");
+  }
+
+  humidity_sensor_ = RTHumidity::createHumidity(&settings);
+  if (humidity_sensor_) {
+humidity_sensor_->humidityInit();
+  } else {
+throw std::runtime_error("RTHumidity could not be initialized");
+  }
+
+  pressure_sensor_ = RTPressure::createPressure(&settings);
+  if (pressure_sensor_) {
+pressure_sensor_->pressureInit();
+  } else {
+throw std::runtime_error("RTPressure could not be initialized");
+  }
+
+}
+
+GetEnvironmentalSensors::~GetEnvironmentalSensors() {
+  delete humidity_sensor_;
+  delete pressure_sensor_;
+}
+
+void GetEnvironmentalSensors::onTrigger(const 
std::shared_ptr &context, const 
std::shared_ptr &session) {
+
+  auto flow_file_ = session->create();
+
+  flow_file_->setSize(0);
+
+  if (imu->IMURead()) {
+RTIMU_DATA imuData = imu->getIMUData();
+auto vector = imuData.accel;
+std::string degrees = RTMath::displayDegrees("acceleration", vector);
+flow_file_->addAttribute("ACCELERATION", degrees);
+  }
+
+  RTIMU_DATA data;
+
+  bool have_sensor = false;
+
+  if (humidity_sensor_->humidityRead(data)) {
+if (data.humidityValid) {
+  have_sensor = true;
+  std::stringstream ss;
+  ss << std::fixed << std::setprecision(2) << data.humidity;
+  flow_file_->addAttribute("HUMIDITY", ss.str());
+}
+  }
+
+  if (pressure_sensor_->pressureRead(data)) {
+if (data.pressureValid) {
+  have_sensor = true;
+  {
+std::stringstream ss;
+ss << std::fixed << std::setprecision(2) << data.pressure;
+flow_file_->addAttribute("PRESSURE", ss.str());
+  }
+
+  if (data.temperatureValid) {
+std::stringstream ss;
+ss << std::fixed << std::setprecision(2) << data.temperature;
+flow_file_->addAttribute("TEMPERATURE", ss.str());
+  }
+
+}
+  }
+
+  if (have_sensor) {

[GitHub] nifi pull request #2758: NIFI-5261: Added JSON_VALIDATOR to StandardValidato...

2018-06-04 Thread zenfenan
Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2758#discussion_r192820713
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ---
@@ -60,23 +59,6 @@
 @InputRequirement(Requirement.INPUT_ALLOWED)
 @CapabilityDescription("Creates FlowFiles from documents in MongoDB")
 public class GetMongo extends AbstractMongoProcessor {
-public static final Validator DOCUMENT_VALIDATOR = (subject, value, 
context) -> {
--- End diff --

@MikeThomsen I have replaced this one as well as `AGG_VALIDATOR` with the 
new `JSON_VALIDATOR`. I have ensured that the tests run fine. Also ran a sample 
flow on a live MongoDB instance. Everything worked fine. Let me know if you 
find anything odd.
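
For reference, a hedged sketch of what a parse-based JSON validator can look 
like; the committed JSON_VALIDATOR in StandardValidators may be implemented 
differently, and the use of Jackson's ObjectMapper here is an assumption:

{code:java}
// Hedged sketch, not the committed implementation: accept a property
// value only if it parses as JSON.
public static final Validator JSON_VALIDATOR = (subject, input, context) -> {
    final ValidationResult.Builder builder = new ValidationResult.Builder()
            .subject(subject)
            .input(input);
    try {
        // Jackson here is an assumption; any JSON parser would do.
        new com.fasterxml.jackson.databind.ObjectMapper().readTree(input);
        builder.valid(true).explanation("Valid JSON");
    } catch (final Exception e) {
        builder.valid(false).explanation("Not valid JSON: " + e.getMessage());
    }
    return builder.build();
};
{code}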


---


[jira] [Commented] (MINIFICPP-517) Port sensor reading processors

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500572#comment-16500572
 ] 

ASF GitHub Bot commented on MINIFICPP-517:
--

Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/348#discussion_r192816075
  
--- Diff: extensions/sensors/GetEnvironmentalSensors.cpp ---
@@ -0,0 +1,156 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "utils/ByteArrayCallback.h"
+#include "core/FlowFile.h"
+#include "core/logging/Logger.h"
+#include "core/ProcessContext.h"
+#include "core/Relationship.h"
+#include "GetEnvironmentalSensors.h"
+#include "io/DataStream.h"
+#include "io/StreamFactory.h"
+#include "ResourceClaim.h"
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+std::shared_ptr GetEnvironmentalSensors::id_generator_ 
= utils::IdGenerator::getIdGenerator();
+
+const char *GetEnvironmentalSensors::ProcessorName = 
"EnvironmentalSensors";
+
+core::Relationship GetEnvironmentalSensors::Success("success", "All files 
are routed to success");
+
+void GetEnvironmentalSensors::initialize() {
+  logger_->log_trace("Initializing EnvironmentalSensors");
+  // Set the supported properties
+  std::set properties;
+
+  setSupportedProperties(properties);
+  // Set the supported relationships
+  std::set relationships;
+  relationships.insert(Success);
+  setSupportedRelationships(relationships);
+}
+
+void GetEnvironmentalSensors::onSchedule(const 
std::shared_ptr &context, const 
std::shared_ptr &sessionFactory) {
+
+  imu = RTIMU::createIMU(&settings);
+  if (imu) {
+imu->IMUInit();
+imu->setGyroEnable(true);
+imu->setAccelEnable(true);
+  } else {
+throw std::runtime_error("RTIMU could not be initialized");
+  }
+
+  humidity_sensor_ = RTHumidity::createHumidity(&settings);
+  if (humidity_sensor_) {
+humidity_sensor_->humidityInit();
+  } else {
+throw std::runtime_error("RTHumidity could not be initialized");
+  }
+
+  pressure_sensor_ = RTPressure::createPressure(&settings);
+  if (pressure_sensor_) {
+pressure_sensor_->pressureInit();
+  } else {
+throw std::runtime_error("RTPressure could not be initialized");
+  }
+
+}
+
+GetEnvironmentalSensors::~GetEnvironmentalSensors() {
+  delete humidity_sensor_;
+  delete pressure_sensor_;
+}
+
+void GetEnvironmentalSensors::onTrigger(const 
std::shared_ptr &context, const 
std::shared_ptr &session) {
+
+  auto flow_file_ = session->create();
+
+  flow_file_->setSize(0);
+
+  if (imu->IMURead()) {
+RTIMU_DATA imuData = imu->getIMUData();
+auto vector = imuData.accel;
+std::string degrees = RTMath::displayDegrees("acceleration", vector);
+flow_file_->addAttribute("ACCELERATION", degrees);
+  }
+
+  RTIMU_DATA data;
+
+  bool have_sensor = false;
+
+  if (humidity_sensor_->humidityRead(data)) {
+if (data.humidityValid) {
+  have_sensor = true;
+  std::stringstream ss;
+  ss << std::fixed << std::setprecision(2) << data.humidity;
+  flow_file_->addAttribute("HUMIDITY", ss.str());
+}
+  }
+
+  if (pressure_sensor_->pressureRead(data)) {
+if (data.pressureValid) {
+  have_sensor = true;
+  {
+std::stringstream ss;
+ss << std::fixed << std::setprecision(2) << data.pressure;
+flow_file_->addAttribute("PRESSURE", ss.str());
+  }
+
+  if (data.tem

[jira] [Commented] (NIFI-5261) Create a JSON validator

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500573#comment-16500573
 ] 

ASF GitHub Bot commented on NIFI-5261:
--

Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2758#discussion_r192820713
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ---
@@ -60,23 +59,6 @@
 @InputRequirement(Requirement.INPUT_ALLOWED)
 @CapabilityDescription("Creates FlowFiles from documents in MongoDB")
 public class GetMongo extends AbstractMongoProcessor {
-public static final Validator DOCUMENT_VALIDATOR = (subject, value, 
context) -> {
--- End diff --

@MikeThomsen I have replaced this one as well as `AGG_VALIDATOR` with the 
new `JSON_VALIDATOR`. I have ensured that the tests run fine. Also ran a sample 
flow on a live MongoDB instance. Everything worked fine. Let me know if you 
find anything odd.


> Create a JSON validator
> ---
>
> Key: NIFI-5261
> URL: https://issues.apache.org/jira/browse/NIFI-5261
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Minor
>
> Create a StandardValidator that validates PropertyDescriptors that take a 
> JSON input.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5264) Add parsing failure message in ValidateCSV

2018-06-04 Thread bsf (JIRA)
bsf created NIFI-5264:
-

 Summary: Add parsing failure message in ValidateCSV 
 Key: NIFI-5264
 URL: https://issues.apache.org/jira/browse/NIFI-5264
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: bsf


As a developer I would like to see an improvement on the ValidateCSV component 
when using the line by line validation strategy. It would be nice to have an 
option to add 2 fields into the flowfile:
 * field_name : the name of the field that failed the schema validation
 * field_description : the description of the validation error

This would help the user a lot to see what the issue is on each line.

Pentaho DI does something like this by enabling error handling.

Thanks a lot!
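
A hedged sketch of what the requested enhancement could look like inside the 
line-by-line validation path; the attribute names come from the request above, 
everything else is hypothetical:

{code:java}
// Hedged sketch of the requested behavior, not existing ValidateCsv code:
// on a per-line validation failure, attach which field failed and why.
private FlowFile annotateFailure(final ProcessSession session, FlowFile flowFile,
                                 final String failedFieldName, final String reason) {
    flowFile = session.putAttribute(flowFile, "field_name", failedFieldName);
    flowFile = session.putAttribute(flowFile, "field_description", reason);
    return flowFile; // route to the invalid relationship afterwards
}
{code}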



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5175) NiFi built with Java 1.8 needs to run on Java 9

2018-06-04 Thread Jeff Storck (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500582#comment-16500582
 ] 

Jeff Storck commented on NIFI-5175:
---

[~joewitt]

In attempting to test NiFi built on Java 1.8 running on Java 10, I was able to 
run a build from master on the docker openjdk:10 image:
{code:java}
docker run -it --rm -v 
~/Development/git-repos/nifi/nifi-assembly/target/nifi-1.7.0-SNAPSHOT-bin/nifi-1.7.0-SNAPSHOT:/nifi
 -p 18080:8080 openjdk:10 /bin/bash
{code}
The --add-modules=java.xml.bind arg is properly added to the command that 
starts NiFi:
{code:java}
2018-06-04 17:23:59,883 INFO [main] org.apache.nifi.bootstrap.Command Command: 
/docker-java-home/bin/java -classpath 
/nifi/./conf:/nifi/./lib/jetty-schemas-3.1.jar:/nifi/./lib/nifi-properties-1.7.0-SNAPSHOT.jar:/nifi/./lib/slf4j-api-1.7.25.jar:/nifi/./lib/javax.servlet-
api-3.1.0.jar:/nifi/./lib/nifi-api-1.7.0-SNAPSHOT.jar:/nifi/./lib/jcl-over-slf4j-1.7.25.jar:/nifi/./lib/logback-classic-1.2.3.jar:/nifi/./lib/jul-to-slf4j-1.7.25.jar:/nifi/./lib/log4j-over-slf4j-1.7.25.jar:/nifi/./lib/logback-core-1.2.3.jar:/nifi/./lib/nifi-runtime-1.7.
0-SNAPSHOT.jar:/nifi/./lib/nifi-nar-utils-1.7.0-SNAPSHOT.jar:/nifi/./lib/nifi-framework-api-1.7.0-SNAPSHOT.jar
 -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx512m -Xms512m 
-Djavax.security.auth.useSubjectCredsOnly=true 
-Djava.security.egd=file:/dev/urandom -Dsun.ne
t.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true 
-Djava.awt.headless=true -Djava.protocol.handler.pkgs=sun.net.www.protocol 
-XX:+UseG1GC -Dnifi.properties.file.path=/nifi/./conf/nifi.properties 
-Dnifi.bootstrap.listen.port=33115 -Dapp=NiFi -Dorg.apache
.nifi.bootstrap.config.log.dir=/nifi/logs --add-modules=java.xml.bind 
org.apache.nifi.NiFi
{code}
I'm still doing some investigation...

> NiFi built with Java 1.8 needs to run on Java 9
> ---
>
> Key: NIFI-5175
> URL: https://issues.apache.org/jira/browse/NIFI-5175
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
> Fix For: 1.7.0
>
>
> The following issues have been encountered while attempting to run a Java 
> 1.8-built NiFi on Java 9:
> ||Issue||Solution||Status||
> |JAXB classes cannot be found on the classpath|Add 
> "--add-modules=java.xml.bind" to the command that starts NiFi|Done|
> |NiFi bootstrap not able to determine PID, restarts NiFi after nifi.sh 
> stop|Detect if NiFi is running on Java 9, and reflectively invoke 
> Process.pid(), which was newly added to the Process API in Java 9|Done|
>  
> 
>  
> ||Unaddressed issues/warnings with NiFi compiled on Java 1.8 running on Java 
> 9+||Description||Solution||
> |WARNING: An illegal reflective access operation has occurred
>  ..._specific class usage snipped_...
>  WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
>  WARNING: All illegal access operations will be denied in a future 
> release|Reflective invocations are common in the code used in NiFi and its 
> dependencies in Java 1.8|Fully migrate to Java 9 and use 
> dependencies that are Java 9 compliant|
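
A minimal sketch of the reflective Process.pid() lookup described in the table 
above (Process.pid() is the real Java 9 API; the fallback branch and the child 
process used for the demo are illustrative assumptions):
{code:java}
import java.lang.reflect.Method;

public class PidSketch {

    // Returns the pid via Java 9's Process.pid() when available, using
    // reflection so the same Java 1.8 bytecode still runs on Java 8.
    static Long tryGetPid(Process process) {
        try {
            Method pidMethod = Process.class.getMethod("pid"); // Java 9+
            return (Long) pidMethod.invoke(process);
        } catch (Exception e) {
            return null; // Java 8: fall back to platform-specific logic
        }
    }

    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("sleep", "5").start(); // Unix-only demo
        System.out.println("pid: " + tryGetPid(p));
        p.destroy();
    }
}
{code}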



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5264) Add parsing failure message in ValidateCSV

2018-06-04 Thread bsf (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bsf updated NIFI-5264:
--
Description: 
As a developer I would like to see an improvement to the ValidateCSV component 
when using the line-by-line validation strategy. It would be nice to have an 
option to add 2 fields into the flowfile:
 * field_name : the name of the field that failed schema validation
 * field_description : the description of the validation error

This will greatly help the user understand the validation issue on each line.

Pentaho DI does something like this by enabling error handling.

Thanks a lot!

  was:
As a developer I would like to see an improvement to the ValidateCSV component 
when using the line-by-line validation strategy. It would be nice to have an 
option to add 2 fields into the flowfile:
 * field_name : the name of the field that failed schema validation
 * field_description : the description of the validation error

This will greatly help the user see what the issue is on each line.

Pentaho DI does something like this by enabling error handling.

Thanks a lot!


> Add parsing failure message in ValidateCSV 
> ---
>
> Key: NIFI-5264
> URL: https://issues.apache.org/jira/browse/NIFI-5264
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: bsf
>Priority: Blocker
>
> As a developer I would like to see an improvement to the ValidateCSV 
> component when using the line-by-line validation strategy. It would be nice 
> to have an option to add 2 fields into the flowfile:
>  * field_name : the name of the field that failed schema validation
>  * field_description : the description of the validation error
> This will greatly help the user understand the validation issue on each line.
> Pentaho DI does something like this by enabling error handling.
> Thanks a lot!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5264) Add parsing failure message in ValidateCSV

2018-06-04 Thread bsf (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bsf updated NIFI-5264:
--
Affects Version/s: 1.6.0

> Add parsing failure message in ValidateCSV 
> ---
>
> Key: NIFI-5264
> URL: https://issues.apache.org/jira/browse/NIFI-5264
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: bsf
>Priority: Blocker
>
> As a developer I would like to see an improvement to the ValidateCSV 
> component when using the line-by-line validation strategy. It would be nice 
> to have an option to add 2 fields into the flowfile:
>  * field_name : the name of the field that failed schema validation
>  * field_description : the description of the validation error
> This will greatly help the user understand the validation issue on each line.
> Pentaho DI does something like this by enabling error handling.
> Thanks a lot!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5264) Add parsing failure message in ValidateCSV

2018-06-04 Thread bsf (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bsf updated NIFI-5264:
--
Priority: Major  (was: Blocker)

> Add parsing failure message in ValidateCSV 
> ---
>
> Key: NIFI-5264
> URL: https://issues.apache.org/jira/browse/NIFI-5264
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: bsf
>Priority: Major
>
> As a developer I would like to see an improvement to the ValidateCSV 
> component when using the line-by-line validation strategy. It would be nice 
> to have an option to add 2 fields into the flowfile:
>  * field_name : the name of the field that failed schema validation
>  * field_description : the description of the validation error
> This will greatly help the user understand the validation issue on each line.
> Pentaho DI does something like this by enabling error handling.
> Thanks a lot!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5264) Add parsing failure message in ValidateCSV

2018-06-04 Thread bsf (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bsf updated NIFI-5264:
--
Description: 
As a developer I would like to see an improvement to the ValidateCSV component 
when using the line-by-line validation strategy. It would be nice to have an 
option to add 2 fields into the flowfile:
 * field_name : the name of the field that failed schema validation
 * field_description : the description of the validation error

This will greatly help the user understand the validation issue on each line.

If too complex, anything that provides information on the failed validation of 
the line against the schema would be more than welcome :)

Pentaho DI does something like this by enabling error handling.

Thanks a lot!

  was:
As a developer I would like to see an improvement to the ValidateCSV component 
when using the line-by-line validation strategy. It would be nice to have an 
option to add 2 fields into the flowfile:
 * field_name : the name of the field that failed schema validation
 * field_description : the description of the validation error

This will greatly help the user understand the validation issue on each line.

Pentaho DI does something like this by enabling error handling.

Thanks a lot!


> Add parsing failure message in ValidateCSV 
> ---
>
> Key: NIFI-5264
> URL: https://issues.apache.org/jira/browse/NIFI-5264
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: bsf
>Priority: Major
>
> As a developer I would like to see an improvement to the ValidateCSV 
> component when using the line-by-line validation strategy. It would be nice 
> to have an option to add 2 fields into the flowfile:
>  * field_name : the name of the field that failed schema validation
>  * field_description : the description of the validation error
> This will greatly help the user understand the validation issue on each line.
> If too complex, anything that provides information on the failed validation 
> of the line against the schema would be more than welcome :)
> Pentaho DI does something like this by enabling error handling.
> Thanks a lot!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2758: NIFI-5261: Added JSON_VALIDATOR to StandardValidato...

2018-06-04 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2758#discussion_r192826557
  
--- Diff: nifi-commons/nifi-utils/pom.xml ---
@@ -40,5 +40,10 @@
 nifi-api
 1.7.0-SNAPSHOT
 
+
--- End diff --

Jackson is used in more packages than Gson, so I think you should switch 
over to that unless you have a compelling reason.
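
For reference, a minimal Jackson-based take on the proposed validator might 
look like the sketch below (Validator, ValidationResult, and ValidationContext 
are NiFi's real API; the parse-and-report logic is an illustrative 
assumption, not the PR's implementation):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.nifi.components.ValidationContext;
import org.apache.nifi.components.ValidationResult;
import org.apache.nifi.components.Validator;

public final class JsonValidatorSketch implements Validator {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public ValidationResult validate(String subject, String input, ValidationContext context) {
        try {
            MAPPER.readTree(input); // throws on malformed JSON
            return new ValidationResult.Builder()
                    .subject(subject).input(input).valid(true).build();
        } catch (Exception e) {
            return new ValidationResult.Builder()
                    .subject(subject).input(input).valid(false)
                    .explanation("not valid JSON: " + e.getMessage()).build();
        }
    }
}
```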


---


[GitHub] nifi pull request #2758: NIFI-5261: Added JSON_VALIDATOR to StandardValidato...

2018-06-04 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2758#discussion_r192825698
  
--- Diff: 
nifi-commons/nifi-utils/src/test/java/org/apache/nifi/util/validator/TestStandardValidators.java
 ---
@@ -288,4 +288,42 @@ public void testiso8061InstantValidator() {
 vr = val.validate("foo", "2016-01-01T01:01:01.000Z", vc);
 assertTrue(vr.isValid());
 }
+
--- End diff --

This would be a great use case for Groovy instead of Java. I've started 
doing that with my unit tests because you can specify the JSON like this:

```
import static groovy.json.JsonOutput.*

def json = prettyPrint(toJson([
Name: "Crockford, Douglas"
]))
```

Not required, but worth thinking about because it's a lot cleaner and 
Groovy is allowed in tests.
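
Where a test has to stay in Java, a roughly equivalent construction with 
Jackson is only slightly more verbose; a sketch (the class and variable names 
here are illustrative):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class JsonLiteralSketch {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        ObjectNode node = mapper.createObjectNode();
        node.put("Name", "Crockford, Douglas");
        // Pretty-printed string, analogous to Groovy's prettyPrint(toJson(...))
        String json = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(node);
        System.out.println(json);
    }
}
```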


---


[jira] [Commented] (NIFI-5261) Create a JSON validator

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500611#comment-16500611
 ] 

ASF GitHub Bot commented on NIFI-5261:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2758#discussion_r192826557
  
--- Diff: nifi-commons/nifi-utils/pom.xml ---
@@ -40,5 +40,10 @@
 nifi-api
 1.7.0-SNAPSHOT
 
+
--- End diff --

Jackson is used in more packages than Gson, so I think you should switch 
over to that unless you have a compelling reason.


> Create a JSON validator
> ---
>
> Key: NIFI-5261
> URL: https://issues.apache.org/jira/browse/NIFI-5261
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Minor
>
> Create a StandardValidator that validates PropertyDescriptors that take a 
> JSON input.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5261) Create a JSON validator

2018-06-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500610#comment-16500610
 ] 

ASF GitHub Bot commented on NIFI-5261:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2758#discussion_r192825698
  
--- Diff: 
nifi-commons/nifi-utils/src/test/java/org/apache/nifi/util/validator/TestStandardValidators.java
 ---
@@ -288,4 +288,42 @@ public void testiso8061InstantValidator() {
 vr = val.validate("foo", "2016-01-01T01:01:01.000Z", vc);
 assertTrue(vr.isValid());
 }
+
--- End diff --

This would be a great use case for Groovy instead of Java. I've started 
doing that with my unit tests because you can specify the JSON like this:

```
import static groovy.json.JsonOutput.*

def json = prettyPrint(toJson([
Name: "Crockford, Douglas"
]))
```

Not required, but worth thinking about because it's a lot cleaner and 
Groovy is allowed in tests.


> Create a JSON validator
> ---
>
> Key: NIFI-5261
> URL: https://issues.apache.org/jira/browse/NIFI-5261
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Minor
>
> Create a StandardValidator that validates PropertyDescriptors that take a 
> JSON input.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5264) Add parsing failure message in ValidateCSV

2018-06-04 Thread bsf (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bsf updated NIFI-5264:
--
Description: 
As a developer I would like to see an improvement to the ValidateCSV component 
when using the line-by-line validation strategy. It would be nice to have an 
option to append 2 fields into the flowfile:
 * field_name : the name of the field that failed schema validation
 * field_description : the description of the validation error

This will greatly help the user understand the validation issue on each line.

If too complex, anything that provides information on the failed validation of 
the line against the schema would be more than welcome :)

Pentaho DI does something like this by enabling error handling.

Thanks a lot!

  was:
As a developer I would like to see an improvement to the ValidateCSV component 
when using the line-by-line validation strategy. It would be nice to have an 
option to add 2 fields into the flowfile:
 * field_name : the name of the field that failed schema validation
 * field_description : the description of the validation error

This will greatly help the user understand the validation issue on each line.

If too complex, anything that provides information on the failed validation of 
the line against the schema would be more than welcome :)

Pentaho DI does something like this by enabling error handling.

Thanks a lot!


> Add parsing failure message in ValidateCSV 
> ---
>
> Key: NIFI-5264
> URL: https://issues.apache.org/jira/browse/NIFI-5264
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: bsf
>Priority: Major
>
> As a developer I would like to see an improvement to the ValidateCSV 
> component when using the line-by-line validation strategy. It would be nice 
> to have an option to append 2 fields into the flowfile:
>  * field_name : the name of the field that failed schema validation
>  * field_description : the description of the validation error
> This will greatly help the user understand the validation issue on each line.
> If too complex, anything that provides information on the failed validation 
> of the line against the schema would be more than welcome :)
> Pentaho DI does something like this by enabling error handling.
> Thanks a lot!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-5175) NiFi built with Java 1.8 needs to run on Java 9

2018-06-04 Thread Jeff Storck (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500582#comment-16500582
 ] 

Jeff Storck edited comment on NIFI-5175 at 6/4/18 5:53 PM:
---

[~joewitt]

In attempting to test NiFi built on Java 1.8 running on Java 10, I was able to 
run a build from master on the docker openjdk:10 image:
{code:java}
docker run -it --rm -v 
~/Development/git-repos/nifi/nifi-assembly/target/nifi-1.7.0-SNAPSHOT-bin/nifi-1.7.0-SNAPSHOT:/nifi
 -p 18080:8080 openjdk:10 /bin/bash
{code}
Once the container is up, uname reports:
{code:java}
# uname -a
Linux 0629848c2fc2 4.9.87-linuxkit-aufs #1 SMP Wed Mar 14 15:12:16 UTC 2018 
x86_64 GNU/Linux{code}
The --add-modules=java.xml.bind arg is properly added to the command that 
starts NiFi:
{code:java}
2018-06-04 17:23:59,883 INFO [main] org.apache.nifi.bootstrap.Command Command: 
/docker-java-home/bin/java -classpath 
/nifi/./conf:/nifi/./lib/jetty-schemas-3.1.jar:/nifi/./lib/nifi-properties-1.7.0-SNAPSHOT.jar:/nifi/./lib/slf4j-api-1.7.25.jar:/nifi/./lib/javax.servlet-api-3.1.0.jar:/nifi/./lib/nifi-api-1.7.0-SNAPSHOT.jar:/nifi/./lib/jcl-over-slf4j-1.7.25.jar:/nifi/./lib/logback-classic-1.2.3.jar:/nifi/./lib/jul-to-slf4j-1.7.25.jar:/nifi/./lib/log4j-over-slf4j-1.7.25.jar:/nifi/./lib/logback-core-1.2.3.jar:/nifi/./lib/nifi-runtime-1.7.0-SNAPSHOT.jar:/nifi/./lib/nifi-nar-utils-1.7.0-SNAPSHOT.jar:/nifi/./lib/nifi-framework-api-1.7.0-SNAPSHOT.jar 
-Dorg.apache.jasper.compiler.disablejsr199=true -Xmx512m -Xms512m 
-Djavax.security.auth.useSubjectCredsOnly=true 
-Djava.security.egd=file:/dev/urandom 
-Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true 
-Djava.awt.headless=true -Djava.protocol.handler.pkgs=sun.net.www.protocol 
-XX:+UseG1GC -Dnifi.properties.file.path=/nifi/./conf/nifi.properties 
-Dnifi.bootstrap.listen.port=33115 -Dapp=NiFi 
-Dorg.apache.nifi.bootstrap.config.log.dir=/nifi/logs 
--add-modules=java.xml.bind org.apache.nifi.NiFi
{code}
I'm still doing some investigation...


was (Author: jtstorck):
[~joewitt]

In attempting to test NiFi built on Java 1.8 running on Java 10, I was able to 
run a build from master on the docker openjdk:10 image:
{code:java}
docker run -it --rm -v 
~/Development/git-repos/nifi/nifi-assembly/target/nifi-1.7.0-SNAPSHOT-bin/nifi-1.7.0-SNAPSHOT:/nifi
 -p 18080:8080 openjdk:10 /bin/bash
{code}
The --add-modules=java.xml.bind arg is properly added to the command that 
starts NiFi:
{code:java}
2018-06-04 17:23:59,883 INFO [main] org.apache.nifi.bootstrap.Command Command: 
/docker-java-home/bin/java -classpath 
/nifi/./conf:/nifi/./lib/jetty-schemas-3.1.jar:/nifi/./lib/nifi-properties-1.7.0-SNAPSHOT.jar:/nifi/./lib/slf4j-api-1.7.25.jar:/nifi/./lib/javax.servlet-api-3.1.0.jar:/nifi/./lib/nifi-api-1.7.0-SNAPSHOT.jar:/nifi/./lib/jcl-over-slf4j-1.7.25.jar:/nifi/./lib/logback-classic-1.2.3.jar:/nifi/./lib/jul-to-slf4j-1.7.25.jar:/nifi/./lib/log4j-over-slf4j-1.7.25.jar:/nifi/./lib/logback-core-1.2.3.jar:/nifi/./lib/nifi-runtime-1.7.0-SNAPSHOT.jar:/nifi/./lib/nifi-nar-utils-1.7.0-SNAPSHOT.jar:/nifi/./lib/nifi-framework-api-1.7.0-SNAPSHOT.jar 
-Dorg.apache.jasper.compiler.disablejsr199=true -Xmx512m -Xms512m 
-Djavax.security.auth.useSubjectCredsOnly=true 
-Djava.security.egd=file:/dev/urandom 
-Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true 
-Djava.awt.headless=true -Djava.protocol.handler.pkgs=sun.net.www.protocol 
-XX:+UseG1GC -Dnifi.properties.file.path=/nifi/./conf/nifi.properties 
-Dnifi.bootstrap.listen.port=33115 -Dapp=NiFi 
-Dorg.apache.nifi.bootstrap.config.log.dir=/nifi/logs 
--add-modules=java.xml.bind org.apache.nifi.NiFi
{code}
I'm still doing some investigation...

> NiFi built with Java 1.8 needs to run on Java 9
> ---
>
> Key: NIFI-5175
> URL: https://issues.apache.org/jira/browse/NIFI-5175
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
> Fix For: 1.7.0
>
>
> The following issues have been encountered while attempting to run a Java 
> 1.8-built NiFi on Java 9:
> ||Issue||Solution||Status||
> |JAXB classes cannot be found on the classpath|Add 
> "--add-modules=java.xml.bind" to the commant that starts NiFi|Done|
> |NiFI boostrap not able to determine PID, restarts nifi after nifi.sh 
> stop|Detect if NiFi is running on Java 9, and reflectively invoke 
> Process.pid(), which was newly added to the Process API in Java 9|Done|
>  
> 
>  
> ||Unaddressed issues/warnings with NiFi compiled on Java 1.8 running on Java 
> 9+||Description||Solution||
> |WARNING: An illegal reflective access operation has occurred
>  ..._specific class usage snipped_...
>  WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
>  WARNING: All illegal access operations will be denied in a future 
> release|Reflective invocations are common in the code used in NiFi and its 
> dependencies in Java 1.8|Fully migrate to Java 9 and use 
> dependencies that are Java 9 compliant|

[jira] [Updated] (NIFI-5264) Add parsing failure message in ValidateCSV

2018-06-04 Thread bsf (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bsf updated NIFI-5264:
--
Fix Version/s: 1.7.0

> Add parsing failure message in ValidateCSV 
> ---
>
> Key: NIFI-5264
> URL: https://issues.apache.org/jira/browse/NIFI-5264
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: bsf
>Priority: Major
> Fix For: 1.7.0
>
>
> As a developer I would like to see an improvement to the ValidateCSV 
> component when using the line-by-line validation strategy. It would be nice 
> to have an option to append 2 fields into the flowfile:
>  * field_name : the name of the field that failed schema validation
>  * field_description : the description of the validation error
> This will greatly help the user understand the validation issue on each line.
> If too complex, anything that provides information on the failed validation 
> of the line against the schema would be more than welcome :)
> Pentaho DI does something like this by enabling error handling.
> Thanks a lot!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (MINIFICPP-524) Default to building RocksDB

2018-06-04 Thread marco polo (JIRA)
marco polo created MINIFICPP-524:


 Summary: Default to building RocksDB
 Key: MINIFICPP-524
 URL: https://issues.apache.org/jira/browse/MINIFICPP-524
 Project: NiFi MiNiFi C++
  Issue Type: Bug
Reporter: marco polo
Assignee: marco polo
 Fix For: 0.6.0


We should default to using the built copy of RocksDB and then use the system 
copy if and only if the user requests that path. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5264) Add parsing failure message in ValidateCSV

2018-06-04 Thread bsf (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bsf updated NIFI-5264:
--
Description: 
As a developer I would like to see an improvement to the ValidateCSV component 
when using the line-by-line validation strategy. It would be nice to have an 
option to append 1 or 2 new fields into the flowfile on the invalid 
relationship:
 * field_name : the name of the field that failed schema validation
 * field_description : the description of the validation error

This will greatly help the user understand the validation issue on each line.

If too complex, anything that provides information on the failed validation of 
the line against the schema would be more than welcome :)

Pentaho DI does something like this by enabling error handling.

Thanks a lot!

  was:
As a developer I would like to see an improvement to the ValidateCSV component 
when using the line-by-line validation strategy. It would be nice to have an 
option to append 2 fields into the flowfile:
 * field_name : the name of the field that failed schema validation
 * field_description : the description of the validation error

This will greatly help the user understand the validation issue on each line.

If too complex, anything that provides information on the failed validation of 
the line against the schema would be more than welcome :)

Pentaho DI does something like this by enabling error handling.

Thanks a lot!


> Add parsing failure message in ValidateCSV 
> ---
>
> Key: NIFI-5264
> URL: https://issues.apache.org/jira/browse/NIFI-5264
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: bsf
>Priority: Major
> Fix For: 1.7.0
>
>
> As a developer I would like to see an improvement to the ValidateCSV 
> component when using the line-by-line validation strategy. It would be nice 
> to have an option to append 1 or 2 new fields into the flowfile on the 
> invalid relationship:
>  * field_name : the name of the field that failed schema validation
>  * field_description : the description of the validation error
> This will greatly help the user understand the validation issue on each line.
> If too complex, anything that provides information on the failed validation 
> of the line against the schema would be more than welcome :)
> Pentaho DI does something like this by enabling error handling.
> Thanks a lot!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi pull request #129: MINIFI-450 Handling closing of HTTP response ...

2018-06-04 Thread apiri
GitHub user apiri opened a pull request:

https://github.com/apache/nifi-minifi/pull/129

MINIFI-450 Handling closing of HTTP response in PullHttpChangeIngestor.

Thank you for submitting a contribution to Apache NiFi - MiNiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi-minifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under minifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under minifi-assembly?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apiri/nifi-minifi MINIFI-450

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi/pull/129.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #129


commit a6b7842eada964121a08bad14d0acdf5cdd9424c
Author: Aldrin Piri 
Date:   2018-06-04T18:18:15Z

MINIFI-450 Handling closing of HTTP response in PullHttpChangeIngestor.
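
The change the PR title describes, closing the HTTP response, generally takes 
the shape of the try-with-resources sketch below (a sketch assuming OkHttp, 
which PullHttpChangeIngestor uses; the URL and client setup are illustrative, 
not the actual patch):

```java
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class ResponseCloseSketch {
    public static void main(String[] args) throws Exception {
        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder()
                .url("http://localhost:8080/config") // illustrative URL
                .build();
        // okhttp3.Response is Closeable; try-with-resources guarantees the
        // underlying connection is released even if an exception is thrown.
        try (Response response = client.newCall(request).execute()) {
            System.out.println("HTTP " + response.code());
        }
    }
}
```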




---

