[jira] [Commented] (NIFI-1614) Simple Username/Password Authentication

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268065#comment-15268065
 ] 

ASF GitHub Bot commented on NIFI-1614:
--

Github user jvwing commented on the pull request:

https://github.com/apache/nifi/pull/267#issuecomment-216431685
  
What would you recommend for this pull request?  No utility?  A simpler 
hashing utility?


> Simple Username/Password Authentication
> ---
>
> Key: NIFI-1614
> URL: https://issues.apache.org/jira/browse/NIFI-1614
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: James Wing
>Priority: Minor
>
> NiFi should include a simple option for username/password authentication 
> backed by a local file store.  NiFi's existing certificate and LDAP 
> authentication schemes are very secure.  However, the configuration and setup 
> is complex, making them more suitable for long-lived corporate and government 
> installations, but less accessible for casual or short-term use.  Simple 
> username/password authentication would help more users secure more NiFi 
> installations beyond anonymous admin access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MINIFI-23) Remove provenance indexing

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFI-23?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268054#comment-15268054
 ] 

ASF GitHub Bot commented on MINIFI-23:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi/pull/18


> Remove provenance indexing
> --
>
> Key: MINIFI-23
> URL: https://issues.apache.org/jira/browse/MINIFI-23
> Project: Apache NiFi MiNiFi
>  Issue Type: Sub-task
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
>
> As these items are not being searched on the agent, the associated 
> functionality is unneeded.  These classes should be refactored/rewritten as 
> needed to cover the use cases being addressed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1614) Simple Username/Password Authentication

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268052#comment-15268052
 ] 

ASF GitHub Bot commented on NIFI-1614:
--

Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/267#discussion_r61836807
  
--- Diff: 
nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/CredentialsStore.java
 ---
@@ -0,0 +1,229 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.authentication.file;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.InvalidObjectException;
+import java.util.List;
+import javax.xml.XMLConstants;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBElement;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Marshaller;
+import javax.xml.bind.Unmarshaller;
+import javax.xml.bind.ValidationEvent;
+import javax.xml.bind.ValidationEventHandler;
+import javax.xml.transform.stream.StreamSource;
+import javax.xml.validation.Schema;
+import javax.xml.validation.SchemaFactory;
+
+import org.apache.nifi.authentication.file.generated.ObjectFactory;
--- End diff --

They are used to serialize and deserialize the XML credentials file.  What 
kind of issues are you experiencing?
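
A minimal sketch of how such JAXB-generated classes are typically used to load the XML file (the UserCredentialsList root type is taken from the CLI imports elsewhere in this pull request; the rest is illustrative only, not the PR's actual CredentialsStore code):

{code}
import java.io.File;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Unmarshaller;
import javax.xml.transform.stream.StreamSource;

import org.apache.nifi.authentication.file.generated.UserCredentialsList;

public class CredentialsXmlSketch {

    // Unmarshal by declared type so the sketch works whether or not the generated
    // class carries an @XmlRootElement annotation.
    public static UserCredentialsList load(File xmlFile) throws JAXBException {
        JAXBContext ctx = JAXBContext.newInstance(UserCredentialsList.class);
        Unmarshaller unmarshaller = ctx.createUnmarshaller();
        JAXBElement<UserCredentialsList> element =
                unmarshaller.unmarshal(new StreamSource(xmlFile), UserCredentialsList.class);
        return element.getValue();
    }
}
{code}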


> Simple Username/Password Authentication
> ---
>
> Key: NIFI-1614
> URL: https://issues.apache.org/jira/browse/NIFI-1614
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: James Wing
>Priority: Minor
>
> NiFi should include a simple option for username/password authentication 
> backed by a local file store.  NiFi's existing certificate and LDAP 
> authentication schemes are very secure.  However, the configuration and setup 
> is complex, making them more suitable for long-lived corporate and government 
> installations, but less accessible for casual or short-term use.  Simple 
> username/password authentication would help more users secure more NiFi 
> installations beyond anonymous admin access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] nifi-minifi git commit: MINIFI-23 Providing base implementation of a non-indexed, persistent provenance repository using NiFi interfaces and package structure.

2016-05-02 Thread aldrin
http://git-wip-us.apache.org/repos/asf/nifi-minifi/blob/fb554819/minifi-nar-bundles/minifi-provenance-repository-bundle/minifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/MiNiFiPersistentProvenanceRepository.java
--
diff --git 
a/minifi-nar-bundles/minifi-provenance-repository-bundle/minifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/MiNiFiPersistentProvenanceRepository.java
 
b/minifi-nar-bundles/minifi-provenance-repository-bundle/minifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/MiNiFiPersistentProvenanceRepository.java
new file mode 100644
index 000..ea66c38
--- /dev/null
+++ 
b/minifi-nar-bundles/minifi-provenance-repository-bundle/minifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/MiNiFiPersistentProvenanceRepository.java
@@ -0,0 +1,1640 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.provenance;
+
+import static org.apache.nifi.provenance.toc.TocUtil.getTocFile;
+
+import java.io.EOFException;
+import java.io.File;
+import java.io.FileFilter;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadFactory;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReadWriteLock;
+import java.util.concurrent.locks.ReentrantLock;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+import java.util.regex.Pattern;
+
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.provenance.expiration.ExpirationAction;
+import org.apache.nifi.provenance.expiration.FileRemovalAction;
+import org.apache.nifi.provenance.lucene.LuceneUtil;
+import org.apache.nifi.provenance.search.Query;
+import org.apache.nifi.provenance.search.QuerySubmission;
+import org.apache.nifi.provenance.search.SearchableField;
+import org.apache.nifi.provenance.serialization.RecordReader;
+import org.apache.nifi.provenance.serialization.RecordReaders;
+import org.apache.nifi.provenance.serialization.RecordWriter;
+import org.apache.nifi.provenance.serialization.RecordWriters;
+import org.apache.nifi.provenance.toc.TocReader;
+import org.apache.nifi.provenance.toc.TocUtil;
+import org.apache.nifi.reporting.Severity;
+import org.apache.nifi.util.FormatUtils;
+import org.apache.nifi.util.NiFiProperties;
+import org.apache.nifi.util.StopWatch;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+// TODO: When API, FlowController, and supporting classes are refactored/reimplemented migrate this class and its accompanying imports to minifi package structure
+public class MiNiFiPersistentProvenanceRepository implements ProvenanceEventRepository {
+
+public static final String EVENT_CATEGORY = "Provenance Repository";
+private static final String FILE_EXTENSION = ".prov";
+private static final String TEMP_FILE_SUFFIX = ".prov.part";
+private static final long PURGE_EVENT_MILLISECONDS = 2500L; //Determines the frequency over which the task to delete old events will occur
+public static final int SERIALIZATION_VERSION = 8;
+public static final Pattern NUMBER_PATTERN = Pattern.compile("\\d+");
+
+
+private static final Logger logger = LoggerFactory.getLogger(MiNiFiPersistentProvenanceRepository.class);

[3/3] nifi-minifi git commit: MINIFI-23 Providing base implementation of a non-indexed, persistent provenance repository using NiFi interfaces and package structure.

2016-05-02 Thread aldrin
MINIFI-23 Providing base implementation of a non-indexed, persistent
provenance repository using NiFi interfaces and package structure.

Exposing rollover time as configurable property for provenance repository.

This closes #18.


Project: http://git-wip-us.apache.org/repos/asf/nifi-minifi/repo
Commit: http://git-wip-us.apache.org/repos/asf/nifi-minifi/commit/fb554819
Tree: http://git-wip-us.apache.org/repos/asf/nifi-minifi/tree/fb554819
Diff: http://git-wip-us.apache.org/repos/asf/nifi-minifi/diff/fb554819

Branch: refs/heads/master
Commit: fb55481982f36bd74bc84e189a535d95b7b024aa
Parents: 66dbda9
Author: Aldrin Piri 
Authored: Thu Apr 21 11:27:06 2016 -0400
Committer: Aldrin Piri 
Committed: Tue May 3 00:08:43 2016 -0400

--
 minifi-api/pom.xml  |6 +
 .../provenance/ProvenanceEventRepository.java   |  104 ++
 minifi-assembly/pom.xml |   11 +-
 .../bootstrap/util/ConfigTransformer.java   |7 +-
 minifi-bootstrap/src/test/resources/config.yml  |3 +
 minifi-bootstrap/src/test/resources/default.yml |3 +
 minifi-commons/minifi-utils/pom.xml |   28 +
 minifi-commons/pom.xml  |6 +-
 minifi-docs/Properties_Guide.md |6 +
 .../minifi-framework-nar/pom.xml|1 -
 .../src/main/resources/conf/config.yml  |3 +
 .../minifi-provenance-reporting-nar/pom.xml |1 -
 .../minifi-provenance-reporting-bundle/pom.xml  |7 +-
 .../pom.xml |   57 +
 .../MiNiFiPersistentProvenanceRepository.java   | 1640 ++
 ...he.nifi.provenance.ProvenanceEventRepository |   15 +
 ...estMiNiFiPersistentProvenanceRepository.java |  691 
 .../org/apache/nifi/provenance/TestUtil.java|   82 +
 .../minifi-provenance-repository-nar/pom.xml|   42 +
 .../src/main/resources/META-INF/NOTICE  |  202 +++
 .../minifi-provenance-repository-bundle/pom.xml |   37 +
 minifi-nar-bundles/pom.xml  |1 +
 pom.xml |   24 +
 23 files changed, 2967 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/nifi-minifi/blob/fb554819/minifi-api/pom.xml
--
diff --git a/minifi-api/pom.xml b/minifi-api/pom.xml
index 660f357..01f4ad8 100644
--- a/minifi-api/pom.xml
+++ b/minifi-api/pom.xml
@@ -27,4 +27,10 @@ limitations under the License.
 minifi-api
 jar
 
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+        </dependency>
+    </dependencies>
 
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/nifi-minifi/blob/fb554819/minifi-api/src/main/java/org/apache/nifi/minifi/provenance/ProvenanceEventRepository.java
--
diff --git 
a/minifi-api/src/main/java/org/apache/nifi/minifi/provenance/ProvenanceEventRepository.java
 
b/minifi-api/src/main/java/org/apache/nifi/minifi/provenance/ProvenanceEventRepository.java
new file mode 100644
index 000..fc955b4
--- /dev/null
+++ 
b/minifi-api/src/main/java/org/apache/nifi/minifi/provenance/ProvenanceEventRepository.java
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.minifi.provenance;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.provenance.ProvenanceEventBuilder;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+
+/**
+ * This Repository houses Provenance Events. The repository is responsible for
+ * managing the life-cycle of the events, providing access to the events that it
+ * has stored, and providing query capabilities against the events.
+ *
+ */
+public interface ProvenanceEventRepository {
+
+/**
+ * Performs any initialization needed. This should be called only by the
+ * framework.
+ *
+ * @param eventReporter to report to
+ * @throws IOException if unable to initialize

[jira] [Commented] (NIFI-1118) Enable SplitText processor to limit line length and filter header lines

2016-05-02 Thread Karthik Narayanan (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267998#comment-15267998
 ] 

Karthik Narayanan commented on NIFI-1118:
-

Sure, I do see you have disabled RTN.

> Enable SplitText processor to limit line length and filter header lines
> ---
>
> Key: NIFI-1118
> URL: https://issues.apache.org/jira/browse/NIFI-1118
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Mark Bean
>Assignee: Mark Bean
> Fix For: 0.7.0
>
>
> Include the following functionality to the SplitText processor:
> 1) Maximum size limit of the split file(s)
> A new split file will be created if the next line to be added to the current 
> split file exceeds a user-defined maximum file size
> 2) Header line marker
> User-defined character(s) can be used to identify the header line(s) of the 
> data file rather than a predetermined number of lines
> These changes are additions, not a replacement of any property or behavior. 
> In the case of header line marker, the existing property "Header Line Count" 
> must be zero for the new property and behavior to be used.
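
A plain-Java sketch of the size-cap rule described in item 1 above (illustrative only, not the processor's implementation): a new split starts whenever appending the next line would push the current split past the configured maximum.

{code}
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class SizeCappedSplitSketch {

    // Group lines into splits of at most maxSplitBytes, starting a new split when the
    // next line would exceed the cap (a single oversized line still gets its own split).
    public static List<List<String>> split(List<String> lines, long maxSplitBytes) {
        List<List<String>> splits = new ArrayList<>();
        List<String> current = new ArrayList<>();
        long currentBytes = 0;
        for (String line : lines) {
            long lineBytes = line.getBytes(StandardCharsets.UTF_8).length + 1; // +1 for the newline
            if (!current.isEmpty() && currentBytes + lineBytes > maxSplitBytes) {
                splits.add(current);
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(line);
            currentBytes += lineBytes;
        }
        if (!current.isEmpty()) {
            splits.add(current);
        }
        return splits;
    }
}
{code}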



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1118) Enable SplitText processor to limit line length and filter header lines

2016-05-02 Thread Mark Bean (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267953#comment-15267953
 ] 

Mark Bean commented on NIFI-1118:
-

I believe the "current" modifications for NIFI-1118 are at: 
https://github.com/jskora/nifi.git. This version has the changes proposed in 
NIFI-1118, but has the Remove Trailing Newline property disabled. I will be 
adding that feature back in this week if you can be a little patient. The 
intent is to apply the version with both the new features and RTN property 
available in 0.7.0. I believe 1.0 will have the RTN property removed though.

> Enable SplitText processor to limit line length and filter header lines
> ---
>
> Key: NIFI-1118
> URL: https://issues.apache.org/jira/browse/NIFI-1118
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Mark Bean
>Assignee: Mark Bean
> Fix For: 0.7.0
>
>
> Include the following functionality to the SplitText processor:
> 1) Maximum size limit of the split file(s)
> A new split file will be created if the next line to be added to the current 
> split file exceeds a user-defined maximum file size
> 2) Header line marker
> User-defined character(s) can be used to identify the header line(s) of the 
> data file rather than a predetermined number of lines
> These changes are additions, not a replacement of any property or behavior. 
> In the case of header line marker, the existing property "Header Line Count" 
> must be zero for the new property and behavior to be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1152) Mock Framework allows processor to route to Relationships that the Processor does not support

2016-05-02 Thread Puspendu Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267950#comment-15267950
 ] 

Puspendu Banerjee commented on NIFI-1152:
-

[~markap14] & [~naveenmadhire] or anyone available please review.

> Mock Framework allows processor to route to Relationships that the Processor 
> does not support
> -
>
> Key: NIFI-1152
> URL: https://issues.apache.org/jira/browse/NIFI-1152
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Reporter: Mark Payne
>  Labels: beginner, newbie
> Attachments: 
> 0001-Fix-for-NIFI-1838-NIFI-1152-Code-modification-for-ty.patch
>
>
> If a processor calls ProcessSession.transfer(flowFile, 
> NON_EXISTENT_RELATIONSHIP) the NiFi framework will throw a 
> FlowFileHandlingException. However, the Mock Framework simply allows it and 
> does not throw any sort of Exception. This needs to be addressed so that the 
> Mock framework functions the same way as the normal NiFi framework.
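
For illustration, a sketch of the kind of unit test that exposes the gap (the processor and relationship names here are hypothetical; FlowFileHandlingException is what the real framework throws, while the current mock framework lets the transfer pass silently):

{code}
import java.util.Collections;
import java.util.Set;

import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;
import org.apache.nifi.util.TestRunner;
import org.apache.nifi.util.TestRunners;

// Hypothetical processor that declares "success" but routes to an undeclared relationship.
public class RoutesToUndeclaredRelationship extends AbstractProcessor {

    static final Relationship REL_SUCCESS = new Relationship.Builder().name("success").build();
    static final Relationship REL_UNDECLARED = new Relationship.Builder().name("bogus").build();

    @Override
    public Set<Relationship> getRelationships() {
        return Collections.singleton(REL_SUCCESS);
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        // The real framework throws FlowFileHandlingException here; the mock framework currently does not.
        session.transfer(session.create(), REL_UNDECLARED);
    }

    public static void main(String[] args) {
        TestRunner runner = TestRunners.newTestRunner(new RoutesToUndeclaredRelationship());
        runner.run(); // with the proposed fix, this should fail the same way the real framework does
    }
}
{code}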



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1838) Groovy Test Scripts will require refactoring if we implement NIFI-1152

2016-05-02 Thread Puspendu Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Puspendu Banerjee updated NIFI-1838:

Attachment: 0001-Fix-for-NIFI-1838-NIFI-1152-Code-modification-for-ty.patch

patch

> Groovy Test Scripts will require refactoring if we implement NIFI-1152
> --
>
> Key: NIFI-1838
> URL: https://issues.apache.org/jira/browse/NIFI-1838
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 0.6.1
>Reporter: Puspendu Banerjee
>  Labels: patch
> Fix For: 1.0.0
>
> Attachments: 
> 0001-Fix-for-NIFI-1838-NIFI-1152-Code-modification-for-ty.patch
>
>
> Groovy Test Scripts will require refactoring if we implement NIFI-1152 as they 
> don't define Relationships properly



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-981) Add support for Hive JDBC / ExecuteSQL

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267940#comment-15267940
 ] 

ASF GitHub Bot commented on NIFI-981:
-

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/384#discussion_r61832445
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -52,15 +54,27 @@
 @EventDriven
 @InputRequirement(Requirement.INPUT_ALLOWED)
 @Tags({"hive", "sql", "select", "jdbc", "query", "database"})
-@CapabilityDescription("Execute provided HiveQL SELECT query against a 
Hive database connection. Query result will be converted to Avro format."
+@CapabilityDescription("Execute provided HiveQL SELECT query against a 
Hive database connection. Query result will be converted to Avro or CSV format."
 + " Streaming is used so arbitrarily large result sets are 
supported. This processor can be scheduled to run on "
 + "a timer, or cron expression, using the standard scheduling 
methods, or it can be triggered by an incoming FlowFile. "
 + "If it is triggered by an incoming FlowFile, then attributes of 
that FlowFile will be available when evaluating the "
 + "select query. FlowFile attribute 'executehiveql.row.count' 
indicates how many rows were selected.")
-public class ExecuteHiveQL extends AbstractHiveQLProcessor {
+@WritesAttributes({
+@WritesAttribute(attribute = "mime.type", description = "Sets the 
MIME type for the outgoing flowfile to application/avro-binary for Avro or 
text/csv for CSV."),
+@WritesAttribute(attribute = "filename", description = "Adds .avro 
or .csv to the filename attribute depending on which output format is 
selected."),
+@WritesAttribute(attribute = "executehiveql.row.count", 
description = "Indicates how many rows were selected/returned by the query.")
--- End diff --

Nit-picking here, but given the rename of the processor, do we want this to 
be selecthiveql.row.count?


> Add support for Hive JDBC / ExecuteSQL
> --
>
> Key: NIFI-981
> URL: https://issues.apache.org/jira/browse/NIFI-981
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Matt Burgess
>
> In this mailing list thread from September 2015 "NIFI DBCP connection pool 
> not working for hive" the main thrust of the conversation is to provide 
> proper support for delivering data to Hive.  Hive's JDBC driver appears to 
> have dependencies on Hadoop libraries.  We need to be careful/thoughtful 
> about how to best support this so that different versions of Hadoop distros 
> can be supported (potentially in parallel on the same flow).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-981) Add support for Hive JDBC / ExecuteSQL

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267941#comment-15267941
 ] 

ASF GitHub Bot commented on NIFI-981:
-

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/384#discussion_r61832456
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/SelectHiveQL.java
 ---
@@ -52,15 +54,27 @@
 @EventDriven
 @InputRequirement(Requirement.INPUT_ALLOWED)
 @Tags({"hive", "sql", "select", "jdbc", "query", "database"})
-@CapabilityDescription("Execute provided HiveQL SELECT query against a 
Hive database connection. Query result will be converted to Avro format."
+@CapabilityDescription("Execute provided HiveQL SELECT query against a 
Hive database connection. Query result will be converted to Avro or CSV format."
 + " Streaming is used so arbitrarily large result sets are 
supported. This processor can be scheduled to run on "
 + "a timer, or cron expression, using the standard scheduling 
methods, or it can be triggered by an incoming FlowFile. "
 + "If it is triggered by an incoming FlowFile, then attributes of 
that FlowFile will be available when evaluating the "
 + "select query. FlowFile attribute 'executehiveql.row.count' 
indicates how many rows were selected.")
-public class ExecuteHiveQL extends AbstractHiveQLProcessor {
+@WritesAttributes({
+@WritesAttribute(attribute = "mime.type", description = "Sets the 
MIME type for the outgoing flowfile to application/avro-binary for Avro or 
text/csv for CSV."),
+@WritesAttribute(attribute = "filename", description = "Adds .avro 
or .csv to the filename attribute depending on which output format is 
selected."),
+@WritesAttribute(attribute = "executehiveql.row.count", 
description = "Indicates how many rows were selected/returned by the query.")
+})
+public class SelectHiveQL extends AbstractHiveQLProcessor {
 
 public static final String RESULT_ROW_COUNT = 
"executehiveql.row.count";
--- End diff --

same as above


> Add support for Hive JDBC / ExecuteSQL
> --
>
> Key: NIFI-981
> URL: https://issues.apache.org/jira/browse/NIFI-981
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Matt Burgess
>
> In this mailing list thread from September 2015 "NIFI DBCP connection pool 
> not working for hive" the main thrust of the conversation is to provide 
> proper support for delivering data to Hive.  Hive's JDBC driver appears to 
> have dependencies on Hadoop libraries.  We need to be careful/thoughtful 
> about how to best support this so that different versions of Hadoop distros 
> can be supported (potentially in parallel on the same flow).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1614) Simple Username/Password Authentication

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267939#comment-15267939
 ] 

ASF GitHub Bot commented on NIFI-1614:
--

Github user alopresto commented on the pull request:

https://github.com/apache/nifi/pull/267#issuecomment-216417900
  
I think my earlier comments may have been unclear or ambiguous. I do not 
believe we need a full command-line interface for modifying the configuration 
file, as hand-editing the files is the existing norm. I simply meant that the 
process of protecting a raw password with bcrypt is not an "in-head" operation 
for most instance admins, so we should provide a utility to perform that 
operation. 

While I very much respect the effort that went into the supporting 
infrastructure, I think it is overkill and not consistent with the global 
approach. 
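
A minimal sketch of that hashing step, assuming the jBCrypt library (the PR may use a different bcrypt implementation); the output hash is what an admin would paste into the credentials file:

{code}
import org.mindrot.jbcrypt.BCrypt;

public class BcryptSketch {

    // Turn a raw password into a bcrypt hash suitable for storing in the credentials file.
    public static String hash(char[] rawPassword) {
        return BCrypt.hashpw(new String(rawPassword), BCrypt.gensalt(12)); // 12 log rounds
    }

    // Verify a login attempt against the stored hash.
    public static boolean verify(char[] rawPassword, String storedHash) {
        return BCrypt.checkpw(new String(rawPassword), storedHash);
    }
}
{code}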


> Simple Username/Password Authentication
> ---
>
> Key: NIFI-1614
> URL: https://issues.apache.org/jira/browse/NIFI-1614
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: James Wing
>Priority: Minor
>
> NiFi should include a simple option for username/password authentication 
> backed by a local file store.  NiFi's existing certificate and LDAP 
> authentication schemes are very secure.  However, the configuration and setup 
> is complex, making them more suitable for long-lived corporate and government 
> installations, but less accessible for casual or short-term use.  Simple 
> username/password authentication would help more users secure more NiFi 
> installations beyond anonymous admin access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1152) Mock Framework allows processor to route to Relationships that the Processor does not support

2016-05-02 Thread Puspendu Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Puspendu Banerjee updated NIFI-1152:

Attachment: 0001-Fix-for-NIFI-1838-NIFI-1152-Code-modification-for-ty.patch

Patch for NIFI-1838 & NIFI-1152 & Code modification for typeSafety

> Mock Framework allows processor to route to Relationships that the Processor 
> does not support
> -
>
> Key: NIFI-1152
> URL: https://issues.apache.org/jira/browse/NIFI-1152
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Reporter: Mark Payne
>  Labels: beginner, newbie
> Attachments: 
> 0001-Fix-for-NIFI-1838-NIFI-1152-Code-modification-for-ty.patch
>
>
> If a processor calls ProcessSession.transfer(flowFile, 
> NON_EXISTENT_RELATIONSHIP) the NiFi framework will throw a 
> FlowFileHandlingException. However, the Mock Framework simply allows it and 
> does not throw any sort of Exception. This needs to be addressed so that the 
> Mock framework functions the same way as the normal NiFi framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1614) Simple Username/Password Authentication

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267931#comment-15267931
 ] 

ASF GitHub Bot commented on NIFI-1614:
--

Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/267#discussion_r61832014
  
--- Diff: 
nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/CredentialsStore.java
 ---
@@ -0,0 +1,229 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.authentication.file;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.InvalidObjectException;
+import java.util.List;
+import javax.xml.XMLConstants;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBElement;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Marshaller;
+import javax.xml.bind.Unmarshaller;
+import javax.xml.bind.ValidationEvent;
+import javax.xml.bind.ValidationEventHandler;
+import javax.xml.transform.stream.StreamSource;
+import javax.xml.validation.Schema;
+import javax.xml.validation.SchemaFactory;
+
+import org.apache.nifi.authentication.file.generated.ObjectFactory;
--- End diff --

I'm getting a number of issues building the project with these generated 
classes. Why do they need to be generated?


> Simple Username/Password Authentication
> ---
>
> Key: NIFI-1614
> URL: https://issues.apache.org/jira/browse/NIFI-1614
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: James Wing
>Priority: Minor
>
> NiFi should include a simple option for username/password authentication 
> backed by a local file store.  NiFi's existing certificate and LDAP 
> authentication schemes are very secure.  However, the configuration and setup 
> is complex, making them more suitable for long-lived corporate and government 
> installations, but less accessible for casual or short-term use.  Simple 
> username/password authentication would help more users secure more NiFi 
> installations beyond anonymous admin access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1838) Groovy Test Scripts will require refactoring if we implement NIFI-1152

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267912#comment-15267912
 ] 

ASF GitHub Bot commented on NIFI-1838:
--

Github user PuspenduBanerjee commented on the pull request:

https://github.com/apache/nifi/pull/400#issuecomment-216414833
  
Updated to address the fix for NIFI-1838 & NIFI-1152


> Groovy Test Scripts will require refactoring if we implement NIFI-1152
> --
>
> Key: NIFI-1838
> URL: https://issues.apache.org/jira/browse/NIFI-1838
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 0.6.1
>Reporter: Puspendu Banerjee
>
> Groovy Test Scripts will require refactoring if we implement NIFI-1152 as they 
> don't define Relationships properly



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1662) Improve Expression Language to Enable Working with Decimals

2016-05-02 Thread Joseph Percivall (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267862#comment-15267862
 ] 

Joseph Percivall commented on NIFI-1662:


As a note: this will need to take into account the PriorityAttributePrioritizer 
as well.

https://github.com/apache/nifi/blob/ad73a23affe32e77a5295c829a4958da440d24cb/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-prioritizers/src/main/java/org/apache/nifi/prioritizer/PriorityAttributePrioritizer.java

> Improve Expression Language to Enable Working with Decimals
> ---
>
> Key: NIFI-1662
> URL: https://issues.apache.org/jira/browse/NIFI-1662
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Percivall
>Assignee: Joseph Percivall
>
> Currently the math operations in Expression Language use Longs to evaluate 
> numbers. This leads to any decimal places getting truncated when performing 
> operations like divide. 
> EL should be improved to enable the user to evaluate math expressions using 
> doubles.
> Another desired portion of this would be to open up the static Math class [1] 
> methods (using reflection) to further enable working with Decimals.
> [1] https://docs.oracle.com/javase/7/docs/api/java/lang/Math.html
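
A plain-Java illustration of the truncation being described (long arithmetic drops the fractional part, which is how EL math behaves today; evaluating with doubles would preserve it):

{code}
public class DecimalTruncationSketch {
    public static void main(String[] args) {
        long longResult = 10L / 3L;        // 3 -- fraction truncated, as long-based divide behaves today
        double doubleResult = 10.0 / 3.0;  // 3.3333... -- what double-based evaluation would preserve
        System.out.println(longResult);
        System.out.println(doubleResult);
    }
}
{code}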



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1838) Groovy Test Scripts will require refactoring if we implement NIFI-1152

2016-05-02 Thread Puspendu Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Puspendu Banerjee updated NIFI-1838:

Summary: Groovy Test Scripts will require refactoring if we implement 
NIFI-1152  (was: Groovy Test Scripts will require refractoring we implement 
NIFI-1152)

> Groovy Test Scripts will require refactoring if we implement NIFI-1152
> --
>
> Key: NIFI-1838
> URL: https://issues.apache.org/jira/browse/NIFI-1838
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 0.6.1
>Reporter: Puspendu Banerjee
>
> Groovy Test Scripts will require refactoring if we implement NIFI-1152 as they 
> don't define Relationships properly



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1838) Groovy Test Scripts will require refractoring we implement NIFI-1152

2016-05-02 Thread Puspendu Banerjee (JIRA)
Puspendu Banerjee created NIFI-1838:
---

 Summary: Groovy Test Scripts will require refractoring we 
implement NIFI-1152
 Key: NIFI-1838
 URL: https://issues.apache.org/jira/browse/NIFI-1838
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.6.1, 1.0.0
Reporter: Puspendu Banerjee


Groovy Test Scripts will require refactoring if we implement NIFI-1152 as they 
don't define Relationships properly



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1614) Simple Username/Password Authentication

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267718#comment-15267718
 ] 

ASF GitHub Bot commented on NIFI-1614:
--

Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/267#discussion_r61821120
  
--- Diff: 
nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/CredentialsCLI.java
 ---
@@ -0,0 +1,207 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.authentication.file;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.util.ArrayList;
+
+import org.apache.nifi.authentication.file.generated.UserCredentials;
+import org.apache.nifi.authentication.file.generated.UserCredentialsList;
+
+
+/**
+ * Command-line interface for working with a {@link CredentialsStore}
+ * persisted as an XML file.
+ *
+ * Usage:
+ * 
+ *   list credentials.xml
+ *   add credentials.xml admin password
--- End diff --

Thanks, I'll try that method.


> Simple Username/Password Authentication
> ---
>
> Key: NIFI-1614
> URL: https://issues.apache.org/jira/browse/NIFI-1614
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: James Wing
>Priority: Minor
>
> NiFi should include a simple option for username/password authentication 
> backed by a local file store.  NiFi's existing certificate and LDAP 
> authentication schemes are very secure.  However, the configuration and setup 
> is complex, making them more suitable for long-lived corporate and government 
> installations, but less accessible for casual or short-term use.  Simple 
> username/password authentication would help more users secure more NiFi 
> installations beyond anonymous admin access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1614) Simple Username/Password Authentication

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267653#comment-15267653
 ] 

ASF GitHub Bot commented on NIFI-1614:
--

Github user alopresto commented on a diff in the pull request:

https://github.com/apache/nifi/pull/267#discussion_r61817392
  
--- Diff: 
nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/CredentialsCLI.java
 ---
@@ -0,0 +1,207 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.authentication.file;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.util.ArrayList;
+
+import org.apache.nifi.authentication.file.generated.UserCredentials;
+import org.apache.nifi.authentication.file.generated.UserCredentialsList;
+
+
+/**
+ * Command-line interface for working with a {@link CredentialsStore}
+ * persisted as an XML file.
+ *
+ * Usage:
+ * 
+ *   list credentials.xml
+ *   add credentials.xml admin password
--- End diff --

Accepting the raw password on the command line will mean that it is 
persisted in the terminal history and available to any other processes running. 
It is more secure to use 
[Console#readPassword()](https://docs.oracle.com/javase/7/docs/api/java/io/Console.html#readPassword%28%29)
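
A minimal sketch of that approach (illustrative only, not the PR's code): prompt interactively so the raw password never appears as a command-line argument.

{code}
import java.io.Console;

public class ReadPasswordSketch {

    // Prompt on the console instead of accepting the password as a CLI argument,
    // so it does not end up in shell history or the process table.
    public static char[] promptForPassword() {
        Console console = System.console();
        if (console == null) {
            throw new IllegalStateException("No interactive console available");
        }
        return console.readPassword("Password: ");
    }
}
{code}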


> Simple Username/Password Authentication
> ---
>
> Key: NIFI-1614
> URL: https://issues.apache.org/jira/browse/NIFI-1614
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: James Wing
>Priority: Minor
>
> NiFi should include a simple option for username/password authentication 
> backed by a local file store.  NiFi's existing certificate and LDAP 
> authentication schemes are very secure.  However, the configuration and setup 
> is complex, making them more suitable for long-lived corporate and government 
> installations, but less accessible for casual or short-term use.  Simple 
> username/password authentication would help more users secure more NiFi 
> installations beyond anonymous admin access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-361) Create Processors to mutate JSON data

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267607#comment-15267607
 ] 

ASF GitHub Bot commented on NIFI-361:
-

GitHub user YolandaMDavis opened a pull request:

https://github.com/apache/nifi/pull/405

NIFI-361- Create Processors to mutate JSON data

Implementation of the TransformJSON processor using the Jolt library. 
TransformJSON supports Jolt specifications for the following transformations: 
Chain, Shift, Remove, Sort, Cardinality and Default. Users will be able to add 
the TransformJSON processor, select the transformation they wish to apply and 
enter the specification for the given transformation.

Details for creating Jolt specifications can be found 
[here](https://github.com/bazaarvoice/jolt)
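
For readers unfamiliar with Jolt, a minimal sketch of applying a chained spec with the library (the spec and input below are made up for illustration; the processor reads both from user-supplied configuration):

{code}
import com.bazaarvoice.jolt.Chainr;
import com.bazaarvoice.jolt.JsonUtils;

public class JoltSketch {
    public static void main(String[] args) {
        // Illustrative shift spec: rename "rating" to "score".
        String spec = "[ { \"operation\": \"shift\", \"spec\": { \"rating\": \"score\" } } ]";
        String input = "{ \"rating\": 5 }";

        Chainr chainr = Chainr.fromSpec(JsonUtils.jsonToList(spec));
        Object transformed = chainr.transform(JsonUtils.jsonToObject(input));

        System.out.println(JsonUtils.toJsonString(transformed)); // {"score":5}
    }
}
{code}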

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/YolandaMDavis/nifi NIFI-361-0.x

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/405.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #405


commit 873c534a057b051f3b28cac3cc496d59f0c352ef
Author: Yolanda M. Davis 
Date:   2016-04-14T12:19:41Z

NIFI-361 Adding TransformJSON from master. Also changed streams in test 
since 1.7 only supported.
(cherry picked from commit ffc9d19)




> Create Processors to mutate JSON data
> -
>
> Key: NIFI-361
> URL: https://issues.apache.org/jira/browse/NIFI-361
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Aldrin Piri
>Assignee: Oleg Zhurakousky
>Priority: Minor
> Fix For: 1.0.0, 0.7.0
>
>
> Creating a separate issue to track these as a pull request has been submitted 
> for related issue NIFI-356.
> Also backed by JsonPath, processors should facilitate through specification 
> of user-defined properties:
> * Add - identify path and add key/value pair
> ** Handle if the path is an array, this would ignore the name specified and 
> just add the value to the collection
> * Remove - delete the element at the specified path
> * Update - change the value for the given path to a provided value
> Need to determine if objects/arrays make sense for values or if they are 
> needed.
> While it would be nice to be able to execute several operations per processor 
> instance, it may be hard to capture all the relevant information needed for 
> multiple operations in one processor configuration in a user friendly context.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (NIFI-1837) PutKafka - Decouple incoming message delimiter from outgoing batching

2016-05-02 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky reassigned NIFI-1837:
--

Assignee: Oleg Zhurakousky

> PutKafka - Decouple incoming message delimiter from outgoing batching
> -
>
> Key: NIFI-1837
> URL: https://issues.apache.org/jira/browse/NIFI-1837
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 0.6.1
>Reporter: Ralph Perko
>Assignee: Oleg Zhurakousky
>  Labels: kafka
> Fix For: 0.7.0
>
> Attachments: NIFI-1837.diff
>
>
> In 0.6.x batching outgoing messages to Kafka is only supported if a message 
> delimiter is provided for incoming flow files.  This appears to be a recently 
> added constraint.  It did not exist in 0.4.x.   Could you please remove this.
> PutKafka.java:453
> Remove the "if" statement and just have:
> properties.setProperty("batch.size", 
> context.getProperty(BATCH_NUM_MESSAGES).getValue());



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1837) PutKafka - Decouple incoming message delimiter from outgoing batching

2016-05-02 Thread Oleg Zhurakousky (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267471#comment-15267471
 ] 

Oleg Zhurakousky commented on NIFI-1837:


Ralph, thanks for the contribution. I'll look it over when I get a chance. 

> PutKafka - Decouple incoming message delimiter from outgoing batching
> -
>
> Key: NIFI-1837
> URL: https://issues.apache.org/jira/browse/NIFI-1837
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 0.6.1
>Reporter: Ralph Perko
>Assignee: Oleg Zhurakousky
>  Labels: kafka
> Fix For: 0.7.0
>
> Attachments: NIFI-1837.diff
>
>
> In 0.6.x batching outgoing messages to Kafka is only supported if a message 
> delimiter is provided for incoming flow files.  This appears to be a recently 
> added constraint.  It did not exist in 0.4.x.   Could you please remove this.
> PutKafka.java:453
> Remove the "if" statement and just have:
> properties.setProperty("batch.size", 
> context.getProperty(BATCH_NUM_MESSAGES).getValue());



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (NIFI-1837) PutKafka - Decouple incoming message delimiter from outgoing batching

2016-05-02 Thread Ralph Perko (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ralph Perko updated NIFI-1837:
--
Comment: was deleted

(was: decouple incoming message delimiter from outgoing batch size)

> PutKafka - Decouple incoming message delimiter from outgoing batching
> -
>
> Key: NIFI-1837
> URL: https://issues.apache.org/jira/browse/NIFI-1837
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 0.6.1
>Reporter: Ralph Perko
>  Labels: kafka
> Fix For: 0.7.0
>
> Attachments: NIFI-1837.diff
>
>
> In 0.6.x batching outgoing messages to Kafka is only supported if a message 
> delimiter is provided for incoming flow files.  This appears to be a recently 
> added constraint.  It did not exist in 0.4.x.   Could you please remove this.
> PutKafka.java:453
> Remove the "if" statement and just have:
> properties.setProperty("batch.size", 
> context.getProperty(BATCH_NUM_MESSAGES).getValue());



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1837) PutKafka - Decouple incoming message delimiter from outgoing batching

2016-05-02 Thread Ralph Perko (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ralph Perko updated NIFI-1837:
--
Attachment: NIFI-1837.diff

decouple message delimiter from batch size

> PutKafka - Decouple incoming message delimiter from outgoing batching
> -
>
> Key: NIFI-1837
> URL: https://issues.apache.org/jira/browse/NIFI-1837
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Ralph Perko
> Attachments: NIFI-1837.diff
>
>
> In 0.6.x batching outgoing messages to Kafka is only supported if a message 
> delimiter is provided for incoming flow files.  This appears to be a recently 
> added constraint.  It did not exist in 0.4.x.   Could you please remove this.
> PutKafka.java:453
> Remove the "if" statement and just have:
> properties.setProperty("batch.size", 
> context.getProperty(BATCH_NUM_MESSAGES).getValue());



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MINIFI-32) Design logo for MiNiFi

2016-05-02 Thread Andrew Lim (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFI-32?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267447#comment-15267447
 ] 

Andrew Lim commented on MINIFI-32:
--

Agreed, this logo is awesome.  I definitely like the single-color text.  :)

> Design logo for MiNiFi
> --
>
> Key: MINIFI-32
> URL: https://issues.apache.org/jira/browse/MINIFI-32
> Project: Apache NiFi MiNiFi
>  Issue Type: Task
>  Components: Documentation
>Reporter: Rob Moran
>Priority: Minor
> Attachments: logo-minifi.png
>
>
> The proposed design of the MiNiFi logo aims to visually capture several key 
> concepts:
> * lighter in weight and a smaller footprint relative to NiFi
> * operating farther out, at the "edge" or source of data creation
> * circular or bi-directional movement of control/data between the edge and 
> core
> Additionally, the logo should communicate its close relationship to the NiFi 
> brand. The proposed design uses a variation of the original typeface and 
> color palette used in the NiFi brand, along with a representation of the 
> water drop seen in NiFi's logo.
> The proposed design can be seen in the attached image.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1614) Simple Username/Password Authentication

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267446#comment-15267446
 ] 

ASF GitHub Bot commented on NIFI-1614:
--

Github user jvwing commented on the pull request:

https://github.com/apache/nifi/pull/267#issuecomment-216353947
  
I rebased the commits on the master branch to resolve conflicts and use the 
updated LoginIdentityProvider interface and Administrator's Guide content.  I 
apologize if it complicates reviewing.  Changes include:

- Improved performance by only reloading the credentials data if the file 
has been modified
- Provided a command-line utility reference implementation
- Added documentation to the Administrator's Guide
- Included a sample login-credentials.xml file to the conf directory


> Simple Username/Password Authentication
> ---
>
> Key: NIFI-1614
> URL: https://issues.apache.org/jira/browse/NIFI-1614
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: James Wing
>Priority: Minor
>
> NiFi should include a simple option for username/password authentication 
> backed by a local file store.  NiFi's existing certificate and LDAP 
> authentication schemes are very secure.  However, the configuration and setup 
> is complex, making them more suitable for long-lived corporate and government 
> installations, but less accessible for casual or short-term use.  Simple 
> username/password authentication would help more users secure more NiFi 
> installations beyond anonymous admin access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1152) Mock Framework allows processor to route to Relationships that the Processor does not support

2016-05-02 Thread Puspendu Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267427#comment-15267427
 ] 

Puspendu Banerjee commented on NIFI-1152:
-

[~markap14] [~naveenmadhire] Is there any progress on this?

> Mock Framework allows processor to route to Relationships that the Processor 
> does not support
> -
>
> Key: NIFI-1152
> URL: https://issues.apache.org/jira/browse/NIFI-1152
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Reporter: Mark Payne
>  Labels: beginner, newbie
>
> If a processor calls ProcessSession.transfer(flowFile, 
> NON_EXISTENT_RELATIONSHIP) the NiFi framework will throw a 
> FlowFileHandlingException. However, the Mock Framework simply allows it and 
> does not throw any sort of Exception. This needs to be addressed so that the 
> Mock framework functions the same way as the normal NiFi framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1118) Enable SplitText processor to limit line length and filter header lines

2016-05-02 Thread Joe Skora (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Skora updated NIFI-1118:

Assignee: Mark Bean  (was: Joe Skora)

> Enable SplitText processor to limit line length and filter header lines
> ---
>
> Key: NIFI-1118
> URL: https://issues.apache.org/jira/browse/NIFI-1118
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Mark Bean
>Assignee: Mark Bean
> Fix For: 0.7.0
>
>
> Include the following functionality to the SplitText processor:
> 1) Maximum size limit of the split file(s)
> A new split file will be created if the next line to be added to the current 
> split file exceeds a user-defined maximum file size
> 2) Header line marker
> User-defined character(s) can be used to identify the header line(s) of the 
> data file rather than a predetermined number of lines
> These changes are additions, not a replacement of any property or behavior. 
> In the case of header line marker, the existing property "Header Line Count" 
> must be zero for the new property and behavior to be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1837) PutKafka - Decouple incoming message delimiter from outgoing batching

2016-05-02 Thread Ralph Perko (JIRA)
Ralph Perko created NIFI-1837:
-

 Summary: PutKafka - Decouple incoming message delimiter from 
outgoing batching
 Key: NIFI-1837
 URL: https://issues.apache.org/jira/browse/NIFI-1837
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Ralph Perko


In 0.6.x batching outgoing messages to Kafka is only supported if a message 
delimiter is provided for incoming flow files.  This appears to be a recently 
added constraint.  It did not exist in 0.4.x.   Could you please remove this.

PutKafka.java:453
Remove the "if" statement and just have:
properties.setProperty("batch.size", 
context.getProperty(BATCH_NUM_MESSAGES).getValue());





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1213) Allow Mock Framework to register "FlowFile Assertions"

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267093#comment-15267093
 ] 

ASF GitHub Bot commented on NIFI-1213:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/404

NIFI-1213 Added the possibility to register FlowFile assertions in mock 
framework



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-1213

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/404.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #404


commit 9c006043188a4175295120bfd96580c0c21e74a7
Author: Pierre Villard 
Date:   2016-05-02T17:54:53Z

NIFI-1213 Added the possibility to register FlowFile assertions in mock 
framework




> Allow Mock Framework to register "FlowFile Assertions"
> --
>
> Key: NIFI-1213
> URL: https://issues.apache.org/jira/browse/NIFI-1213
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Mark Payne
> Fix For: 1.0.0
>
>
> Often times, unit tests will invoke TestRunner.run, and then iterate through 
> the output FlowFiles, ensuring that a particular attribute exists, or that an 
> attribute is equal to some value.
> As a convenience, we should provide a mechanism to indicate that all output 
> FlowFiles (or all FlowFiles routed to a given relationship) meet some 
> criteria.
> For example:
> {code}
> TestRunner.assertAllFlowFilesContainAttribute( String attributeName );
> {code}
> {code}
> TestRunner.assertAllFlowFilesContainAttribute( Relationship relationship, 
> String attributeName );
> {code}
> Additionally, we should consider allowing a "callback" mechanism:
> {code}
> TestRunner.assertAllFlowFiles( FlowFileValidator validator );
> {code}
> {code}
> TestRunner.assertAllFlowFiles( Relationship relationship, FlowFileValidator 
> validator );
> {code}
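A hedged usage sketch of the callback form (assuming FlowFileValidator would be a 
single-method interface, so a lambda can be used, and that the processor under test 
defines a REL_SUCCESS relationship):

{code}
runner.run();
// Assert a condition on every FlowFile routed to success, instead of looping manually.
runner.assertAllFlowFiles(REL_SUCCESS, flowFile -> {
    flowFile.assertAttributeExists("filename");
    flowFile.assertAttributeEquals("mime.type", "application/json");
});
{code}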



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (NIFI-1213) Allow Mock Framework to register "FlowFile Assertions"

2016-05-02 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reassigned NIFI-1213:


Assignee: Pierre Villard

> Allow Mock Framework to register "FlowFile Assertions"
> --
>
> Key: NIFI-1213
> URL: https://issues.apache.org/jira/browse/NIFI-1213
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Mark Payne
>Assignee: Pierre Villard
> Fix For: 1.0.0
>
>
> Often times, unit tests will invoke TestRunner.run, and then iterate through 
> the output FlowFiles, ensuring that a particular attribute exists, or that an 
> attribute is equal to some value.
> As a convenience, we should provide a mechanism to indicate that all output 
> FlowFiles (or all FlowFiles routed to a given relationship) meet some 
> criteria.
> For example:
> {code}
> TestRunner.assertAllFlowFilesContainAttribute( String attributeName );
> {code}
> {code}
> TestRunner.assertAllFlowFilesContainAttribute( Relationship relationship, 
> String attributeName );
> {code}
> Additionally, we should consider allowing a "callback" mechanism:
> {code}
> TestRunner.assertAllFlowFiles( FlowFileValidator validator );
> {code}
> {code}
> TestRunner.assertAllFlowFiles( Relationship relationship, FlowFileValidator 
> validator );
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MINIFI-6) Basic C++ native MiniFi implementation

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFI-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267048#comment-15267048
 ] 

ASF GitHub Bot commented on MINIFI-6:
-

Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/1#discussion_r61772403
  
--- Diff: Makefile ---
@@ -0,0 +1,40 @@
+CC=g++
+AR=ar
+TARGET_DIR= ./build
+TARGET_LIB=libminifi.a
+TARGET_EXE=minifi
+CFLAGS=-O0 -fexceptions -fpermissive -Wno-write-strings -std=c++11 -fPIC 
-Wall -g -Wno-unused-private-field
+INCLUDES=-I./inc -I./src -I./test -I/usr/include/libxml2 
-I/usr/local/opt/leveldb/include/
+LDDIRECTORY=-L/usr/local/opt/leveldb/out-static/ -L./build
--- End diff --

@benqiu2016 This out-static directory seems to be problematic on OS X.  Is 
additional setup needed beyond brew install leveldb?


> Basic C++ native MiniFi implementation
> --
>
> Key: MINIFI-6
> URL: https://issues.apache.org/jira/browse/MINIFI-6
> Project: Apache NiFi MiNiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 0.1.0
> Environment: C++
> Linux
>Reporter: bqiu
>  Labels: native
> Fix For: 0.1.0
>
>
> A Basic C++ isolated native MiNiFi implementation (not communicated to the 
> master NiFI yet).
> 1) Identify the all necessary software frameworks for C++ MiNiFi 
> implementation like database, xml parser, logging, etc.
> 2) Flow configuration from flow.xml
> 3) Processor init/enable/disable/running
> 4) Processor Scheduling
> 5) Processor Relationship/Connections
> 6) Flow record creation/clone/transfer between Processor
> 7) Flow record persistent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-924) Add Camel support in NiFi

2016-05-02 Thread Puspendu Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267022#comment-15267022
 ] 

Puspendu Banerjee commented on NIFI-924:


[~ozhurakousky] 
# The number of endpoints really matters because, if a component is not readily 
available in NiFi, we can complement it with the corresponding component from 
Camel. The result is wider adoption and market penetration for a relatively new 
tool [ NiFi ] in the Camel world.
# With dynamic class-loading I meant to say that, using 
*GroovyClassLoader.parseClass()*, it is possible to create a new Groovy class 
dynamically at run-time and use it from a Groovy script or a Java application 
(see the sketch after this list). GroovyClassLoader.parseClass() will parse a 
string passed to it and attempt to create a Groovy class. This way, it is 
possible to add the necessary imports, set the class name and add the fields to 
the class being created. So, it would be very useful for adding/modifying 
functionality and supporting Camel's Groovy DSL from/inside Camel without a 
*code-build-deploy-restart* cycle of the SpringProcessor, and the result is more 
agile.
# Dependency resolver: I think no one would like to re-invent the wheel, so 
people will likely try to use something that is already established and proven. 
Again, a Maven repo or Artifactory is just one implementation or another; it 
could be any other sort of repo structure, you just need to code a resolver for 
it. Can you please name some widely adopted CI tool which does not support 
Maven repository-structured data, or any repo which cannot prevent unauthorized 
access by some means?
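A minimal sketch of the dynamic class-loading idea mentioned in point 2 above 
(illustrative only; the class and method names are made up, and it assumes Groovy 
is on the classpath):

{code}
import groovy.lang.GroovyClassLoader;

public class DynamicGroovyExample {
    public static Object compileAndInstantiate(String groovySource) throws Exception {
        try (GroovyClassLoader loader = new GroovyClassLoader()) {
            // Parse the Groovy source text into a Class at run-time...
            Class<?> clazz = loader.parseClass(groovySource);
            // ...and instantiate it, with no build/deploy/restart cycle.
            return clazz.getDeclaredConstructor().newInstance();
        }
    }
}
{code}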

Again, for the success _[ adoption / market penetration / sustainability ] of a 
product we should think about re-utilizing existing resources and a shorter 
learning curve, which in turn avoids additional cost burden._ For example, when I 
present a demo [ pre-sales, training, etc. ], I see a large audience of techies 
[ skilled with DataStage, Fuse ESB, DataPower, other drag-and-drop tools with 
little programming, etc. ] and Subject Matter Experts, but not Spring developers. 
So, ensuring that NiFi is easy to adopt and will not disturb the current 
eco-system is key.

[~joewitt] 
Good to know that you have a good sense of humor :D. Seems it has started to 
bloom this Spring! 
h3. LOL
- The larger dependency situation can be handled by a dependency resolver on an 
as-and-when basis, because all dependencies for Camel routes should only have 
run-time scope and be defined at the corresponding Camel/SI processor instance.

> Add Camel support in NiFi
> -
>
> Key: NIFI-924
> URL: https://issues.apache.org/jira/browse/NIFI-924
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Jean-Baptiste Onofré
>
> I'm working on a NiFi Route able to leverage a Camel route (runtime routing), 
> and another one being able to bootstrap a Camel route starting from Camel 
> DSLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MINIFI-33) Add option to delete Provenance Data after successful reporting

2016-05-02 Thread Aldrin Piri (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFI-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266948#comment-15266948
 ] 

Aldrin Piri commented on MINIFI-33:
---

Will have to consider where ownership of this resides and who is responsible 
for invoking it.  Exposing this to higher-level APIs seems like potentially too 
much exposure, but we could see mechanisms to provide acknowledgement to the 
repository implementation to help it make more informed decisions.  A successive 
implementation that maps more directly to the more transient nature of provenance 
in MiNiFi instances could certainly provide/support the earlier cleanup more 
efficiently.

Regardless, nice point to work toward in terms of being good stewards over 
limited resources.

> Add option to delete Provenance Data after successful reporting
> ---
>
> Key: MINIFI-33
> URL: https://issues.apache.org/jira/browse/MINIFI-33
> Project: Apache NiFi MiNiFi
>  Issue Type: Wish
>Reporter: Joseph Percivall
>Priority: Minor
>
> After a MiNiFi instance reports its provenance data back to a Core NiFi 
> instance, it more than likely no longer needs to keep the data locally. There 
> should be a configuration option (potentially in the 
> ProvenanceReportingTask) to delete the data after successfully reporting it.
> This would take a major rewrite since the Provenance Repo only supports 
> purging of provenance events for age off and space limits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (NIFI-1811) Remove ProcessorLog and update dependent interfaces.

2016-05-02 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reassigned NIFI-1811:


Assignee: Pierre Villard

> Remove ProcessorLog and update dependent interfaces.
> 
>
> Key: NIFI-1811
> URL: https://issues.apache.org/jira/browse/NIFI-1811
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Affects Versions: 0.6.1
>Reporter: Aldrin Piri
>Assignee: Pierre Villard
>Priority: Minor
> Fix For: 1.0.0
>
>
> From the comments:
> {quote}
> /**
>  * The ProcessorLog is an extension of ComponentLog but provides no additional
>  * functionality. It exists because ProcessorLog was created first, but when
>  * Controller Services and Reporting Tasks began to be used more heavily 
> loggers
>  * were needed for them as well. We did not want to return a ProcessorLog to a
>  * ControllerService or a ReportingTask, so all of the methods were moved to a
>  * higher interface named ComponentLog. However, we kept the ProcessorLog
>  * interface around in order to maintain backward compatibility.
>  */
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (MINIFI-33) Add option to delete Provenance Data after successful reporting

2016-05-02 Thread Joseph Percivall (JIRA)
Joseph Percivall created MINIFI-33:
--

 Summary: Add option to delete Provenance Data after successful 
reporting
 Key: MINIFI-33
 URL: https://issues.apache.org/jira/browse/MINIFI-33
 Project: Apache NiFi MiNiFi
  Issue Type: Wish
Reporter: Joseph Percivall
Priority: Minor


After a MiNiFi instance reports its provenance data back to a Core NiFi 
instance, it more than likely no longer needs to keep the data locally. There 
should be a configuration option (potentially in the ProvenanceReportingTask) 
to delete the data after successfully reporting it.

This would take a major rewrite since the Provenance Repo only supports purging 
of provenance events for age off and space limits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1811) Remove ProcessorLog and update dependent interfaces.

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266856#comment-15266856
 ] 

ASF GitHub Bot commented on NIFI-1811:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/403

NIFI-1811 Removed ProcessorLog and updated dependent interfaces



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-1811

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/403.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #403


commit 8c6e20382fdff0a2fb3592b2627a67a5656d2e3a
Author: Pierre Villard 
Date:   2016-05-02T15:47:44Z

NIFI-1811 Removed ProcessorLog and updated dependent interfaces




> Remove ProcessorLog and update dependent interfaces.
> 
>
> Key: NIFI-1811
> URL: https://issues.apache.org/jira/browse/NIFI-1811
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Affects Versions: 0.6.1
>Reporter: Aldrin Piri
>Priority: Minor
> Fix For: 1.0.0
>
>
> From the comments:
> {quote}
> /**
>  * The ProcessorLog is an extension of ComponentLog but provides no additional
>  * functionality. It exists because ProcessorLog was created first, but when
>  * Controller Services and Reporting Tasks began to be used more heavily 
> loggers
>  * were needed for them as well. We did not want to return a ProcessorLog to a
>  * ControllerService or a ReportingTask, so all of the methods were moved to a
>  * higher interface named ComponentLog. However, we kept the ProcessorLog
>  * interface around in order to maintain backward compatibility.
>  */
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1836) Appveyor CI is failing for PR builds

2016-05-02 Thread Puspendu Banerjee (JIRA)
Puspendu Banerjee created NIFI-1836:
---

 Summary: Appveyor CI is failing for PR builds
 Key: NIFI-1836
 URL: https://issues.apache.org/jira/browse/NIFI-1836
 Project: Apache NiFi
  Issue Type: Bug
  Components: Tools and Build
Affects Versions: 1.0.0
 Environment: WINDOWS
Reporter: Puspendu Banerjee


All Appveyor CI PR builds are failing:

Build started
git clone -q https://github.com/apache/nifi.git C:\projects\nifi
git fetch -q origin +refs/pull/397/merge:
git checkout -qf FETCH_HEAD
Specify a project or solution file. The directory does not contain a project or 
solution file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1323) UI visual design enhancement

2016-05-02 Thread Rob Moran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Moran updated NIFI-1323:

Attachment: flowfont.zip

Updated to only include custom icons

> UI visual design enhancement
> 
>
> Key: NIFI-1323
> URL: https://issues.apache.org/jira/browse/NIFI-1323
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.0.0
>Reporter: Rob Moran
>Priority: Trivial
> Attachments: flowfont.zip, nifi-add-processor-dialog.png, 
> nifi-component-samples.png, 
> nifi-configure-processor-properties-set-value.png, 
> nifi-configure-processor-properties.png, 
> nifi-configure-processor-scheduling.png, 
> nifi-configure-processor-settings.png, nifi-dialog-samp...@800px.png, 
> nifi-drop.svg, nifi-global-menu.png, nifi-interaction-and-menu-samples.png, 
> nifi-lineage-gra...@800px.png, nifi-logo.svg, nifi-sample-fl...@800px.png, 
> nifi-sample-flow.png, nifi-shell-samp...@800px.png
>
>
> (I will attach mockups and supporting files as they become available)
> I am starting to work on a design to modernize the look and feel of the NiFi 
> UI. The initial focus of the design is to freshen the UI (flat design, SVG 
> icons, etc.). Additionally, the new design will propose usability 
> improvements such as exposing more flow-related actions into collapsible 
> panes, improving hierarchy of information, etc.
> Going forward, the design plan is to help lay the foundation for other UI/UX 
> related issues such as those documented in NIFI-951.
> ---
> *flowfont.zip*
> Contains icon font and supporting files
> *nifi-add-processor-dialog.png*
> Dialog sample. This sample shows the 'Add Processor' dialog.
> *nifi-component-samples.png*
> To show styling for all components, as well as those components when a user 
> is unauthorized to access.
> *nifi-configure-processor-properties*
> *nifi-configure-processor-properties-set-value*
> *nifi-configure-processor-scheduling*
> *nifi-configure-processor-settings*
> Configure Processor dialog. See related Comments below (in Activity section).
> *nifi-dialog-sample-@800px*
> Dialog sample in 800px wide viewport. This sample shows the 'Details' tab 
> of a provenance event.
> *nifi-drop.svg*
> NiFi logo without 'nifi'
> *nifi-global-menu*
> To show global menu
> *nifi-interaction-and-menu-samples.png*
> To demonstrate user interactions - hover states, tooltips, menus, etc.
> *nifi-lineage-graph-@800px*
> To show lineage graph with explicit action to get back to data provenance 
> event list.
> *nifi-sample-flow-@800px*
> Shows a very usable UI down to around 800px in width. The thinking here is 
> that at anything lower than this, the NiFi user experience will change to 
> more of a monitoring and/or administrative type workflow. Future mockups will 
> be created to illustrate this.
> *nifi-logo.svg*
> NiFi logo complete
> *nifi-sample-flow.png*
> Mockup of sample flow. Updated to show revised tool and status bars. 
> Management related actions will move to a menu via mouseover (see 
> _nifi-global-menu_). Added benefits here include reducing clutter and more 
> user-friendly menu with text labels to reduce time spent scanning only a 
> large set of icons. This also helps gain valuable viewport width in a browser 
> (see _nifi-sample-flow-@800px_)
> *nifi-shell-sample-@800px*
> Shell sample in 800px wide viewport. This sample shows the 'Data Provenance' 
> table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1323) UI visual design enhancement

2016-05-02 Thread Rob Moran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Moran updated NIFI-1323:

Attachment: (was: flowfont.zip)

> UI visual design enhancement
> 
>
> Key: NIFI-1323
> URL: https://issues.apache.org/jira/browse/NIFI-1323
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.0.0
>Reporter: Rob Moran
>Priority: Trivial
> Attachments: nifi-add-processor-dialog.png, 
> nifi-component-samples.png, 
> nifi-configure-processor-properties-set-value.png, 
> nifi-configure-processor-properties.png, 
> nifi-configure-processor-scheduling.png, 
> nifi-configure-processor-settings.png, nifi-dialog-samp...@800px.png, 
> nifi-drop.svg, nifi-global-menu.png, nifi-interaction-and-menu-samples.png, 
> nifi-lineage-gra...@800px.png, nifi-logo.svg, nifi-sample-fl...@800px.png, 
> nifi-sample-flow.png, nifi-shell-samp...@800px.png
>
>
> (I will attach mockups and supporting files as they become available)
> I am starting to work on a design to modernize the look and feel of the NiFi 
> UI. The initial focus of the design is to freshen the UI (flat design, SVG 
> icons, etc.). Additionally, the new design will propose usability 
> improvements such as exposing more flow-related actions into collapsible 
> panes, improving hierarchy of information, etc.
> Going forward, the design plan is to help lay the foundation for other UI/UX 
> related issues such as those documented in NIFI-951.
> ---
> *flowfont.zip*
> Contains icon font and supporting files
> *nifi-add-processor-dialog.png*
> Dialog sample. This sample shows the 'Add Processor' dialog.
> *nifi-component-samples.png*
> To show styling for all components, as well as those components when a user 
> is unauthorized to access.
> *nifi-configure-processor-properties*
> *nifi-configure-processor-properties-set-value*
> *nifi-configure-processor-scheduling*
> *nifi-configure-processor-settings*
> Configure Processor dialog. See related Comments below (in Activity section).
> *nifi-dialog-sample-@800px*
> Dialog sample in 800px wide viewport. This sample shows the 'Details' tab 
> of a provenance event.
> *nifi-drop.svg*
> NiFi logo without 'nifi'
> *nifi-global-menu*
> To show global menu
> *nifi-interaction-and-menu-samples.png*
> To demonstrate user interactions - hover states, tooltips, menus, etc.
> *nifi-lineage-graph-@800px*
> To show lineage graph with explicit action to get back to data provenance 
> event list.
> *nifi-sample-flow-@800px*
> Shows a very usable UI down to around 800px in width. The thinking here is 
> that at anything lower than this, the NiFi user experience will change to 
> more of a monitoring and/or administrative type workflow. Future mockups will 
> be created to illustrate this.
> *nifi-logo.svg*
> NiFi logo complete
> *nifi-sample-flow.png*
> Mockup of sample flow. Updated to show revised tool and status bars. 
> Management related actions will move to a menu via mouseover (see 
> _nifi-global-menu_). Added benefits here include reducing clutter and more 
> user-friendly menu with text labels to reduce time spent scanning only a 
> large set of icons. This also helps gain valuable viewport width in a browser 
> (see _nifi-sample-flow-@800px_)
> *nifi-shell-sample-@800px*
> Shell sample in 800px wide viewport. This sample shows the 'Data Provenance' 
> table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-981) Add support for Hive JDBC / ExecuteSQL

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266655#comment-15266655
 ] 

ASF GitHub Bot commented on NIFI-981:
-

Github user bbende commented on the pull request:

https://github.com/apache/nifi/pull/384#issuecomment-216246029
  
Latest commits look good, I am a +1 and going to merge to 0.x and master


> Add support for Hive JDBC / ExecuteSQL
> --
>
> Key: NIFI-981
> URL: https://issues.apache.org/jira/browse/NIFI-981
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Matt Burgess
>
> In this mailing list thread from September 2015 "NIFI DBCP connection pool 
> not working for hive" the main thrust of the converstation is to provide 
> proper support for delivering data to hive.  Hive's jdbc driver appears to 
> have dependencies on Hadoop libraries.  We need to be careful/thoughtful 
> about how to best support this so that different versions of Hadoop distros 
> can be supported (potentially in parallel on the same flow).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1678) Nodes in cluster should use ZooKeeper to store heartbeat messages instead of sending to NCM

2016-05-02 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266595#comment-15266595
 ] 

Pierre Villard commented on NIFI-1678:
--

I am not sure if I should open a new JIRA but when building with:

Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 
2015-11-10T17:41:47+01:00)
Maven home: D:\Dev\apache-maven-3.3.9\bin\..
Java version: 1.8.0_74, vendor: Oracle Corporation
Java home: C:\Program Files\Java\jdk1.8.0_74\jre
Default locale: fr_FR, platform encoding: Cp1252
OS name: "windows 10", version: "10.0", arch: "amd64", family: "dos"

I have the following test errors:

{noformat}
---
 T E S T S
---
Running TestSuite
Tests run: 45, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 19.919 sec <<< 
FAILURE! - in TestSuite
testDisconnectedHeartbeatOnStartup on 
testDisconnectedHeartbeatOnStartup(org.apache.nifi.cluster.coordination.heartbeat.TestAbstractHeartbeatMonitor)(org.apache.nifi.cluster.coordi
nation.heartbeat.TestAbstractHeartbeatMonitor)  Time elapsed: 2.117 sec  <<< 
FAILURE!
java.lang.NullPointerException: null
at 
org.apache.zookeeper.server.ZooKeeperServerMain.shutdown(ZooKeeperServerMain.java:132)
at 
org.apache.curator.test.TestingZooKeeperMain.close(TestingZooKeeperMain.java:122)
at 
org.apache.curator.test.TestingZooKeeperServer.stop(TestingZooKeeperServer.java:110)
at org.apache.curator.test.TestingServer.stop(TestingServer.java:161)
at 
org.apache.nifi.cluster.coordination.heartbeat.TestAbstractHeartbeatMonitor.clear(TestAbstractHeartbeatMonitor.java:62)

testConnectingNodeMarkedConnectedWhenHeartbeatReceived on 
testConnectingNodeMarkedConnectedWhenHeartbeatReceived(org.apache.nifi.cluster.coordination.heartbeat.TestAbstractHeartbea
tMonitor)(org.apache.nifi.cluster.coordination.heartbeat.TestAbstractHeartbeatMonitor)
  Time elapsed: 2.013 sec  <<< FAILURE!
java.lang.NullPointerException: null
at 
org.apache.zookeeper.server.ZooKeeperServerMain.shutdown(ZooKeeperServerMain.java:132)
at 
org.apache.curator.test.TestingZooKeeperMain.close(TestingZooKeeperMain.java:122)
at 
org.apache.curator.test.TestingZooKeeperServer.stop(TestingZooKeeperServer.java:110)
at org.apache.curator.test.TestingServer.stop(TestingServer.java:161)
at 
org.apache.nifi.cluster.coordination.heartbeat.TestAbstractHeartbeatMonitor.clear(TestAbstractHeartbeatMonitor.java:62)


Results :

Failed tests: 
  TestAbstractHeartbeatMonitor.clear:62 » NullPointer
  TestAbstractHeartbeatMonitor.clear:62 » NullPointer
{noformat}

Not sure why this is happening. When running in Eclipse, tests are OK.

> Nodes in cluster should use ZooKeeper to store heartbeat messages instead of 
> sending to NCM
> ---
>
> Key: NIFI-1678
> URL: https://issues.apache.org/jira/browse/NIFI-1678
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.0.0
>
>
> Currently, nodes send heartbeats to the NCM periodically in order to indicate 
> that they are actively participating in the cluster. As we move away from 
> using an NCM, we need these heartbeats to go somewhere else. ZooKeeper is a 
> reasonable location to push the heartbeats to, as it provides the HA that we 
> need



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-924) Add Camel support in NiFi

2016-05-02 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266593#comment-15266593
 ] 

Joseph Witt commented on NIFI-924:
--

Just wanted to mention that this statement "NiFi primarily targets business 
community with its emphasis on simplicity, UI etc." does not really reflect the 
intent behind NiFi and certainly not within this open source developer driven 
community.

From the beginning NiFi was about "How can I, as a developer and operations 
oriented person, get my job done faster, and how can I make what has been built 
and configured more understandable and accessible to others?"

This is a really important message as it starts with the developer and is meant 
to include others - in short it is to increase shared understanding.

Back to the topic at hand:
- I think the SpringContext processors that were made available are a great 
start, and I think [~puspendu.baner...@gmail.com] agreed, which is why he pulled 
back his PR and made supportive commentary.  However, there is still value in 
exploring a more direct NiFi and Camel integration, as long as that is what 
motivated and capable parties would like to do.  We don't really have to be 
that careful here.  Extensions have always been a 'let a thousand 
flowers bloom' thing.  They are good and useful if some group of people use 
them and get what they need from them.  It is in the core framework that we 
need to be more careful.  Now, I do think this could create some large 
dependency situations, and for that we must hurry up and get this registry in 
play.  Right now we're putting 1000 flowers in a small flower bed, but the 
registry would give us a large field, and that is what we need.  Spring / 
flowers / Spring context?  Can I get an lol here?

> Add Camel support in NiFi
> -
>
> Key: NIFI-924
> URL: https://issues.apache.org/jira/browse/NIFI-924
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Jean-Baptiste Onofré
>
> I'm working on a NiFi Route able to leverage a Camel route (runtime routing), 
> and another one being able to bootstrap a Camel route starting from Camel 
> DSLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-924) Add Camel support in NiFi

2016-05-02 Thread Oleg Zhurakousky (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266568#comment-15266568
 ] 

Oleg Zhurakousky commented on NIFI-924:
---

I think we need to be very careful about what we do here and how we address this. We 
don't want to get ourselves into a state where these components are hard to 
maintain.

Apache Camel (http://camel.apache.org/) and Spring Integration (SI) 
(http://projects.spring.io/spring-integration/) are both developer oriented EIP 
frameworks. This means that the target user for them is "the coder". NiFi 
primarily targets business community with its emphasis on simplicity, UI etc. 
So the expectation here is that SI or Camel developers would package an 
application and expose it via a Spring Application Context (since both frameworks 
are Spring-based), leaving the DFM with a simple processor configuration step where 
all they need to do is point to such a package and start the processor, which will 
send/receive FlowFiles to/from it. This essentially gives the developers the 
utmost flexibility, and quite frankly, for a Spring developer this also means that 
they can write custom processors as simple POJOs (no SI, no Camel) and never 
touch the NiFi API if they don't want to. 
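To make the POJO point concrete, here is an illustrative sketch only (the class 
and method names are made up, and the exact wiring expected by the SpringContext 
processor is not shown): a plain bean like this could be declared in the 
application context that the processor points to, with no NiFi, SI, or Camel API 
in sight.

{code}
public class UpperCaseService {
    // Plain Java, no framework imports; how the payload reaches this method
    // depends on the Spring context wiring (not shown here).
    public String transform(String payload) {
        return payload == null ? null : payload.toUpperCase();
    }
}
{code}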

As for the other features, let me address them separately:
1. The number of protocol-aware endpoints is just a number, since the only ones 
that matter are the ones one needs (providing they also work). Also, we would 
then need to explain when one would want to use FTP support from Camel or SI or 
NiFi, where each of the endpoints is doing the same thing.
2. Dynamic class-loading is already part of the Spring Context processor, so maybe 
you should clarify if any functionality is missing there.
3. A dependency resolver is indeed a great feature; however, many large 
organizations have their own view of the world of dependencies and are not "maven 
friendly" when it comes to production environments, so I am afraid that a feature 
like this (if we ever decide to do it) would need to come with an OFF button ;), 
otherwise it could be looked at as an exploit.

> Add Camel support in NiFi
> -
>
> Key: NIFI-924
> URL: https://issues.apache.org/jira/browse/NIFI-924
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Jean-Baptiste Onofré
>
> I'm working on a NiFi Route able to leverage a Camel route (runtime routing), 
> and another one being able to bootstrap a Camel route starting from Camel 
> DSLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (NIFI-1830) Empty Queue finishes before emptying all FlowFiles

2016-05-02 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman resolved NIFI-1830.
---
Resolution: Fixed

> Empty Queue finishes before emptying all FlowFiles
> --
>
> Key: NIFI-1830
> URL: https://issues.apache.org/jira/browse/NIFI-1830
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
> Fix For: 1.0.0, 0.7.0
>
>
> When running a cluster, if there are a lot of FlowFiles in a queue, and a 
> user clicks "Empty Queue", it will drop a large percentage of them and then 
> the progress bar gets to 100% and it indicates that it is finished, having 
> dropped X number of FlowFiles, even though there are many FlowFiles still in 
> the queue. Subsequently choosing Empty Queue will indicate that 0 FlowFiles 
> were dropped but will drop some number of FlowFiles anyway, as is evidenced 
> by the size given on the connection after refreshing stats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1830) Empty Queue finishes before emptying all FlowFiles

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266540#comment-15266540
 ] 

ASF GitHub Bot commented on NIFI-1830:
--

Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/394#issuecomment-216226348
  
Looks good. +1


> Empty Queue finishes before emptying all FlowFiles
> --
>
> Key: NIFI-1830
> URL: https://issues.apache.org/jira/browse/NIFI-1830
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
> Fix For: 1.0.0, 0.7.0
>
>
> When running a cluster, if there are a lot of FlowFiles in a queue, and a 
> user clicks "Empty Queue", it will drop a large percentage of them and then 
> the progress bar gets to 100% and it indicates that it is finished, having 
> dropped X number of FlowFiles, even though there are many FlowFiles still in 
> the queue. Subsequently choosing Empty Queue will indicate that 0 FlowFiles 
> were dropped but will drop some number of FlowFiles anyway, as is evidenced 
> by the size given on the connection after refreshing stats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1830) Empty Queue finishes before emptying all FlowFiles

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266538#comment-15266538
 ] 

ASF subversion and git services commented on NIFI-1830:
---

Commit 797c5ec077ce6b772112dc18b017516796102148 in nifi's branch refs/heads/0.x 
from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=797c5ec ]

NIFI-1830: Fixed problems in the merging logic for Drop FlowFile Requests. This 
closes #394


> Empty Queue finishes before emptying all FlowFiles
> --
>
> Key: NIFI-1830
> URL: https://issues.apache.org/jira/browse/NIFI-1830
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
> Fix For: 1.0.0, 0.7.0
>
>
> When running a cluster, if there are a lot of FlowFiles in a queue, and a 
> user clicks "Empty Queue", it will drop a large percentage of them and then 
> the progress bar gets to 100% and it indicates that it is finished, having 
> dropped X number of FlowFiles, even though there are many FlowFiles still in 
> the queue. Subsequently choosing Empty Queue will indicate that 0 FlowFiles 
> were dropped but will drop some number of FlowFiles anyway, as is evidenced 
> by the size given on the connection after refreshing stats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


nifi git commit: NIFI-1830: Fixed problems in the merging logic for Drop FlowFile Requests. This closes #394

2016-05-02 Thread mcgilman
Repository: nifi
Updated Branches:
  refs/heads/0.x 72049d80b -> 797c5ec07


NIFI-1830: Fixed problems in the merging logic for Drop FlowFile Requests. This 
closes #394


Project: http://git-wip-us.apache.org/repos/asf/nifi/repo
Commit: http://git-wip-us.apache.org/repos/asf/nifi/commit/797c5ec0
Tree: http://git-wip-us.apache.org/repos/asf/nifi/tree/797c5ec0
Diff: http://git-wip-us.apache.org/repos/asf/nifi/diff/797c5ec0

Branch: refs/heads/0.x
Commit: 797c5ec077ce6b772112dc18b017516796102148
Parents: 72049d8
Author: Mark Payne 
Authored: Fri Apr 29 16:07:37 2016 -0400
Committer: Matt Gilman 
Committed: Mon May 2 08:48:28 2016 -0400

--
 .../cluster/manager/impl/WebClusterManager.java  | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/nifi/blob/797c5ec0/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/impl/WebClusterManager.java
--
diff --git 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/impl/WebClusterManager.java
 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/impl/WebClusterManager.java
index c0f4c63..7bf8de3 100644
--- 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/impl/WebClusterManager.java
+++ 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/impl/WebClusterManager.java
@@ -3041,9 +3041,18 @@ public class WebClusterManager implements 
HttpClusterManager, ProtocolHandler, C
 long droppedSize = 0;
 
 DropFlowFileState state = null;
+boolean allFinished = true;
+String failureReason = null;
 for (final Map.Entry nodeEntry : 
dropRequestMap.entrySet()) {
 final DropRequestDTO nodeDropRequest = nodeEntry.getValue();
 
+if (!nodeDropRequest.isFinished()) {
+allFinished = false;
+}
+if (nodeDropRequest.getFailureReason() != null) {
+failureReason = nodeDropRequest.getFailureReason();
+}
+
 currentCount += nodeDropRequest.getCurrentCount();
 currentSize += nodeDropRequest.getCurrentSize();
 droppedCount += nodeDropRequest.getDroppedCount();
@@ -3057,7 +3066,7 @@ public class WebClusterManager implements 
HttpClusterManager, ProtocolHandler, C
 }
 
 final DropFlowFileState nodeState = 
DropFlowFileState.valueOfDescription(nodeDropRequest.getState());
-if (state == null || state.compareTo(nodeState) > 0) {
+if (state == null || state.ordinal() > nodeState.ordinal()) {
 state = nodeState;
 }
 }
@@ -3070,6 +3079,14 @@ public class WebClusterManager implements 
HttpClusterManager, ProtocolHandler, C
 dropRequest.setDroppedSize(droppedSize);
 dropRequest.setDropped(FormatUtils.formatCount(droppedCount) + " / " + 
FormatUtils.formatDataSize(droppedSize));
 
+dropRequest.setFinished(allFinished);
+dropRequest.setFailureReason(failureReason);
+if (originalCount == 0) {
+dropRequest.setPercentCompleted(allFinished ? 100 : 0);
+} else {
+dropRequest.setPercentCompleted((int) ((double) droppedCount / 
(double) originalCount * 100D));
+}
+
 if (!nodeWaiting) {
 dropRequest.setOriginalCount(originalCount);
 dropRequest.setOriginalSize(originalSize);



[jira] [Commented] (NIFI-1830) Empty Queue finishes before emptying all FlowFiles

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266537#comment-15266537
 ] 

ASF GitHub Bot commented on NIFI-1830:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/394


> Empty Queue finishes before emptying all FlowFiles
> --
>
> Key: NIFI-1830
> URL: https://issues.apache.org/jira/browse/NIFI-1830
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
> Fix For: 1.0.0, 0.7.0
>
>
> When running a cluster, if there are a lot of FlowFiles in a queue, and a 
> user clicks "Empty Queue", it will drop a large percentage of them and then 
> the progress bar gets to 100% and it indicates that it is finished, having 
> dropped X number of FlowFiles, even though there are many FlowFiles still in 
> the queue. Subsequently choosing Empty Queue will indicate that 0 FlowFiles 
> were dropped but will drop some number of FlowFiles anyway, as is evidenced 
> by the size given on the connection after refreshing stats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1830) Empty Queue finishes before emptying all FlowFiles

2016-05-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266535#comment-15266535
 ] 

ASF subversion and git services commented on NIFI-1830:
---

Commit 45ca978498725ca19530ed51138a252d17e4d676 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=45ca978 ]

NIFI-1830: Fixed problems in the merging logic for Drop FlowFile Requests. This 
closes #394


> Empty Queue finishes before emptying all FlowFiles
> --
>
> Key: NIFI-1830
> URL: https://issues.apache.org/jira/browse/NIFI-1830
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
> Fix For: 1.0.0, 0.7.0
>
>
> When running a cluster, if there are a lot of FlowFiles in a queue, and a 
> user clicks "Empty Queue", it will drop a large percentage of them and then 
> the progress bar gets to 100% and it indicates that it is finished, having 
> dropped X number of FlowFiles, even though there are many FlowFiles still in 
> the queue. Subsequently choosing Empty Queue will indicate that 0 FlowFiles 
> were dropped but will drop some number of FlowFiles anyway, as is evidenced 
> by the size given on the connection after refreshing stats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


nifi git commit: NIFI-1830: Fixed problems in the merging logic for Drop FlowFile Requests. This closes #394

2016-05-02 Thread mcgilman
Repository: nifi
Updated Branches:
  refs/heads/master ff98d823e -> 45ca97849


NIFI-1830: Fixed problems in the merging logic for Drop FlowFile Requests. This 
closes #394


Project: http://git-wip-us.apache.org/repos/asf/nifi/repo
Commit: http://git-wip-us.apache.org/repos/asf/nifi/commit/45ca9784
Tree: http://git-wip-us.apache.org/repos/asf/nifi/tree/45ca9784
Diff: http://git-wip-us.apache.org/repos/asf/nifi/diff/45ca9784

Branch: refs/heads/master
Commit: 45ca978498725ca19530ed51138a252d17e4d676
Parents: ff98d82
Author: Mark Payne 
Authored: Fri Apr 29 16:07:37 2016 -0400
Committer: Matt Gilman 
Committed: Mon May 2 08:46:50 2016 -0400

--
 .../cluster/manager/impl/WebClusterManager.java  | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/nifi/blob/45ca9784/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/impl/WebClusterManager.java
--
diff --git 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/impl/WebClusterManager.java
 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/impl/WebClusterManager.java
index 88da5ff..fbf400b 100644
--- 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/impl/WebClusterManager.java
+++ 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/impl/WebClusterManager.java
@@ -2990,9 +2990,18 @@ public class WebClusterManager implements 
HttpClusterManager, ProtocolHandler, C
 long droppedSize = 0;
 
 DropFlowFileState state = null;
+boolean allFinished = true;
+String failureReason = null;
 for (final Map.Entry nodeEntry : 
dropRequestMap.entrySet()) {
 final DropRequestDTO nodeDropRequest = nodeEntry.getValue();
 
+if (!nodeDropRequest.isFinished()) {
+allFinished = false;
+}
+if (nodeDropRequest.getFailureReason() != null) {
+failureReason = nodeDropRequest.getFailureReason();
+}
+
 currentCount += nodeDropRequest.getCurrentCount();
 currentSize += nodeDropRequest.getCurrentSize();
 droppedCount += nodeDropRequest.getDroppedCount();
@@ -3006,7 +3015,7 @@ public class WebClusterManager implements 
HttpClusterManager, ProtocolHandler, C
 }
 
 final DropFlowFileState nodeState = 
DropFlowFileState.valueOfDescription(nodeDropRequest.getState());
-if (state == null || state.compareTo(nodeState) > 0) {
+if (state == null || state.ordinal() > nodeState.ordinal()) {
 state = nodeState;
 }
 }
@@ -3019,6 +3028,14 @@ public class WebClusterManager implements 
HttpClusterManager, ProtocolHandler, C
 dropRequest.setDroppedSize(droppedSize);
 dropRequest.setDropped(FormatUtils.formatCount(droppedCount) + " / " + 
FormatUtils.formatDataSize(droppedSize));
 
+dropRequest.setFinished(allFinished);
+dropRequest.setFailureReason(failureReason);
+if (originalCount == 0) {
+dropRequest.setPercentCompleted(allFinished ? 100 : 0);
+} else {
+dropRequest.setPercentCompleted((int) ((double) droppedCount / 
(double) originalCount * 100D));
+}
+
 if (!nodeWaiting) {
 dropRequest.setOriginalCount(originalCount);
 dropRequest.setOriginalSize(originalSize);



[jira] [Assigned] (NIFI-1827) PutKafka attempts to write to non-existent partition.

2016-05-02 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky reassigned NIFI-1827:
--

Assignee: Oleg Zhurakousky

> PutKafka attempts to write to non-existent partition. 
> --
>
> Key: NIFI-1827
> URL: https://issues.apache.org/jira/browse/NIFI-1827
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 0.6.1
>Reporter: Christopher McDermott
>Assignee: Oleg Zhurakousky
> Fix For: 1.0.0, 0.7.0
>
>
> PutKafka attempts to write to non-existent partition.  I have not verified 
> yet but I think the problem can be triggered by deleting a topic while the 
> processor is running, and then recreating the topic with the same name.  
> Since the problem has occurred I have not been able to make it go away.  I've 
> recreated the processor in the flow but the new processor exhibits the same 
> behavior.  
> 2016-04-29 12:00:53,550 ERROR [Timer-Driven Process Thread-1] 
> o.apache.nifi.processors.kafka.PutKafka
> java.lang.IllegalArgumentException: Invalid partition given with record: 4 is 
> not in the range [0...1].
> at 
> org.apache.kafka.clients.producer.internals.Partitioner.partition(Partitioner.java:52)
>  ~[na:na]
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:333) 
> ~[na:na]
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:248) 
> ~[na:na]
> at 
> org.apache.nifi.processors.kafka.KafkaPublisher.toKafka(KafkaPublisher.java:203)
>  ~[na:na]
> at 
> org.apache.nifi.processors.kafka.KafkaPublisher.publish(KafkaPublisher.java:137)
>  ~[na:na]
> at 
> org.apache.nifi.processors.kafka.PutKafka$1.process(PutKafka.java:300) 
> ~[na:na]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1807)
>  ~[nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1778)
>  ~[nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
> at 
> org.apache.nifi.processors.kafka.PutKafka.onTrigger(PutKafka.java:296) 
> ~[na:na]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  ~[nifi-api-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1059)
>  ~[nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:123)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_45]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_45]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_45]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_45]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1827) PutKafka attempts to write to non-existent partition.

2016-05-02 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky updated NIFI-1827:
---
Description: 
PutKafka attempts to write to non-existent partition.  I have not verified yet 
but I think the problem can be triggered by deleting a topic while the 
processor is running, and then recreating the topic with the same name.  Since 
the problem has occurred I have not been able to make it go away.  I've 
recreated the processor in the flow but the new processor exhibits the same 
behavior.  
{code}
2016-04-29 12:00:53,550 ERROR [Timer-Driven Process Thread-1] 
o.apache.nifi.processors.kafka.PutKafka
java.lang.IllegalArgumentException: Invalid partition given with record: 4 is 
not in the range [0...1].
at 
org.apache.kafka.clients.producer.internals.Partitioner.partition(Partitioner.java:52)
 ~[na:na]
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:333) 
~[na:na]
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:248) 
~[na:na]
at 
org.apache.nifi.processors.kafka.KafkaPublisher.toKafka(KafkaPublisher.java:203)
 ~[na:na]
at 
org.apache.nifi.processors.kafka.KafkaPublisher.publish(KafkaPublisher.java:137)
 ~[na:na]
at 
org.apache.nifi.processors.kafka.PutKafka$1.process(PutKafka.java:300) ~[na:na]
at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1807)
 ~[nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1778)
 ~[nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
at 
org.apache.nifi.processors.kafka.PutKafka.onTrigger(PutKafka.java:296) ~[na:na]
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 ~[nifi-api-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1059)
 ~[nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
 [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
 [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:123)
 [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_45]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
[na:1.8.0_45]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_45]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 [na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
{code}

  was:
PutKafka attempts to write to non-existent partition.  I have not verified yet 
but I think the problem can be triggered by deleting a topic while the 
processors is running, and then recreating the topic with the same name.  Since 
the problem has occurred I have not been able to make it go away.  I've 
recreated the processor in the flow but the new processor exhibits the same 
behavior.  

2016-04-29 12:00:53,550 ERROR [Timer-Driven Process Thread-1] 
o.apache.nifi.processors.kafka.PutKafka
java.lang.IllegalArgumentException: Invalid partition given with record: 4 is 
not in the range [0...1].
at 
org.apache.kafka.clients.producer.internals.Partitioner.partition(Partitioner.java:52)
 ~[na:na]
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:333) 
~[na:na]
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:248) 
~[na:na]
at 
org.apache.nifi.processors.kafka.KafkaPublisher.toKafka(KafkaPublisher.java:203)
 ~[na:na]
at 
org.apache.nifi.processors.kafka.KafkaPublisher.publish(KafkaPublisher.java:137)
 ~[na:na]
at 
org.apache.nifi.processors.kafka.PutKafka$1.process(PutKafka.java:300) ~[na:na]
at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1807)
 ~[nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1778)
 

[jira] [Commented] (NIFI-1830) Empty Queue finishes before emptying all FlowFiles

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266497#comment-15266497
 ] 

ASF GitHub Bot commented on NIFI-1830:
--

Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/394#issuecomment-216221458
  
Reviewing...


> Empty Queue finishes before emptying all FlowFiles
> --
>
> Key: NIFI-1830
> URL: https://issues.apache.org/jira/browse/NIFI-1830
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
> Fix For: 1.0.0, 0.7.0
>
>
> When running a cluster, if there are a lot of FlowFiles in a queue, and a 
> user clicks "Empty Queue", it will drop a large percentage of them and then 
> the progress bar gets to 100% and it indicates that it is finished, having 
> dropped X number of FlowFiles, even though there are many FlowFiles still in 
> the queue. Subsequently choosing Empty Queue will indicate that 0 FlowFiles 
> were dropped but will drop some number of FlowFiles anyway, as is evidenced 
> by the size given on the connection after refreshing stats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1028) Document FlowFiles and the repos in depth

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266478#comment-15266478
 ] 

ASF GitHub Bot commented on NIFI-1028:
--

Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/339#discussion_r61730868
  
--- Diff: nifi-docs/src/main/asciidoc/nifi-in-depth.adoc ---
@@ -0,0 +1,209 @@
+//
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+Apache NiFi In Depth
+
+Apache NiFi Team 
+:homepage: http://nifi.apache.org
+
+Intro
+-
+This advanced level document is aimed at providing an in-depth look at the 
implementation and design decisions of NiFi. It assumes the reader has read 
enough of the other documentation to know the basics of NiFi.
+
+FlowFiles are at the heart of NiFi and its flow-based design. A FlowFile 
is just a collection of attributes and a pointer to content, which is 
associated with one or more provenance events. The attributes are key/value 
pairs that act as the metadata for the FlowFile, such as the FlowFile filename. 
The content is the actual data or the payload of the file. Provenance is a 
record of what’s happened to the FlowFile. Each one of these parts has its own 
repository (repo) for storage.
+
+One key aspect of the repositories is immutability. The content in the 
Content Repository and data within the FlowFile Repository are immutable. When 
a change occurs to the attributes of a FlowFile new copies of the attributes 
are created in memory and then persisted on disk. When content is being changed 
for a given FlowFile its original content is read, streamed through the 
transform, and written to a new stream. Then the FlowFile's content pointer is 
updated to the new location on disk. As a result, the default approach for 
FlowFile content storage can be said to be an immutable versioned content 
store. The benefits are many, including a substantial reduction in the storage 
space required for typical complex processing graphs, natural replay 
capability, better use of OS caching, fewer random read/write performance 
hits, and flows that are easier to reason over. Previous revisions are kept 
according to the archiving properties set in the nifi.properties file and 
outlined in the NiFi System Administrator’s Guide.
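A hedged sketch of the copy-on-write step described above, assuming a 
hypothetical content repository laid out as files on disk; the point is only 
that the original bytes are never modified and the FlowFile's pointer moves to 
a freshly written claim.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    final class CopyOnWriteSketch {
        // Reads the old claim, streams it through a stand-in transform, writes a new
        // claim, and returns the new pointer. The old claim is left untouched so it can
        // be archived or expired per the nifi.properties archiving settings.
        static Path transformContent(Path oldClaim, Path contentRepoDir) throws IOException {
            Path newClaim = Files.createTempFile(contentRepoDir, "claim-", ".bin");
            try (InputStream in = Files.newInputStream(oldClaim);
                 OutputStream out = Files.newOutputStream(newClaim)) {
                int b;
                while ((b = in.read()) != -1) {
                    out.write(Character.toUpperCase(b)); // stand-in for any content transform
                }
            }
            return newClaim; // caller updates the FlowFile's content pointer to this claim
        }
    }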
+
+== Repositories
+There are three repositories that are utilized by NiFi. Each exists within 
the OS/Host's file system and provides specific functionality. In order to 
fully understand FlowFiles and how they are used by the underlying system it's 
important to know about these repositories. All three repositories are 
directories on local storage that NiFi uses to persist data.
+
+- The FlowFile Repository contains metadata for all the current FlowFiles 
in the flow.
+- The Content Repository holds the content for current and past FlowFiles.
+- The Provenance Repository holds the history of FlowFiles.
+
+image::NiFiArchitecture.png["NiFi Architecture Diagram"]
+
+=== FlowFile Repository
+FlowFiles that are actively being processed by the system are held in a 
hash map in the JVM memory (more about that in "Deeper View: FlowFiles in 
Memory and on Disk"). This makes processing them very efficient, but it 
requires a secondary mechanism to provide durability of data across process 
restarts caused by any number of reasons, such as power loss, kernel panics, 
system upgrades, and maintenance cycles. The FlowFile Repository is a 
"Write-Ahead Log" (or data record) of the metadata of each of the FlowFiles 
that currently exist in the system. This FlowFile metadata includes all the 
attributes associated with the FlowFile, a pointer to the actual content of the 
FlowFile (which exists in the Content Repo) and the state of the FlowFile, such 
as which Connection/Queue the FlowFile belongs in. This Write-Ahead Log 
provides NiFi the resiliency it needs to handle restarts and unexpected system 
failures.
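As a rough illustration of what one such write-ahead record carries, here is a 
conceptual sketch; it is not NiFi's on-disk format, and the names are assumed.

    import java.util.Map;

    // One conceptual write-ahead-log entry: appended for every FlowFile change before
    // the in-memory state is considered durable, which is what allows NiFi to rebuild
    // its FlowFile map after a restart or crash.
    enum UpdateType { CREATE, UPDATE, DELETE }

    record FlowFileWalRecord(
            long flowFileId,
            UpdateType updateType,
            Map<String, String> attributes,  // full attribute map at the time of the change
            String contentClaim,             // pointer into the Content Repository
            String queueIdentifier) {        // which Connection/Queue the FlowFile belongs in
    }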
+
+The FlowFile Repository acts as 

svn commit: r1741971 [2/2] - in /nifi/site/trunk: assets/js/foundation.js assets/js/jquery.min.js videos.html

2016-05-02 Thread joewitt
Modified: nifi/site/trunk/assets/js/jquery.min.js
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/assets/js/jquery.min.js?rev=1741971=1741970=1741971=diff
==
--- nifi/site/trunk/assets/js/jquery.min.js (original)
+++ nifi/site/trunk/assets/js/jquery.min.js Mon May  2 12:05:41 2016
@@ -1,4 +1,5 @@
-/*! jQuery v2.2.2 | (c) jQuery Foundation | jquery.org/license */
-!function(a,b){"object"==typeof module&&"object"==typeof 
module.exports?module.exports=a.document?b(a,!0):function(a){if(!a.document)throw
 new Error("jQuery requires a window with a document");return 
b(a)}:b(a)}("undefined"!=typeof window?window:this,function(a,b){var 
c=[],d=a.document,e=c.slice,f=c.concat,g=c.push,h=c.indexOf,i={},j=i.toString,k=i.hasOwnProperty,l={},m="2.2.2",n=function(a,b){return
 new 
n.fn.init(a,b)},o=/^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g,p=/^-ms-/,q=/-([\da-z])/gi,r=function(a,b){return
 
b.toUpperCase()};n.fn=n.prototype={jquery:m,constructor:n,selector:"",length:0,toArray:function(){return
 e.call(this)},get:function(a){return 
null!=a?0>a?this[a+this.length]:this[a]:e.call(this)},pushStack:function(a){var 
b=n.merge(this.constructor(),a);return 
b.prevObject=this,b.context=this.context,b},each:function(a){return 
n.each(this,a)},map:function(a){return 
this.pushStack(n.map(this,function(b,c){return 
a.call(b,c,b)}))},slice:function(){return this.pushStack(e.apply(this
 ,arguments))},first:function(){return this.eq(0)},last:function(){return 
this.eq(-1)},eq:function(a){var b=this.length,c=+a+(0>a?b:0);return 
this.pushStack(c>=0&>c?[this[c]]:[])},end:function(){return 
this.prevObject||this.constructor()},push:g,sort:c.sort,splice:c.splice},n.extend=n.fn.extend=function(){var
 
a,b,c,d,e,f,g=arguments[0]||{},h=1,i=arguments.length,j=!1;for("boolean"==typeof
 g&&(j=g,g=arguments[h]||{},h++),"object"==typeof 
g||n.isFunction(g)||(g={}),h===i&&(g=this,h--);i>h;h++)if(null!=(a=arguments[h]))for(b
 in 
a)c=g[b],d=a[b],g!==d&&(j&&&(n.isPlainObject(d)||(e=n.isArray(d)))?(e?(e=!1,f=c&(c)?c:[]):f=c&(c)?c:{},g[b]=n.extend(j,f,d)):void
 0!==d&&(g[b]=d));return 
g},n.extend({expando:"jQuery"+(m+Math.random()).replace(/\D/g,""),isReady:!0,error:function(a){throw
 new 
Error(a)},noop:function(){},isFunction:function(a){return"function"===n.type(a)},isArray:Array.isArray,isWindow:function(a){return
 null!=a&===a.window},isNumeric:function(a){va
 r 
b=a&();return!n.isArray(a)&(b)+1>=0},isPlainObject:function(a){var
 
b;if("object"!==n.type(a)||a.nodeType||n.isWindow(a))return!1;if(a.constructor&&!k.call(a,"constructor")&&!k.call(a.constructor.prototype||{},"isPrototypeOf"))return!1;for(b
 in a);return void 0===b||k.call(a,b)},isEmptyObject:function(a){var b;for(b in 
a)return!1;return!0},type:function(a){return null==a?a+"":"object"==typeof 
a||"function"==typeof a?i[j.call(a)]||"object":typeof 
a},globalEval:function(a){var b,c=eval;a=n.trim(a),a&&(1===a.indexOf("use 
strict")?(b=d.createElement("script"),b.text=a,d.head.appendChild(b).parentNode.removeChild(b)):c(a))},camelCase:function(a){return
 a.replace(p,"ms-").replace(q,r)},nodeName:function(a,b){return 
a.nodeName&()===b.toLowerCase()},each:function(a,b){var 
c,d=0;if(s(a)){for(c=a.length;c>d;d++)if(b.call(a[d],d,a[d])===!1)break}else 
for(d in a)if(b.call(a[d],d,a[d])===!1)break;return a},trim:function(a){return 
null==a?"":(a+"").
 replace(o,"")},makeArray:function(a,b){var c=b||[];return 
null!=a&&(s(Object(a))?n.merge(c,"string"==typeof 
a?[a]:a):g.call(c,a)),c},inArray:function(a,b,c){return 
null==b?-1:h.call(b,a,c)},merge:function(a,b){for(var 
c=+b.length,d=0,e=a.length;c>d;d++)a[e++]=b[d];return 
a.length=e,a},grep:function(a,b,c){for(var 
d,e=[],f=0,g=a.length,h=!c;g>f;f++)d=!b(a[f],f),d!==h&(a[f]);return 
e},map:function(a,b,c){var 
d,e,g=0,h=[];if(s(a))for(d=a.length;d>g;g++)e=b(a[g],g,c),null!=e&(e);else
 for(g in a)e=b(a[g],g,c),null!=e&(e);return 
f.apply([],h)},guid:1,proxy:function(a,b){var c,d,f;return"string"==typeof 
b&&(c=a[b],b=a,a=c),n.isFunction(a)?(d=e.call(arguments,2),f=function(){return 
a.apply(b||this,d.concat(e.call(arguments)))},f.guid=a.guid=a.guid||n.guid++,f):void
 0},now:Date.now,support:l}),"function"==typeof 
Symbol&&(n.fn[Symbol.iterator]=c[Symbol.iterator]),n.each("Boolean Number 
String Function Array Date RegExp Object Error Symbol".split(" 
"),function(a,b){i["[obj
 ect "+b+"]"]=b.toLowerCase()});function s(a){var b=!!a&&"length"in 
a&,c=n.type(a);return"function"===c||n.isWindow(a)?!1:"array"===c||0===b||"number"==typeof
 b&>0& in a}var t=function(a){var 
b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u="sizzle"+1*new 
Date,v=a.document,w=0,x=0,y=ga(),z=ga(),A=ga(),B=function(a,b){return 
a===b&&(l=!0),0},C=1<<31,D={}.hasOwnProperty,E=[],F=E.pop,G=E.push,H=E.push,I=E.slice,J=function(a,b){for(var
 c=0,d=a.length;d>c;c++)if(a[c]===b)return 

svn commit: r1741971 [1/2] - in /nifi/site/trunk: assets/js/foundation.js assets/js/jquery.min.js videos.html

2016-05-02 Thread joewitt
Author: joewitt
Date: Mon May  2 12:05:41 2016
New Revision: 1741971

URL: http://svn.apache.org/viewvc?rev=1741971=rev
Log:
Added Bryan Bende Hadoop Summit talk to videos

Modified:
nifi/site/trunk/assets/js/foundation.js
nifi/site/trunk/assets/js/jquery.min.js
nifi/site/trunk/videos.html

Modified: nifi/site/trunk/assets/js/foundation.js
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/assets/js/foundation.js?rev=1741971=1741970=1741971=diff
==
--- nifi/site/trunk/assets/js/foundation.js (original)
+++ nifi/site/trunk/assets/js/foundation.js Mon May  2 12:05:41 2016
@@ -1,7 +1,7 @@
 /*
  * Foundation Responsive Library
  * http://foundation.zurb.com
- * Copyright 2015, ZURB
+ * Copyright 2014, ZURB
  * Free to use under the MIT license.
  * http://www.opensource.org/licenses/mit-license.php
 */
@@ -10,12 +10,14 @@
   'use strict';
 
   var header_helpers = function (class_array) {
+var i = class_array.length;
 var head = $('head');
-head.prepend($.map(class_array, function (class_name) {
-  if (head.has('.' + class_name).length === 0) {
-return '';
+
+while (i--) {
+  if (head.has('.' + class_array[i]).length === 0) {
+head.append('');
   }
-}));
+}
   };
 
   header_helpers([
@@ -288,30 +290,21 @@
 return string;
   }
 
-  function MediaQuery(selector) {
-this.selector = selector;
-this.query = '';
-  }
-
-  MediaQuery.prototype.toString = function () {
-return this.query || (this.query = 
S(this.selector).css('font-family').replace(/^[\/\\'"]+|(;\s?})+|[\/\\'"]+$/g, 
''));
-  };
-
   window.Foundation = {
 name : 'Foundation',
 
-version : '5.5.3',
+version : '5.5.2',
 
 media_queries : {
-  'small'   : new MediaQuery('.foundation-mq-small'),
-  'small-only'  : new MediaQuery('.foundation-mq-small-only'),
-  'medium'  : new MediaQuery('.foundation-mq-medium'),
-  'medium-only' : new MediaQuery('.foundation-mq-medium-only'),
-  'large'   : new MediaQuery('.foundation-mq-large'),
-  'large-only'  : new MediaQuery('.foundation-mq-large-only'),
-  'xlarge'  : new MediaQuery('.foundation-mq-xlarge'),
-  'xlarge-only' : new MediaQuery('.foundation-mq-xlarge-only'),
-  'xxlarge' : new MediaQuery('.foundation-mq-xxlarge')
+  'small'   : 
S('.foundation-mq-small').css('font-family').replace(/^[\/\\'"]+|(;\s?})+|[\/\\'"]+$/g,
 ''),
+  'small-only'  : 
S('.foundation-mq-small-only').css('font-family').replace(/^[\/\\'"]+|(;\s?})+|[\/\\'"]+$/g,
 ''),
+  'medium'  : 
S('.foundation-mq-medium').css('font-family').replace(/^[\/\\'"]+|(;\s?})+|[\/\\'"]+$/g,
 ''),
+  'medium-only' : 
S('.foundation-mq-medium-only').css('font-family').replace(/^[\/\\'"]+|(;\s?})+|[\/\\'"]+$/g,
 ''),
+  'large'   : 
S('.foundation-mq-large').css('font-family').replace(/^[\/\\'"]+|(;\s?})+|[\/\\'"]+$/g,
 ''),
+  'large-only'  : 
S('.foundation-mq-large-only').css('font-family').replace(/^[\/\\'"]+|(;\s?})+|[\/\\'"]+$/g,
 ''),
+  'xlarge'  : 
S('.foundation-mq-xlarge').css('font-family').replace(/^[\/\\'"]+|(;\s?})+|[\/\\'"]+$/g,
 ''),
+  'xlarge-only' : 
S('.foundation-mq-xlarge-only').css('font-family').replace(/^[\/\\'"]+|(;\s?})+|[\/\\'"]+$/g,
 ''),
+  'xxlarge' : 
S('.foundation-mq-xxlarge').css('font-family').replace(/^[\/\\'"]+|(;\s?})+|[\/\\'"]+$/g,
 '')
 },
 
 stylesheet : $('').appendTo('head')[0].sheet,
@@ -736,7 +729,7 @@
   Foundation.libs.topbar = {
 name : 'topbar',
 
-version : '5.5.3',
+version : '5.5.2',
 
 settings : {
   index : 0,
@@ -897,17 +890,17 @@
   self.toggle(this);
 })
 .on('click.fndtn.topbar contextmenu.fndtn.topbar', '.top-bar 
.top-bar-section li a[href^="#"],[' + this.attr_name() + '] .top-bar-section li 
a[href^="#"]', function (e) {
-  var li = $(this).closest('li'),
-  topbar = li.closest('[' + self.attr_name() + ']'),
-  settings = topbar.data(self.attr_name(true) + '-init');
-
-  if (settings.dropdown_autoclose && settings.is_hover) {
-var hoverLi = $(this).closest('.hover');
-hoverLi.removeClass('hover');
-  }
-  if (self.breakpoint() && !li.hasClass('back') && 
!li.hasClass('has-dropdown')) {
-self.toggle();
-  }
+var li = $(this).closest('li'),
+topbar = li.closest('[' + self.attr_name() + ']'),
+settings = topbar.data(self.attr_name(true) + '-init');
+
+if (settings.dropdown_autoclose && settings.is_hover) {
+  var hoverLi = $(this).closest('.hover');
+  hoverLi.removeClass('hover');
+}
+if (self.breakpoint() && !li.hasClass('back') && 
!li.hasClass('has-dropdown')) {
+  self.toggle();
+}
 
 })
 .on('click.fndtn.topbar', '[' 

[jira] [Commented] (NIFI-1832) Testing EL properties with AllowableValues

2016-05-02 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266454#comment-15266454
 ] 

Joseph Witt commented on NIFI-1832:
---

[~pvillard] [~simonellistonball] For background: the concepts of allowable 
values and expression language were never really meant to work together. The 
'allowable values' construct exists so that we could nicely support an 
enumerated set of values, carried through and made available in the REST API 
and UI to constrain what a user can choose. The concept of expression language 
came along afterward; I think we simply never blocked using them together. I 
am inclined to think we should not allow them to be used together, but perhaps 
I'm missing a use case or concept.
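For readers following along, the combination under discussion looks roughly 
like the descriptor below. This is an illustrative sketch; the holder class 
and property names are made up, but the builder methods are the standard NiFi 
ones.

    import org.apache.nifi.components.PropertyDescriptor;

    final class ExampleProperties {
        // A property that both restricts input to an enumerated set of values and
        // declares expression language support -- the pairing questioned above.
        static final PropertyDescriptor MODE = new PropertyDescriptor.Builder()
                .name("Mode")
                .description("Example property with an enumerated set of allowable values")
                .allowableValues("A", "B", "C")
                .expressionLanguageSupported(true)
                .required(true)
                .defaultValue("A")
                .build();
    }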

> Testing EL properties with AllowableValues
> --
>
> Key: NIFI-1832
> URL: https://issues.apache.org/jira/browse/NIFI-1832
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 0.6.1
> Environment: Testing
>Reporter: Simon Elliston Ball
>Assignee: Pierre Villard
>Priority: Minor
>
> I’ve come across an interesting problem with MockFlowFile while testing a 
> custom processor. My property has an AllowableValue list, and supports 
> expression language. The test uses:
> runner.setProperty(PROPERTY_REF, "${attribute.name}");
> However, the test fails on validation in the MockFlowFile, with the 
> unevaluated version of the EL being invalid against the allowed values list: 
> 'Property' validated against '${attribute.name}' is invalid because Given 
> value is not found in allowed set ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1832) Testing EL properties with AllowableValues

2016-05-02 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266393#comment-15266393
 ] 

Pierre Villard commented on NIFI-1832:
--

In fact I just realized that it won't work since, in the UI, properties with a 
set of allowable values are presented as a fixed selection list (it is not 
possible to manually set a value). So I guess that if we want a property with 
expression language enabled, it must be a classic property, or we must use the 
first solution I mentioned.

> Testing EL properties with AllowableValues
> --
>
> Key: NIFI-1832
> URL: https://issues.apache.org/jira/browse/NIFI-1832
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 0.6.1
> Environment: Testing
>Reporter: Simon Elliston Ball
>Assignee: Pierre Villard
>Priority: Minor
>
> I’ve come across an interesting problem with MockFlowFile while testing a 
> custom processor. My property has an AllowableValue list, and supports 
> expression language. The test uses:
> runner.setProperty(PROPERTY_REF, "${attribute.name}");
> However, the test fails on validation in the MockFlowFile, with the 
> unevaluated version of the EL being invalid against the allowed values list: 
> 'Property' validated against '${attribute.name}' is invalid because Given 
> value is not found in allowed set ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1835) Support text/xml in the content view ui

2016-05-02 Thread Devin Fisher (JIRA)
Devin Fisher created NIFI-1835:
--

 Summary: Support text/xml in the content view ui
 Key: NIFI-1835
 URL: https://issues.apache.org/jira/browse/NIFI-1835
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core UI
Reporter: Devin Fisher
Priority: Minor
 Fix For: 0.7.0


Currently, the content view supports the application/xml mime type. This 
support is nice when dealing with XML (especially the formatted view). But the 
content viewer does not seem to support the text/xml mime type, even though it 
is effectively the same as application/xml.

As a workaround, I'm using UpdateAttribute to change the mime.type attribute 
to application/xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1834) Create PutTCP Processor

2016-05-02 Thread Matt Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Brown updated NIFI-1834:
-
Attachment: 0001-PutTCP-Processor-created.patch

Patch uploaded to create a PutTCP Processor.

I've added two new properties to AbstractPutEventProcessor that are specific to 
this processor.

1) Outgoing Message Delimiter - an optional string value used to delimit the 
application-layer messages transmitted over the TCP connection. This is 
similar to the Batching Message Delimiter on the ListenTCP Processor.

2) Connection Per FlowFile - an option to send each FlowFile's contents over 
its own TCP connection. The default value for this property is false, i.e., 
content is sent over the same TCP connection. (An illustrative sketch of both 
properties follows below.)
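An illustrative sketch of how the two properties described above might be 
declared. This is not the attached patch; the holder class, descriptions, 
defaults, and validator choices are assumptions.

    import org.apache.nifi.components.PropertyDescriptor;
    import org.apache.nifi.processor.util.StandardValidators;

    final class PutTcpPropertySketch {
        static final PropertyDescriptor OUTGOING_MESSAGE_DELIMITER = new PropertyDescriptor.Builder()
                .name("Outgoing Message Delimiter")
                .description("Optional delimiter placed between application-layer messages "
                        + "sent over the TCP connection")
                .required(false)
                .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
                .build();

        static final PropertyDescriptor CONNECTION_PER_FLOWFILE = new PropertyDescriptor.Builder()
                .name("Connection Per FlowFile")
                .description("If true, each FlowFile's content is sent over its own TCP "
                        + "connection; if false (the default), the connection is reused")
                .required(true)
                .allowableValues("true", "false")
                .defaultValue("false")
                .build();
    }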

> Create PutTCP Processor
> ---
>
> Key: NIFI-1834
> URL: https://issues.apache.org/jira/browse/NIFI-1834
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Brown
>Priority: Minor
> Attachments: 0001-PutTCP-Processor-created.patch
>
>
> Create a PutTCP Processor to send FlowFile content over a TCP connection to a 
> TCP server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1560) Error message in LdapProvider is malformed

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266357#comment-15266357
 ] 

ASF GitHub Bot commented on NIFI-1560:
--

GitHub user devin-fisher opened a pull request:

https://github.com/apache/nifi/pull/402

NIFI-1560 - Fixing a copy and paste error

It looks like the original coder copied code from the AuthenticationStrategy 
handling for the ReferralStrategy and did not change this reference in the 
error case.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/devin-fisher/nifi NIFI-1560

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/402.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #402


commit 931c7c32b56268d46b4ae4e0caf0e8920d28e83e
Author: Devin Fisher 
Date:   2016-05-02T10:26:13Z

Fixing a copy and paste error

It looks like the original coder copied code from the AuthenticationStrategy 
handling for the ReferralStrategy and did not change this reference in the 
error case.




> Error message in LdapProvider is malformed
> --
>
> Key: NIFI-1560
> URL: https://issues.apache.org/jira/browse/NIFI-1560
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Devin Fisher
>Priority: Trivial
>
> The error message for a bad ReferralStrategy uses the variable 
> rawAuthenticationStrategy instead of rawReferralStrategy. Looks like it is a 
> simple copy and paste issue.
> final ReferralStrategy referralStrategy;
> try {
> referralStrategy = ReferralStrategy.valueOf(rawReferralStrategy);
> } catch (final IllegalArgumentException iae) {
> throw new ProviderCreationException(String.format("Unrecgonized 
> authentication strategy '%s'. Possible values are [%s]",
> **rawAuthenticationStrategy**, 
> StringUtils.join(ReferralStrategy.values(), ", ")));
> }
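For reference, the corrected block would presumably read as follows. This is a 
sketch of the fix that reuses the ReferralStrategy, ProviderCreationException, 
and StringUtils types already shown in the snippet above; the exact wording 
merged in the PR may differ.

    final ReferralStrategy referralStrategy;
    try {
        referralStrategy = ReferralStrategy.valueOf(rawReferralStrategy);
    } catch (final IllegalArgumentException iae) {
        // Both the message text and the format argument now refer to the referral strategy.
        throw new ProviderCreationException(String.format(
                "Unrecognized referral strategy '%s'. Possible values are [%s]",
                rawReferralStrategy, StringUtils.join(ReferralStrategy.values(), ", ")));
    }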



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1834) Create PutTCP Processor

2016-05-02 Thread Matt Brown (JIRA)
Matt Brown created NIFI-1834:


 Summary: Create PutTCP Processor
 Key: NIFI-1834
 URL: https://issues.apache.org/jira/browse/NIFI-1834
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Reporter: Matt Brown
Priority: Minor


Create a PutTCP Processor to send FlowFile content over a TCP connection to a 
TCP server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (NIFI-1832) Testing EL properties with AllowableValues

2016-05-02 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reassigned NIFI-1832:


Assignee: Pierre Villard

> Testing EL properties with AllowableValues
> --
>
> Key: NIFI-1832
> URL: https://issues.apache.org/jira/browse/NIFI-1832
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 0.6.1
> Environment: Testing
>Reporter: Simon Elliston Ball
>Assignee: Pierre Villard
>Priority: Minor
>
> I’ve come across an interesting problem with MockFlowFile while testing a 
> custom processor. My property has an AllowableValue list, and supports 
> expression language. The test uses:
> runner.setProperty(PROPERTY_REF, "${attribute.name}");
> However, the test fails on validation in the MockFlowFile, with the 
> unevaluated version of the EL being invalid against the allowed values list: 
> 'Property' validated against '${attribute.name}' is invalid because Given 
> value is not found in allowed set ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1832) Testing EL properties with AllowableValues

2016-05-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266310#comment-15266310
 ] 

ASF GitHub Bot commented on NIFI-1832:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/401

NIFI-1832 Allowing expression language in properties with a set of 
allowable values

At the moment, if a property is defined with a set of allowable values AND 
supports expression language, then validation will always fail when the 
expression language value does not match an allowable value. There are two 
options:

- In the processor, add the authorized expression language value to the 
allowable values and let the user know that the expected value MUST BE 
supplied in that specific attribute (see the test sketch below).

Example: .allowableValues("A", "B", "C", "${myProcessor.type}")
Then the incoming flow files must carry this attribute (set with 
UpdateAttribute beforehand).

- Authorize any input value when expression language is supported for this 
kind of property.

This PR is a proposal for the second option.
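A sketch of what exercising the first option looks like in a unit test. 
MyProcessor and its TYPE property are hypothetical; TYPE is assumed to declare 
.allowableValues("A", "B", "C", "${myProcessor.type}") as in the example above.

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.nifi.util.TestRunner;
    import org.apache.nifi.util.TestRunners;
    import org.junit.Test;

    public class MyProcessorWorkaroundTest {

        @Test
        public void testTypeResolvedFromFlowFileAttribute() {
            final TestRunner runner = TestRunners.newTestRunner(MyProcessor.class);
            runner.setProperty(MyProcessor.TYPE, "${myProcessor.type}"); // passes validation under option 1

            final Map<String, String> attributes = new HashMap<>();
            attributes.put("myProcessor.type", "B"); // the EL resolves per FlowFile at runtime
            runner.enqueue(new byte[0], attributes);

            runner.run();
        }
    }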

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-1832

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/401.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #401


commit de320a6e9e1370964783016538fe07aa1ae6cb17
Author: Pierre Villard 
Date:   2016-05-02T09:57:28Z

NIFI-1832 Allowing expression language in properties with a set of 
allowable values




> Testing EL properties with AllowableValues
> --
>
> Key: NIFI-1832
> URL: https://issues.apache.org/jira/browse/NIFI-1832
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 0.6.1
> Environment: Testing
>Reporter: Simon Elliston Ball
>Priority: Minor
>
> I’ve come across an interesting problem with MockFlowFile while testing a 
> custom processor. My property has an AllowableValue list, and supports 
> expression language. The test uses:
> runner.setProperty(PROPERTY_REF, "${attribute.name}”);
> However, the test fails on validation of in the MockFlowFile with the 
> unevaluated version of the EL invalid against the allowed values list. 
> 'Property' validated against '${attribute.name}' is invalid because Given 
> value is not found in allowed set ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1560) Error message in LdapProvider is malformed

2016-05-02 Thread Devin Fisher (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devin Fisher updated NIFI-1560:
---
Component/s: Core Framework

> Error message in LdapProvider is malformed
> --
>
> Key: NIFI-1560
> URL: https://issues.apache.org/jira/browse/NIFI-1560
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Devin Fisher
>Priority: Trivial
>
> The error message for a bad ReferralStrategy uses the variable 
> rawAuthenticationStrategy instead of rawReferralStrategy. Looks like it is a 
> simple copy and paste issue.
> final ReferralStrategy referralStrategy;
> try {
> referralStrategy = ReferralStrategy.valueOf(rawReferralStrategy);
> } catch (final IllegalArgumentException iae) {
> throw new ProviderCreationException(String.format("Unrecgonized 
> authentication strategy '%s'. Possible values are [%s]",
> **rawAuthenticationStrategy**, 
> StringUtils.join(ReferralStrategy.values(), ", ")));
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)