[GitHub] nifi pull request #1116: NIFI-2851 initial commit of perf improvements on Sp...

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1116#discussion_r83263078
  
--- Diff: 
nifi-commons/nifi-utils/src/main/java/org/apache/nifi/stream/io/util/TextLineDemarcator.java
 ---
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.stream.io.util;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStream;
+
+/**
+ * Implementation of demarcator of text lines in the provided
+ * {@link InputStream}. It works similarly to {@link BufferedReader} and its
+ * {@link BufferedReader#readLine()} method, except that it does not create a
+ * String representing the text line and instead returns the offset info for
+ * the computed text line. See {@link #nextOffsetInfo()} and
+ * {@link #nextOffsetInfo(byte[])} for more details.
+ *
+ * This class is NOT thread-safe.
+ */
+public class TextLineDemarcator {
+
+    private final static int INIT_BUFFER_SIZE = 8192;
+
+    private final InputStream is;
+
+    private final int initialBufferSize;
+
+    private byte[] buffer;
+
+    private int index;
+
+    private int mark;
+
+    private long offset;
+
+    private int bufferLength;
+
+    /**
+     * Constructs an instance of demarcator with the provided {@link InputStream}
+     * and the default buffer size.
+     */
+    public TextLineDemarcator(InputStream is) {
+        this(is, INIT_BUFFER_SIZE);
+    }
+
+    /**
+     * Constructs an instance of demarcator with the provided {@link InputStream}
+     * and initial buffer size.
+     */
+    public TextLineDemarcator(InputStream is, int initialBufferSize) {
+        if (is == null) {
+            throw new IllegalArgumentException("'is' must not be null.");
+        }
+        if (initialBufferSize < 1) {
+            throw new IllegalArgumentException("'initialBufferSize' must be > 0.");
+        }
+        this.is = is;
+        this.initialBufferSize = initialBufferSize;
+        this.buffer = new byte[initialBufferSize];
+    }
+
+    /**
+     * Will compute the next offset info for a
+     * text line (line terminated by either '\r', '\n' or '\r\n').
+     *
+     * The offset info is computed and returned as a long[] consisting of
+     * 4 elements: {startOffset, length, crlfLength, startsWithMatch}.
+     *
+     *   startOffset - the offset in the overall stream which represents the beginning of the text line
+     *   length - length of the text line including CRLF characters
+     *   crlfLength - the length of the CRLF. Could be either 1 (if line ends with '\n' or '\r')
+     *                or 2 (if line ends with '\r\n').
+     *   startsWithMatch - value is always 1. See {@link #nextOffsetInfo(byte[])} for more info.
+     *
+     * @return offset info as long[]
+     */
+    public long[] nextOffsetInfo() {
+        return this.nextOffsetInfo(null);
+    }
+
+    /**
+     * Will compute the next offset info for a
+     * text line (line terminated by either '\r', '\n' or '\r\n').
+     *
+     * The offset info is computed and returned as a long[] consisting of
+     * 4 elements: {startOffset, length, crlfLength, startsWithMatch}.
+     *
+     *   startOffset - the offset in the overall stream which represents the beginning of the text line
+     *   length - length of the text line including CRLF characters
+     *   crlfLength - the length of the CRLF. Could be either 1 (if line ends with '\n' or '\r')
+     *                or 2 (if line ends with '\r\n').
+     *   startsWithMatch - value is always 1 unless 'startsWith' is provided. If 'startsWith' is provided it will
+     *                     be compared to the 
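The Javadoc above describes returning a long[] of offset info rather than materializing a String per line. Below is a minimal, hypothetical sketch of that idea, not the actual TextLineDemarcator API: the class name, single-byte reads, and three-element result are all illustrative simplifications, and the code assumes a mark/reset-capable stream.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical, simplified sketch of the offset-info idea: scan for a line
// terminator and report {startOffset, length, crlfLength} rather than building
// a String per line. Assumes the stream supports mark/reset; reads one byte at
// a time for clarity (the real class buffers for performance).
public class OffsetInfoSketch {

    public static long[] nextOffsetInfo(InputStream in, long startOffset) throws IOException {
        long length = 0;
        int crlfLength = 0;
        int b;
        while ((b = in.read()) != -1) {
            length++;
            if (b == '\n') {                 // line ends with '\n'
                crlfLength = 1;
                break;
            }
            if (b == '\r') {                 // '\r' alone or '\r\n'
                crlfLength = 1;
                in.mark(1);
                if (in.read() == '\n') {
                    length++;
                    crlfLength = 2;
                } else {
                    in.reset();              // '\r' not followed by '\n'
                }
                break;
            }
        }
        return length == 0 ? null : new long[] { startOffset, length, crlfLength };
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("abc\r\nde\n".getBytes(StandardCharsets.UTF_8));
        long[] first = nextOffsetInfo(in, 0);                      // "abc\r\n" -> {0, 5, 2}
        long[] second = nextOffsetInfo(in, first[0] + first[1]);   // "de\n"    -> {5, 3, 1}
        System.out.println(first[0] + "," + first[1] + "," + first[2]);
        System.out.println(second[0] + "," + second[1] + "," + second[2]);
    }
}
```

The startsWithMatch element and the byte[] overload from the diff are omitted here for brevity.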

[GitHub] nifi pull request #1116: NIFI-2851 initial commit of perf improvements on Sp...

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1116#discussion_r83256378
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/SplitText.java
 ---
@@ -150,548 +145,320 @@
             .description("If a file cannot be split for some reason, the original file will be routed to this destination and nothing will be routed elsewhere")
             .build();
 
-    private List<PropertyDescriptor> properties;
-    private Set<Relationship> relationships;
+    private static final List<PropertyDescriptor> properties;
+    private static final Set<Relationship> relationships;
 
-    @Override
-    protected void init(final ProcessorInitializationContext context) {
-        final List<PropertyDescriptor> properties = new ArrayList<>();
+    static {
+        properties = new ArrayList<>();
         properties.add(LINE_SPLIT_COUNT);
         properties.add(FRAGMENT_MAX_SIZE);
         properties.add(HEADER_LINE_COUNT);
         properties.add(HEADER_MARKER);
         properties.add(REMOVE_TRAILING_NEWLINES);
-        this.properties = Collections.unmodifiableList(properties);
 
-        final Set<Relationship> relationships = new HashSet<>();
+        relationships = new HashSet<>();
         relationships.add(REL_ORIGINAL);
         relationships.add(REL_SPLITS);
         relationships.add(REL_FAILURE);
-        this.relationships = Collections.unmodifiableSet(relationships);
     }
 
-    @Override
-    protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
-        List<ValidationResult> results = new ArrayList<>();
-
-        final boolean invalidState = (validationContext.getProperty(LINE_SPLIT_COUNT).asInteger() == 0
-                && !validationContext.getProperty(FRAGMENT_MAX_SIZE).isSet());
-
-        results.add(new ValidationResult.Builder()
-            .subject("Maximum Fragment Size")
-            .valid(!invalidState)
-            .explanation("Property must be specified when Line Split Count is 0")
-            .build()
-        );
-        return results;
-    }
-
-    @Override
-    public Set<Relationship> getRelationships() {
-        return relationships;
-    }
-
-    @Override
-    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
-        return properties;
-    }
-
-    private int readLines(final InputStream in, final int maxNumLines, final long maxByteCount, final OutputStream out,
-                          final boolean includeLineDelimiter, final byte[] leadingNewLineBytes) throws IOException {
-        final EndOfLineBuffer eolBuffer = new EndOfLineBuffer();
-
-        byte[] leadingBytes = leadingNewLineBytes;
-        int numLines = 0;
-        long totalBytes = 0L;
-        for (int i = 0; i < maxNumLines; i++) {
-            final EndOfLineMarker eolMarker = countBytesToSplitPoint(in, out, totalBytes, maxByteCount, includeLineDelimiter, eolBuffer, leadingBytes);
-            final long bytes = eolMarker.getBytesConsumed();
-            leadingBytes = eolMarker.getLeadingNewLineBytes();
-
-            if (includeLineDelimiter && out != null) {
-                if (leadingBytes != null) {
-                    out.write(leadingBytes);
-                    leadingBytes = null;
-                }
-                eolBuffer.drainTo(out);
-            }
-            totalBytes += bytes;
-            if (bytes <= 0) {
-                return numLines;
-            }
-            numLines++;
-            if (totalBytes >= maxByteCount) {
-                break;
-            }
-        }
-        return numLines;
-    }
-
-    private EndOfLineMarker countBytesToSplitPoint(final InputStream in, final OutputStream out, final long bytesReadSoFar, final long maxSize,
-                                                   final boolean includeLineDelimiter, final EndOfLineBuffer eolBuffer, final byte[] leadingNewLineBytes) throws IOException {
-        long bytesRead = 0L;
-        final ByteArrayOutputStream buffer;
-        if (out != null) {
-            buffer = new ByteArrayOutputStream();
-        } else {
-            buffer = null;
-        }
-        byte[] bytesToWriteFirst = leadingNewLineBytes;
-
-        in.mark(Integer.MAX_VALUE);
-        while (true) {
-            final int nextByte = in.read();
-
-            // if we hit end of stream we're done
-            if (nextByte == -1) {
-                if (buffer != null) {
-                    buffer.writeTo(out);
-                    buffer.close();
-                }
-                return new EndOfLineMarker(bytesRead, eolBuffer, true, bytesToWriteFirst);  // 
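The change above replaces per-instance initialization in init() with a static initializer. A generic sketch of that pattern follows; the class, field, and string values are illustrative, not SplitText's, and unlike the diff as shown, this sketch also keeps an unmodifiable wrapper around the shared list.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Generic sketch of the static-initializer pattern: build a shared, immutable
// list once when the class loads, instead of rebuilding it per instance in an
// init() callback. Names and values here are illustrative only.
public class DescriptorHolder {
    private static final List<String> PROPERTIES;

    static {
        final List<String> props = new ArrayList<>();
        props.add("Line Split Count");
        props.add("Maximum Fragment Size");
        PROPERTIES = Collections.unmodifiableList(props);  // every instance shares one list
    }

    public List<String> getSupportedPropertyDescriptors() {
        return PROPERTIES;
    }
}
```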

[GitHub] nifi pull request #1116: NIFI-2851 initial commit of perf improvements on Sp...

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1116#discussion_r83260348
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/SplitText.java
 ---

[GitHub] nifi pull request #1116: NIFI-2851 initial commit of perf improvements on Sp...

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1116#discussion_r83257640
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/SplitText.java
 ---

[GitHub] nifi pull request #1116: NIFI-2851 initial commit of perf improvements on Sp...

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1116#discussion_r83260507
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/SplitText.java
 ---

[jira] [Commented] (NIFI-2372) Allow ProcessSession to be passed to operations annotated with @OnUnscsheduled

2016-10-13 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572518#comment-15572518
 ] 

Joseph Witt commented on NIFI-2372:
---

I don't think this would be possible without having a different API specific to processors which have no ability to construct process sessions on their own but rather only use what is passed into them. Also, this probably violates some assumptions about the process session lifecycle. Alternative approaches should be considered.

> Allow ProcessSession to be passed to operations annotated with @OnUnscsheduled
> --
>
> Key: NIFI-2372
> URL: https://issues.apache.org/jira/browse/NIFI-2372
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Oleg Zhurakousky
>Assignee: Oleg Zhurakousky
>Priority: Minor
>
> With adoption of NiFi there are more and more cases where Processors that
> source their data from external systems (e.g., Email, JMS, MQTT, etc.) may
> need to hold an internal queue of data to be sent as content in individual
> FlowFiles. This implies somewhat of a persistent *state* between subsequent
> invocations of the _onTriggered(..)_ operation. This creates a problem for
> processors that still have data in the internal queue while being stopped.
> While stopping the processor is not a real issue (since the instance of the
> processor is preserved), the subsequent shutdown of NiFi that may follow is.
> One way of dealing with it is to drain the internal data queue before
> shutting down the processor, but that requires access to a ProcessSession,
> which NiFi does not currently supply to operations annotated with
> _@OnUnscheduled_, resulting in a variety of workarounds (e.g., keeping the
> ProcessSession as an instance variable of the Processor instance, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
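The drain-on-stop workaround described in the issue can be sketched generically, outside the NiFi API. Everything below is illustrative: BufferingSource, receive, and onUnscheduled are hypothetical names, and the Consumer stands in for whatever transfer mechanism (e.g. a ProcessSession) a real processor would use.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Illustrative sketch (not NiFi's API) of the drain-on-stop pattern: a source
// that buffers data internally and flushes the buffer to a caller-supplied
// consumer when it is unscheduled, so nothing is lost on shutdown.
public class BufferingSource {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public void receive(String message) {   // called as external data arrives
        queue.offer(message);
    }

    public void onUnscheduled(Consumer<String> transfer) {  // stand-in for a session transfer
        List<String> remaining = new ArrayList<>();
        queue.drainTo(remaining);           // empty the internal queue atomically
        remaining.forEach(transfer);        // hand each item off before stopping
    }
}
```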


[jira] [Updated] (NIFI-2372) Allow ProcessSession to be passed to operations annotated with @OnUnscsheduled

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2372:
--
Fix Version/s: (was: 1.1.0)

> Allow ProcessSession to be passed to operations annotated with @OnUnscsheduled
> --
>
> Key: NIFI-2372
> URL: https://issues.apache.org/jira/browse/NIFI-2372
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Oleg Zhurakousky
>Assignee: Oleg Zhurakousky
>Priority: Minor
>
> With adoption of NiFi there are more and more cases where Processors that
> source their data from external systems (e.g., Email, JMS, MQTT, etc.) may
> need to hold an internal queue of data to be sent as content in individual
> FlowFiles. This implies somewhat of a persistent *state* between subsequent
> invocations of the _onTriggered(..)_ operation. This creates a problem for
> processors that still have data in the internal queue while being stopped.
> While stopping the processor is not a real issue (since the instance of the
> processor is preserved), the subsequent shutdown of NiFi that may follow is.
> One way of dealing with it is to drain the internal data queue before
> shutting down the processor, but that requires access to a ProcessSession,
> which NiFi does not currently supply to operations annotated with
> _@OnUnscheduled_, resulting in a variety of workarounds (e.g., keeping the
> ProcessSession as an instance variable of the Processor instance, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2049) Bulletins do not show the 'Category'

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2049:
--
Fix Version/s: (was: 1.2.0)

> Bulletins do not show the 'Category'
> 
>
> Key: NIFI-2049
> URL: https://issues.apache.org/jira/browse/NIFI-2049
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Mark Payne
>Priority: Minor
>
> Currently, when bulletins are rendered in the UI, the bulletin's category is 
> not shown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2314) UI - Consistent Dialog Resizing

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2314:
--
Fix Version/s: (was: 1.2.0)

> UI - Consistent Dialog Resizing
> ---
>
> Key: NIFI-2314
> URL: https://issues.apache.org/jira/browse/NIFI-2314
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Matt Gilman
>Priority: Minor
>
> For dialogs that support resizing, that behavior should be consistent. The
> stats history resizing originates from the center of the screen while the
> property value dialogs expand down and to the right.
> Possibly consider using center-oriented resizing throughout the application
> to further increase consistency with regard to dialog positioning/draggable
> behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1949) Group and AccessPolicy should have a "description" field

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-1949:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> Group and AccessPolicy should have a "description" field
> 
>
> Key: NIFI-1949
> URL: https://issues.apache.org/jira/browse/NIFI-1949
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Jeff Storck
>Priority: Minor
> Fix For: 1.2.0
>
>
> Add a nullable string description field to Group and AccessPolicy objects of 
> the Authorization API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2049) Bulletins do not show the 'Category'

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2049:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> Bulletins do not show the 'Category'
> 
>
> Key: NIFI-2049
> URL: https://issues.apache.org/jira/browse/NIFI-2049
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Mark Payne
>Priority: Minor
> Fix For: 1.2.0
>
>
> Currently, when bulletins are rendered in the UI, the bulletin's category is 
> not shown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2315) Allow ZK ACL to be configurable for clustering z-nodes

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2315:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> Allow ZK ACL to be configurable for clustering z-nodes
> --
>
> Key: NIFI-2315
> URL: https://issues.apache.org/jira/browse/NIFI-2315
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Bryan Bende
>Assignee: Mark Payne
>Priority: Minor
> Fix For: 1.2.0
>
>
> In the state-management.xml file we provide a configurable property for the 
> ZK ACL and we said:
> "Access Control - Specifies which Access Controls will be applied to the
> ZooKeeper ZNodes that are created by this State Provider. This value must be
> set to one of:
> - Open : ZNodes will be open to any ZooKeeper client.
> - CreatorOnly : ZNodes will be accessible only by the creator. The creator
> will have full access to create children, read, write, delete, and
> administer the ZNodes. This option is available only if access to ZooKeeper
> is secured via Kerberos or if a Username and Password are set."
> We don't have any corresponding ACL property for clustering; we only specify
> the following in nifi.properties:
> nifi.zookeeper.connect.string=${nifi.zookeeper.connect.string}
> nifi.zookeeper.connect.timeout=${nifi.zookeeper.connect.timeout}
> nifi.zookeeper.session.timeout=${nifi.zookeeper.session.timeout}
> nifi.zookeeper.root.node=${nifi.zookeeper.root.node}
> We would want to be able to set both to CreatorOnly when securing the
> connection with Kerberos.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2017) Root Group Port Transmission Icon is inaccurate

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2017:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> Root Group Port Transmission Icon is inaccurate
> ---
>
> Key: NIFI-2017
> URL: https://issues.apache.org/jira/browse/NIFI-2017
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Matt Gilman
>Priority: Minor
> Fix For: 1.2.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2345) No longer a pop-up for deleting a connection going to a running processor

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2345:
--
Fix Version/s: (was: 1.1.0)

> No longer a pop-up for deleting a connection going to a running processor
> -
>
> Key: NIFI-2345
> URL: https://issues.apache.org/jira/browse/NIFI-2345
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Percivall
>Priority: Minor
>
> While testing NIFI-2035, I noticed that when I tried to delete a connection
> whose destination was running, nothing seemed to happen. It correctly stops
> me from deleting the connection, but there is no longer a pop-up telling me
> why.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2314) UI - Consistent Dialog Resizing

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2314:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> UI - Consistent Dialog Resizing
> ---
>
> Key: NIFI-2314
> URL: https://issues.apache.org/jira/browse/NIFI-2314
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Matt Gilman
>Priority: Minor
> Fix For: 1.2.0
>
>
> For dialogs that support resizing, that behavior should be consistent. The
> stats history resizing originates from the center of the screen while the
> property value dialogs expand down and to the right.
> Possibly consider using center-oriented resizing throughout the application
> to further increase consistency with regard to dialog positioning/draggable
> behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2868) nifi.flowcontroller.autoResumeState=false does not work when NiFi is clustered

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2868:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> nifi.flowcontroller.autoResumeState=false does not work when NiFi is clustered
> --
>
> Key: NIFI-2868
> URL: https://issues.apache.org/jira/browse/NIFI-2868
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
> Environment: Centos 7
>Reporter: Matthew Clarke
> Fix For: 1.2.0
>
>
> In a NiFi clustered environment it is not possible to change the
> nifi.flowcontroller.autoResumeState property from true to false in order
> to bring the flow up in a completely stopped state.
> This property only seems to work for standalone instances.
> While the nodes do come up with all processors in a stopped state initially,
> as soon as election completes, previously running processors are started.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2854) Allow Provenance Repository to roll back to a previous implementation

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2854:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> Allow Provenance Repository to roll back to a previous implementation
> -
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.2.0
>
>
> Currently, if we start up a new version of NiFi and the format of the Data 
> Provenance data has changed, we are not able to roll back to a previous 
> version of NiFi. If we do, NiFi will fail to read the Provenance Data and not 
> start up. We should instead provide the ability to write data to the 
> repository in such a way that old versions of the repository will still be 
> able to read the data, so that we can roll back.





[jira] [Updated] (NIFI-2884) UI - Support bulk user/group add when editing a policy

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2884:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> UI - Support bulk user/group add when editing a policy
> --
>
> Key: NIFI-2884
> URL: https://issues.apache.org/jira/browse/NIFI-2884
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.2.0
>
>






[jira] [Updated] (NIFI-2886) Framework doesn't release thread if processor administratively yields

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2886:
--
Fix Version/s: 1.2.0

> Framework doesn't release thread if processor administratively yields
> -
>
> Key: NIFI-2886
> URL: https://issues.apache.org/jira/browse/NIFI-2886
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: David A. Wynne
> Fix For: 1.2.0
>
>
> If a processor yields due to an exception from onScheduled, it doesn't 
> immediately release the thread back to the pool.





[jira] [Updated] (NIFI-2888) Display processor fill color when sufficiently zoomed out.

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2888:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> Display processor fill color when sufficiently zoomed out.
> --
>
> Key: NIFI-2888
> URL: https://issues.apache.org/jira/browse/NIFI-2888
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Scott Aslan
>Assignee: Scott Aslan
> Fix For: 1.2.0
>
>
> As a user when viewing the zoomed out overview of my flow I want to be able 
> to quickly identify processors based on their fill color.





[jira] [Resolved] (NIFI-2798) Upgrade Zookeeper version due to CVE-2016-5017

2016-10-13 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto resolved NIFI-2798.
-
Resolution: Not A Problem

The vulnerability is only in the C client command-line shell, which is not 
shipped as part of NiFi. 

> Upgrade Zookeeper version due to CVE-2016-5017
> --
>
> Key: NIFI-2798
> URL: https://issues.apache.org/jira/browse/NIFI-2798
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Andy LoPresto
>Priority: Critical
>  Labels: cve, security, zookeeper
> Fix For: 1.1.0, 0.8.0
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> The currently used version of Zookeeper {{3.4.6}} is subject to a buffer 
> overflow attack using the C command-line interface (documented as 
> CVE-2016-5017 [1]). Version {{3.4.9}} patches this issue. In 
> {{nifi/pom.xml}}, this version number should be updated, and basic 
> compatibility/smoke tests should be run to ensure no new issues are 
> introduced by the version increment. 
> [1] https://zookeeper.apache.org/security.html#CVE-2016-5017
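
The described change would be a small version bump in the build. A hedged sketch of what that might look like in {{nifi/pom.xml}} (the exact location and structure of the dependency declaration in the actual pom may differ):

```xml
<!-- Illustrative sketch only; the real nifi/pom.xml layout may differ. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
      <!-- was 3.4.6; 3.4.9 patches CVE-2016-5017 -->
      <version>3.4.9</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```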





[jira] [Updated] (NIFI-2798) Upgrade Zookeeper version due to CVE-2016-5017

2016-10-13 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-2798:
-
Assignee: (was: Mark Payne)

> Upgrade Zookeeper version due to CVE-2016-5017
> --
>
> Key: NIFI-2798
> URL: https://issues.apache.org/jira/browse/NIFI-2798
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Andy LoPresto
>Priority: Critical
>  Labels: cve, security, zookeeper
> Fix For: 1.1.0, 0.8.0
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> The currently used version of Zookeeper {{3.4.6}} is subject to a buffer 
> overflow attack using the C command-line interface (documented as 
> CVE-2016-5017 [1]). Version {{3.4.9}} patches this issue. In 
> {{nifi/pom.xml}}, this version number should be updated, and basic 
> compatibility/smoke tests should be run to ensure no new issues are 
> introduced by the version increment. 
> [1] https://zookeeper.apache.org/security.html#CVE-2016-5017





[jira] [Resolved] (NIFI-2799) AWS Credentials for Assume Role Need Proxy

2016-10-13 Thread James Wing (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Wing resolved NIFI-2799.
--
Resolution: Fixed

Thanks, [~ktseytlin].

> AWS Credentials for Assume Role Need Proxy
> --
>
> Key: NIFI-2799
> URL: https://issues.apache.org/jira/browse/NIFI-2799
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Keren Tseytlin
>Assignee: James Wing
>Priority: Minor
> Fix For: 1.1.0
>
>
> As a user of NiFi, when I want to enable cross-account fetching of S3 objects 
> I need the proxy variables to be set in order to generate temporary AWS 
> tokens for STS:AssumeRole.
> Within some enterprise environments, it is necessary to set the proxy 
> variables prior to running AssumeRole methods. Without this being set, the 
> machine in VPC A times out on generating temporary keys and is unable to 
> assume a role as a machine in VPC B. 
> This ticket arose from this conversation: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Nifi-Cross-Account-Download-With-A-Profile-Flag-td13232.html#a13252
> Goal: There are files stored in an S3 bucket in VPC B. My NiFi machines are 
> in VPC A. I want NiFi to be able to get those files from VPC B. VPC A and VPC 
> B need to communicate in the FetchS3Object component.





[GitHub] nifi issue #1113: NIFI-2873: Nifi throws UnknownHostException with HA NameNo...

2016-10-13 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1113
  
Reviewing...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2873) PutHiveStreaming throws UnknownHostException with HA NameNode

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572456#comment-15572456
 ] 

ASF GitHub Bot commented on NIFI-2873:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1113
  
Reviewing...


> PutHiveStreaming throws UnknownHostException with HA NameNode
> -
>
> Key: NIFI-2873
> URL: https://issues.apache.org/jira/browse/NIFI-2873
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Franco
> Fix For: 1.1.0
>
>
> This is the same issue that previously affected Spark:
> https://github.com/Jianfeng-chs/spark/commit/9f2b2bf001262215742be418f24d5093c92ff10f
> We are experiencing this issue consistently when trying to use 
> PutHiveStreaming. In theory this should also be a problem with GetHDFS, but for 
> whatever reason it is not.
> The fix is identical: preloading the Hadoop configuration during the 
> processor setup phase. Pull request forthcoming.
> {code:title=Stack Trace|borderStyle=solid}
> 2016-10-06 16:07:59,225 ERROR [Timer-Driven Process Thread-9] 
> o.a.n.processors.hive.PutHiveStreaming
> java.lang.IllegalArgumentException: java.net.UnknownHostException: tdcdv2
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
>  ~[hadoop-common-2.6.2.jar:na]
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
>  ~[hadoop-hdfs-2.6.2.jar:na]
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176) 
> ~[hadoop-hdfs-2.6.2.jar:na]
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:668) 
> ~[hadoop-hdfs-2.6.2.jar:na]
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:604) 
> ~[hadoop-hdfs-2.6.2.jar:na]
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
>  ~[hadoop-hdfs-2.6.2.jar:na]
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596) 
> ~[hadoop-common-2.6.2.jar:na]
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91) 
> ~[hadoop-common-2.6.2.jar:na]
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630) 
> ~[hadoop-common-2.6.2.jar:na]
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) 
> ~[hadoop-common-2.6.2.jar:na]
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) 
> ~[hadoop-common-2.6.2.jar:na]
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) 
> ~[hadoop-common-2.6.2.jar:na]
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.<init>(OrcRecordUpdater.java:221)
>  ~[hive-exec-1.2.1.jar:1.2.1]
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat.getRecordUpdater(OrcOutputFormat.java:292)
>  ~[hive-exec-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.AbstractRecordWriter.createRecordUpdater(AbstractRecordWriter.java:141)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.AbstractRecordWriter.newBatch(AbstractRecordWriter.java:121)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.StrictJsonWriter.newBatch(StrictJsonWriter.java:37)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:509)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:461)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:345)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:325)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$1(HiveWriter.java:250)
>  ~[nifi-hive-processors-1.0.0.jar:1.0.0]
> {code}





[jira] [Commented] (NIFI-2851) Improve performance of SplitText

2016-10-13 Thread Oleg Zhurakousky (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572440#comment-15572440
 ] 

Oleg Zhurakousky commented on NIFI-2851:


The test has been fixed and the CapabilityDescription was updated.

> Improve performance of SplitText
> 
>
> Key: NIFI-2851
> URL: https://issues.apache.org/jira/browse/NIFI-2851
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Oleg Zhurakousky
> Fix For: 1.1.0
>
>
> SplitText is fairly CPU-intensive and quite slow. A simple flow that splits a 
> 1.4 million line text file into 5k line chunks and then splits those 5k line 
> chunks into 1 line chunks is only capable of pushing through about 10k lines 
> per second. This equates to about 10 MB/sec. JVisualVM shows that the 
> majority of the time is spent in the locateSplitPoint() method. Isolating 
> this code and inspecting how it works, and using some micro-benchmarking, it 
> appears that if we refactor the calls to InputStream.read() to instead read 
> into a byte array, we can improve performance.
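
The refactoring described above — replacing per-byte InputStream.read() calls with reads into a byte array — can be sketched as follows. The class and method names are illustrative, not the actual SplitText/locateSplitPoint() internals:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// A minimal sketch: scanning for line boundaries by filling a byte[]
// buffer rather than calling InputStream.read() once per byte.
public class LineScanSketch {

    // Counts newline-terminated lines by reading in 8 KB chunks.
    static long countLines(InputStream in) throws IOException {
        byte[] buf = new byte[8192];
        long lines = 0;
        int read;
        while ((read = in.read(buf)) != -1) {
            for (int i = 0; i < read; i++) {
                if (buf[i] == '\n') {
                    lines++;
                }
            }
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "a\nb\nc\n".getBytes();
        System.out.println(countLines(new ByteArrayInputStream(data))); // prints 3
    }
}
```

The bulk read amortizes the per-call overhead of InputStream.read() across thousands of bytes, which is where the micro-benchmarks showed the win.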





[jira] [Commented] (NIFI-2791) Create a new Expression Language Function to support Java.lang.Math operations

2016-10-13 Thread Joseph Percivall (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572429#comment-15572429
 ] 

Joseph Percivall commented on NIFI-2791:


With the pruning going on for 1.1.0 I'd like this to keep its fix version. I 
have done much of the work for it already, but it is blocked by NIFI-1662. So I 
just need NIFI-1662 to be reviewed/merged (fix version of 1.1.0), and then I 
will rebase, finalize, and open a PR. 

These go hand in hand, and having this Math functionality will go a long way 
toward facilitating numeric operations using EL.

> Create a new Expression Language Function to support Java.lang.Math operations
> --
>
> Key: NIFI-2791
> URL: https://issues.apache.org/jira/browse/NIFI-2791
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Percivall
>Assignee: Joseph Percivall
> Fix For: 1.1.0
>
>
> Once EL is improved to support decimals (NIFI-1662), it will be desirable to 
> support higher-level math functions than are currently implemented. The 
> easiest way to do this is to provide access to the Math class [1]. This should 
> provide all the building blocks necessary to do any desired operations on 
> decimals.
> [1] https://docs.oracle.com/javase/7/docs/api/java/lang/Math.html
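
The idea of delegating generically to java.lang.Math could be sketched as below (here via reflection, so that an expression such as math("floor", 2.7) resolves to Math.floor(2.7)). This is illustrative only; it is not the actual NiFi Expression Language implementation:

```java
import java.lang.reflect.Method;

// Hypothetical delegate: looks up a one-argument double method on
// java.lang.Math by name and invokes it.
public class MathDelegateSketch {

    static double math(String name, double arg) throws Exception {
        Method m = Math.class.getMethod(name, double.class);
        return (double) m.invoke(null, arg);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(math("floor", 2.7)); // prints 2.0
        System.out.println(math("sqrt", 16.0)); // prints 4.0
    }
}
```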





[jira] [Assigned] (NIFI-2850) Provide ability for a FlowFile to be migrated from one Process Session to another

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt reassigned NIFI-2850:
-

Assignee: Joseph Witt  (was: Mark Payne)

will review

> Provide ability for a FlowFile to be migrated from one Process Session to 
> another
> -
>
> Key: NIFI-2850
> URL: https://issues.apache.org/jira/browse/NIFI-2850
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Joseph Witt
> Fix For: 1.1.0
>
>
> Currently, the MergeContent processor creates a separate ProcessSession for 
> each FlowFile that it pulls. This is done so that we can ensure that we can 
> commit all Process Sessions when a bin is full. Unfortunately, this means 
> that MergeContent is required to call ProcessSession.get() many times, which 
> adds a lot of contention on the FlowFile Queue. If we allow FlowFiles to be 
> migrated from one session to another, we can have a session per bin, and then 
> use ProcessSession.get(100) to greatly reduce lock contention. This will 
> likely have benefits in other processors as well.
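
A toy model of the proposed capability — transferring ownership of FlowFiles from one session to another so a processor like MergeContent could keep a single session per bin. The Session class here is hypothetical; it is not the NiFi ProcessSession API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SessionMigrationSketch {

    static class Session {
        final List<String> flowFiles = new ArrayList<>();

        // Moves the given FlowFiles out of this session and into target.
        void migrate(Session target, List<String> toMove) {
            flowFiles.removeAll(toMove);
            target.flowFiles.addAll(toMove);
        }
    }

    public static void main(String[] args) {
        Session pulling = new Session();
        pulling.flowFiles.addAll(Arrays.asList("ff-1", "ff-2"));
        Session bin = new Session();
        pulling.migrate(bin, Arrays.asList("ff-1"));
        System.out.println(pulling.flowFiles); // prints [ff-2]
        System.out.println(bin.flowFiles);     // prints [ff-1]
    }
}
```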





[jira] [Resolved] (NIFI-2830) Template UUIDs out of sync in cluster

2016-10-13 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman resolved NIFI-2830.
---
Resolution: Fixed

Both sub tasks have been resolved.

> Template UUIDs out of sync in cluster
> -
>
> Key: NIFI-2830
> URL: https://issues.apache.org/jira/browse/NIFI-2830
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Aldrin Piri
>Assignee: Matt Gilman
> Fix For: 1.1.0
>
>
> I uploaded a template to a clustered instance of NiFi which presented me with 
> a success dialog.  This, and subsequent actions, were performed through a 
> node, nifi1.  Upon trying to add this template to the canvas, I received a 
> message that
> {code}
> Node nifi2:8443 is unable to fulfill this request due to: Unable to locate 
> template with id '76c907b3-2664-30e4-987b-e9021ebcc844'.
> {code}
> Upon inspecting several of the nodes in the cluster, I saw 
> that all but nifi2 had the above-listed UUID; nifi2 had a different UUID. 
> Upon trying to delete the template, I was presented with the same error.  
> This behavior, inclusive of the error message, was the same when performing 
> the same actions on nifi2.
> After re-uploading the template with a different name, everything worked as 
> anticipated.
> There was nothing of note in nifi-app or nifi-user logs beyond the 
> information provided in the error dialog.  Of possible note is that the node 
> in question was possibly having some intermittent network issues but was 
> listed as continuously connected on the node I was accessing.





[jira] [Updated] (NIFI-2818) NIFI requires write access to NIFI_HOME/lib upon start

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2818:
--
Fix Version/s: (was: 1.1.0)

> NIFI requires write access to NIFI_HOME/lib upon start
> --
>
> Key: NIFI-2818
> URL: https://issues.apache.org/jira/browse/NIFI-2818
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Andre
>Assignee: Andre
>
> As part of NIFI-1500 we noted that NiFi requires what can be described as 
> excessive filesystem privileges to be executed.
> One of the issues identified is that NiFi requires write access to 
> NIFI_HOME/lib as illustrated by the following:
> {code}
> nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-nar-utils/src/main/java/org/apache/nifi/nar/NarUnpacker.java
>  for (Path narLibraryDir : narLibraryDirs) {
> File narDir = narLibraryDir.toFile();
> FileUtils.ensureDirectoryExistAndCanAccess(narDir);
> File[] dirFiles = narDir.listFiles(NAR_FILTER);
> if (dirFiles != null) {
> List<File> fileList = Arrays.asList(dirFiles);
> narFiles.addAll(fileList);
> }
> }
> {code}





[jira] [Commented] (NIFI-766) UI should indicate when backpressure is configured for a Connection

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572385#comment-15572385
 ] 

ASF GitHub Bot commented on NIFI-766:
-

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1080
  
@mcgilman @pvillard31 I made one slight change to ConnectionStatus to 
rename the getters/setters from setBackpressureDataSizeThresholdLong to 
setBackpressureBytesThreshold to align with the existing naming convention. 
Otherwise, all looks great! Tested and verified that all worked as expected. 
The UI looks great. Thanks for updating this, guys. It's been a long time 
coming and will be very helpful! +1 merged to master.


> UI should indicate when backpressure is configured for a Connection
> ---
>
> Key: NIFI-766
> URL: https://issues.apache.org/jira/browse/NIFI-766
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Core UI
>Reporter: Mark Payne
>Assignee: Pierre Villard
> Fix For: 1.1.0
>
> Attachments: backpressure.png, backpressure_and_expiration.png, 
> normal.png
>
>
> It is sometimes unclear why a Processor is not running, if it is due to 
> backpressure. Recommend we add an icon to the Connection label to indicate 
> that backpressure is configured. If backpressure is "applied" (i.e., the 
> backpressure threshold has been reached), that icon should be highlighted 
> somehow.





[jira] [Updated] (NIFI-766) UI should indicate when backpressure is configured for a Connection

2016-10-13 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-766:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> UI should indicate when backpressure is configured for a Connection
> ---
>
> Key: NIFI-766
> URL: https://issues.apache.org/jira/browse/NIFI-766
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Core UI
>Reporter: Mark Payne
>Assignee: Pierre Villard
> Fix For: 1.1.0
>
> Attachments: backpressure.png, backpressure_and_expiration.png, 
> normal.png
>
>
> It is sometimes unclear why a Processor is not running, if it is due to 
> backpressure. Recommend we add an icon to the Connection label to indicate 
> that backpressure is configured. If backpressure is "applied" (i.e., the 
> backpressure threshold has been reached), that icon should be highlighted 
> somehow.





[GitHub] nifi issue #1080: NIFI-766 Added icon on connection when backpressure is ena...

2016-10-13 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1080
  
@mcgilman @pvillard31 I made one slight change to ConnectionStatus to 
rename the getters/setters from setBackpressureDataSizeThresholdLong to 
setBackpressureBytesThreshold to align with the existing naming convention. 
Otherwise, all looks great! Tested and verified that all worked as expected. 
The UI looks great. Thanks for updating this, guys. It's been a long time 
coming and will be very helpful! +1 merged to master.




[jira] [Updated] (NIFI-2723) UI - Decouple component and status refresh

2016-10-13 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-2723:
--
Issue Type: Improvement  (was: Bug)

> UI - Decouple component and status refresh
> --
>
> Key: NIFI-2723
> URL: https://issues.apache.org/jira/browse/NIFI-2723
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Matt Gilman
>
> Currently, the UI refreshes a component and its status together. This logic 
> should be decoupled so we can update the status without running through the 
> logic to update the component.





[jira] [Updated] (NIFI-2723) UI - Decouple component and status refresh

2016-10-13 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-2723:
--
Priority: Minor  (was: Major)

> UI - Decouple component and status refresh
> --
>
> Key: NIFI-2723
> URL: https://issues.apache.org/jira/browse/NIFI-2723
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Matt Gilman
>Priority: Minor
>
> Currently, the UI refreshes a component and its status together. This logic 
> should be decoupled so we can update the status without running through the 
> logic to update the component.





[jira] [Commented] (NIFI-766) UI should indicate when backpressure is configured for a Connection

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572380#comment-15572380
 ] 

ASF GitHub Bot commented on NIFI-766:
-

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1080


> UI should indicate when backpressure is configured for a Connection
> ---
>
> Key: NIFI-766
> URL: https://issues.apache.org/jira/browse/NIFI-766
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Core UI
>Reporter: Mark Payne
>Assignee: Pierre Villard
> Fix For: 1.1.0
>
> Attachments: backpressure.png, backpressure_and_expiration.png, 
> normal.png
>
>
> It is sometimes unclear why a Processor is not running, if it is due to 
> backpressure. Recommend we add an icon to the Connection label to indicate 
> that backpressure is configured. If backpressure is "applied" (i.e., the 
> backpressure threshold has been reached), that icon should be highlighted 
> somehow.





[jira] [Commented] (NIFI-766) UI should indicate when backpressure is configured for a Connection

2016-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572378#comment-15572378
 ] 

ASF subversion and git services commented on NIFI-766:
--

Commit 26f46538b3cb6649918a64014f92d4adb9165133 in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=26f4653 ]

NIFI-766:
- Improved connection UI display when backpressure is enabled
- Updating the connection label to include backpressure indicators for object 
count and data size thresholds.
- Coloring the connection path and drop shadow once backpressure is engaged.
- Fixing bug with expiration icon tooltip.
- Including columns in the summary table for backpressure.
- Updating empty queue action to reload the connection status upon completion 
to ensure an updated count.

This closes #1080.


> UI should indicate when backpressure is configured for a Connection
> ---
>
> Key: NIFI-766
> URL: https://issues.apache.org/jira/browse/NIFI-766
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Core UI
>Reporter: Mark Payne
>Assignee: Pierre Villard
> Fix For: 1.1.0
>
> Attachments: backpressure.png, backpressure_and_expiration.png, 
> normal.png
>
>
> It is sometimes unclear why a Processor is not running, if it is due to 
> backpressure. Recommend we add an icon to the Connection label to indicate 
> that backpressure is configured. If backpressure is "applied" (i.e., the 
> backpressure threshold has been reached), that icon should be highlighted 
> somehow.





[GitHub] nifi pull request #1080: NIFI-766 Added icon on connection when backpressure...

2016-10-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1080




[jira] [Commented] (NIFI-2500) Allow request buffer to be configurable on HandleHTTPRequest processor

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572358#comment-15572358
 ] 

ASF GitHub Bot commented on NIFI-2500:
--

GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/1131

NIFI-2500 made container queue configurable

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-2500

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1131.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1131


commit f6979bc63fce5643f4001e726d8384305fdf24d4
Author: Oleg Zhurakousky 
Date:   2016-10-13T16:08:29Z

NIFI-2500 made container queue configurable




> Allow request buffer to be configurable on HandleHTTPRequest processor
> --
>
> Key: NIFI-2500
> URL: https://issues.apache.org/jira/browse/NIFI-2500
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.1
>Reporter: Matthew Clarke
>Assignee: Oleg Zhurakousky
>Priority: Critical
> Fix For: 1.1.0
>
>
> The request buffer for the HandleHTTPRequest processor is hard-coded to 50. For 
> environments where bursts of requests can come in that exceed that threshold, 
> the processor will return Service Unavailable responses. Users should be 
> able to increase that buffer to meet their dataflow needs, similar to how the 
> ConsumeMQTT processor works.
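
The requested change amounts to sizing the container queue from a configurable value instead of the hard-coded 50. A minimal sketch of the sizing idea, with the NiFi property plumbing omitted and the class/method names illustrative:

```java
import java.util.concurrent.LinkedBlockingQueue;

public class ContainerQueueSketch {

    // Builds the request container queue with a user-supplied capacity.
    static LinkedBlockingQueue<Object> buildContainerQueue(int capacity) {
        // previously equivalent to: new LinkedBlockingQueue<>(50)
        return new LinkedBlockingQueue<>(capacity);
    }

    public static void main(String[] args) {
        LinkedBlockingQueue<Object> queue = buildContainerQueue(200);
        System.out.println(queue.remainingCapacity()); // prints 200
    }
}
```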





[jira] [Updated] (NIFI-2500) Allow request buffer to be configurable on HandleHTTPRequest processor

2016-10-13 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky updated NIFI-2500:
---
Status: Patch Available  (was: Open)

> Allow request buffer to be configurable on HandleHTTPRequest processor
> --
>
> Key: NIFI-2500
> URL: https://issues.apache.org/jira/browse/NIFI-2500
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.1
>Reporter: Matthew Clarke
>Assignee: Oleg Zhurakousky
>Priority: Critical
> Fix For: 1.1.0
>
>
> The request buffer for the HandleHTTPRequest processor is hard-coded to 50. For 
> environments where bursts of requests can come in that exceed that threshold, 
> the processor will return Service Unavailable responses. Users should be 
> able to increase that buffer to meet their dataflow needs, similar to how the 
> ConsumeMQTT processor works.





[GitHub] nifi pull request #1131: NIFI-2500 made container queue configurable

2016-10-13 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/1131

NIFI-2500 made container queue configurable

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-2500

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1131.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1131


commit f6979bc63fce5643f4001e726d8384305fdf24d4
Author: Oleg Zhurakousky 
Date:   2016-10-13T16:08:29Z

NIFI-2500 made container queue configurable




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-2500) Allow request buffer to be configurable on HandleHTTPRequest processor

2016-10-13 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky updated NIFI-2500:
---
Fix Version/s: 1.1.0






[GitHub] nifi-minifi-cpp pull request #18: MINIFI-34 - attempt to progress the CMake ...

2016-10-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/18




[jira] [Updated] (NIFI-2873) PutHiveStreaming throws UnknownHostException with HA NameNode

2016-10-13 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-2873:
---
Status: Patch Available  (was: Open)

> PutHiveStreaming throws UnknownHostException with HA NameNode
> -
>
> Key: NIFI-2873
> URL: https://issues.apache.org/jira/browse/NIFI-2873
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Franco
> Fix For: 1.1.0
>
>
> This is the same issue that previously affected Spark:
> https://github.com/Jianfeng-chs/spark/commit/9f2b2bf001262215742be418f24d5093c92ff10f
> We are experiencing this issue consistently when trying to use 
> PutHiveStreaming. In theory this should be a problem with GetHDFS but for 
> whatever reason it is not.
> The fix is identical, namely preloading the Hadoop configuration during the 
> processor setup phase. Pull request forthcoming.
> {code:title=Stack Trace|borderStyle=solid}
> 2016-10-06 16:07:59,225 ERROR [Timer-Driven Process Thread-9] 
> o.a.n.processors.hive.PutHiveStreaming
> java.lang.IllegalArgumentException: java.net.UnknownHostException: tdcdv2
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
>  ~[hadoop-common-2.6.2.jar:na]
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
>  ~[hadoop-hdfs-2.6.2.jar:na]
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176) 
> ~[hadoop-hdfs-2.6.2.jar:na]
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:668) 
> ~[hadoop-hdfs-2.6.2.jar:na]
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:604) 
> ~[hadoop-hdfs-2.6.2.jar:na]
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
>  ~[hadoop-hdfs-2.6.2.jar:na]
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596) 
> ~[hadoop-common-2.6.2.jar:na]
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91) 
> ~[hadoop-common-2.6.2.jar:na]
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630) 
> ~[hadoop-common-2.6.2.jar:na]
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) 
> ~[hadoop-common-2.6.2.jar:na]
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) 
> ~[hadoop-common-2.6.2.jar:na]
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) 
> ~[hadoop-common-2.6.2.jar:na]
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.<init>(OrcRecordUpdater.java:221)
>  ~[hive-exec-1.2.1.jar:1.2.1]
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat.getRecordUpdater(OrcOutputFormat.java:292)
>  ~[hive-exec-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.AbstractRecordWriter.createRecordUpdater(AbstractRecordWriter.java:141)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.AbstractRecordWriter.newBatch(AbstractRecordWriter.java:121)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.StrictJsonWriter.newBatch(StrictJsonWriter.java:37)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:509)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:461)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:345)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:325)
>  ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
> at 
> org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$1(HiveWriter.java:250)
>  ~[nifi-hive-processors-1.0.0.jar:1.0.0]
> {code}
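The fix described above — loading the Hadoop configuration eagerly during processor setup instead of lazily at the first write — can be sketched with a plain map standing in for Hadoop's `Configuration`. All class, method, and property names below are illustrative assumptions, not the actual PutHiveStreaming code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of eager configuration loading: resolve HA nameservice mappings at
// setup time (NiFi's @OnScheduled phase) so a logical name like "tdcdv2" is
// known before the first streaming write creates a DFS client. The Map here
// stands in for org.apache.hadoop.conf.Configuration.
public class EagerConfigSketch {

    private Map<String, String> hadoopConf;

    // Called once when the processor is scheduled, before any writes.
    void onScheduled(Map<String, String> configResources) {
        hadoopConf = new HashMap<>(configResources); // preload, don't defer
    }

    // Later, during a write, the nameservice is already resolvable.
    String resolveNameNode(String nameservice) {
        String addr = hadoopConf.get("dfs.namenode.rpc-address." + nameservice);
        if (addr == null) {
            // Without preloading, this is roughly where the
            // UnknownHostException surfaced in the stack trace above.
            throw new IllegalStateException("Unknown nameservice: " + nameservice);
        }
        return addr;
    }

    public static void main(String[] args) {
        EagerConfigSketch processor = new EagerConfigSketch();
        processor.onScheduled(Map.of("dfs.namenode.rpc-address.tdcdv2", "namenode1:8020"));
        System.out.println(processor.resolveNameNode("tdcdv2"));
    }
}
```

The design point is ordering, not caching: the HA resolution machinery must be registered before any component that builds a filesystem client runs.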





[jira] [Updated] (NIFI-2773) Allow search results to be kept open

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2773:
--
Fix Version/s: (was: 1.1.0)

> Allow search results to be kept open
> 
>
> Key: NIFI-2773
> URL: https://issues.apache.org/jira/browse/NIFI-2773
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Mark Payne
>
> I wanted to make a change to each instance of the PublishKafka processors on 
> my canvas. I have 5 instances. To do this, I had to search for PublishKafka, 
> select the first result, change it, search for PublishKafka, select the 
> second result, change it, and so on. This is time consuming and gets much 
> more difficult if there are more search results.
> We should allow the user to 'pin' the search results or something of that 
> nature so that the results do not go away when one is selected. Instead, they 
> should go away only after I choose to close them. This way, I could search 
> for PublishKafka, update the first one, then just click the next result and 
> update it, click the next result, and so on.





[jira] [Commented] (NIFI-2896) Perform Release Management Functions for 0.7.1

2016-10-13 Thread Joe Skora (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572196#comment-15572196
 ] 

Joe Skora commented on NIFI-2896:
-

Bumping to commit 
[9cbb001|https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=99874055bd649895dd4ddc68e294998929cbb001]
 to catch the NIFI-2801 documentation cleanup.

> Perform Release Management Functions for 0.7.1
> --
>
> Key: NIFI-2896
> URL: https://issues.apache.org/jira/browse/NIFI-2896
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Skora
>Assignee: Joe Skora
>  Labels: release
> Fix For: 0.7.1
>
>






[GitHub] nifi-minifi-cpp pull request #14: MINIFI-34 Establishing CMake build system ...

2016-10-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/14




[jira] [Updated] (NIFI-2730) File Authorizer - Support configurable anonymous access

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2730:
--
Fix Version/s: (was: 1.1.0)

> File Authorizer - Support configurable anonymous access
> ---
>
> Key: NIFI-2730
> URL: https://issues.apache.org/jira/browse/NIFI-2730
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Matt Gilman
>
> In 1.0.0 a delegated authorization model is used to make access decisions. 
> Whether or not an anonymous user is allowed would be a function of the 
> authorizer.
> Additionally, may need a framework property to indicate if we want to 
> authenticate anonymous users.





[jira] [Updated] (NIFI-2723) UI - Decouple component and status refresh

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2723:
--
Fix Version/s: (was: 1.1.0)

> UI - Decouple component and status refresh
> --
>
> Key: NIFI-2723
> URL: https://issues.apache.org/jira/browse/NIFI-2723
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Matt Gilman
>
> Currently, the UI refreshes a component and its status together. This logic 
> should be decoupled so we can update the status without running through the 
> logic to update the component.





[jira] [Updated] (NIFI-2684) Validation error messages should refer to propertyDescriptor using its displayName

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2684:
--
Status: Patch Available  (was: Open)

> Validation error messages should refer to propertyDescriptor using its 
> displayName
> --
>
> Key: NIFI-2684
> URL: https://issues.apache.org/jira/browse/NIFI-2684
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Andre
>Assignee: Andre
> Fix For: 1.1.0
>
>
> When certain validation violations are triggered, 
> {{AbstractConfigurableComponent}} refers to the descriptor using 
> {{descriptor.getName()}}. 
> The result is that error messages end up referring to properties by sometimes 
> cryptic names (instead of the "pretty names" defined in {{.displayName}}). 
> Users would be better off if we used {{displayName}} in error messages when 
> available, falling back to {{name}} when {{displayName}} is null.
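The fallback proposed in this ticket fits in a few lines. The `Descriptor` class below is a hypothetical stand-in for NiFi's `PropertyDescriptor`, not the real API.

```java
// Sketch: prefer displayName in validation messages, falling back to name
// when displayName is null. Descriptor is a stand-in, not NiFi's class.
public class DescriptorLabel {

    static final class Descriptor {
        final String name;
        final String displayName; // may be null on older processors

        Descriptor(String name, String displayName) {
            this.name = name;
            this.displayName = displayName;
        }
    }

    // The label a validation error message would use for the property.
    static String label(Descriptor d) {
        return d.displayName != null ? d.displayName : d.name;
    }

    public static void main(String[] args) {
        System.out.println(label(new Descriptor("kafka.bootstrap.servers", "Kafka Brokers")));
        System.out.println(label(new Descriptor("kafka.bootstrap.servers", null)));
    }
}
```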





[jira] [Commented] (NIFI-2473) UX regarding deleting components and connections regressed unintentionally

2016-10-13 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572138#comment-15572138
 ] 

Joseph Witt commented on NIFI-2473:
---

Had a nice discussion with Matt Gilman to better understand the scenario at 
play here. In older NiFi versions, a bug in standalone mode meant that with 
multiple components selected, deletes were not blocked even when the selection 
did not account for all items in a chain. This has been fixed, so it looks 
like a regression, but the behavior was really just made more consistent.

The idea is that single selection actions like (delete, move, copy) do 
generally infer the other impacted relationships (except connection moves which 
are always explicit).  In the case of multiple component selection all items 
involved in the action (delete, move, copy) must be explicitly selected.

Let's leave this ticket open because we could make the single-select and 
multi-select cases behave the same way: either the way single select does now 
or the way multi-select does. I think that JoeP's expectation of behavior is 
the most intuitive. As time/interest dictates we can improve this, but let's 
not have the ticket lurking on any fix version until there is traction.

> UX regarding deleting components and connections regressed unintentionally
> --
>
> Key: NIFI-2473
> URL: https://issues.apache.org/jira/browse/NIFI-2473
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joseph Percivall
>Assignee: Scott Aslan
>
> Currently in master each of these fails but succeeds in 0.x (connection is 
> coming from the source and is connected to the destination):
> destination and connection selected, attempt to delete
> source and connection selected, attempt to delete
> source and destination selected, attempt to delete
> I believe this is due to the connection being "connected" to a component 
> outside of the snippet and in the third case, the components being 
> "connected" to the connection (which is outside the snippet).





[jira] [Updated] (NIFI-2473) UX regarding deleting components and connections regressed unintentionally

2016-10-13 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-2473:
--
Fix Version/s: (was: 1.1.0)






[jira] [Commented] (NIFI-2565) NiFi processor to parse logs using Grok patterns

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572100#comment-15572100
 ] 

ASF GitHub Bot commented on NIFI-2565:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83229088
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestGrokParser.java
 ---
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+/**
+ * Created by snamsi on 05/10/16.
--- End diff --

We should not have usernames here, as Git will provide this information for 
us.


> NiFi processor to parse logs using Grok patterns
> 
>
> Key: NIFI-2565
> URL: https://issues.apache.org/jira/browse/NIFI-2565
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre
> Fix For: 1.1.0
>
>
> Following up on Ryan Ward to create a Grok capable parser
> https://mail-archives.apache.org/mod_mbox/nifi-dev/201606.mbox/%3CCADD=rnPa8nHkJbeM280=PTQ=wurtwhstm5u+7btoo9pcym2...@mail.gmail.com%3E





[GitHub] nifi pull request #1108: NIFI-2565: add Grok parser

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83228132
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GrokParser.java
 ---
@@ -0,0 +1,243 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import oi.thekraken.grok.api.Grok;
+import oi.thekraken.grok.api.Match;
+import oi.thekraken.grok.api.exception.GrokException;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.flowfile.FlowFile;
+
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.io.StreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.BufferedOutputStream;
+import org.apache.nifi.stream.io.StreamUtils;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.charset.Charset;
+import java.util.List;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.ArrayList;
+import java.util.Collections;
+
+
+@Tags({"Grok Processor"})
+@CapabilityDescription("Use Grok expression ,a la logstash, to parse 
data.")
+@SeeAlso({})
+@ReadsAttributes({@ReadsAttribute(attribute="", description="")})
+@WritesAttributes({@WritesAttribute(attribute="", description="")})
+public class GrokParser extends AbstractProcessor {
+
+
+public static final String DESTINATION_ATTRIBUTE = 
"flowfile-attribute";
+public static final String DESTINATION_CONTENT = "flowfile-content";
+private static final String APPLICATION_JSON = "application/json";
+
+public static final PropertyDescriptor GROK_EXPRESSION = new 
PropertyDescriptor
+.Builder().name("Grok Expression")
+.description("Grok expression")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor GROK_PATTERN_FILE = new 
PropertyDescriptor
+.Builder().name("Grok Pattern file")
+.description("Grok Pattern file definition")
+.required(false)
+.addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor DESTINATION = new 
PropertyDescriptor.Builder()
+.name("Destination")
+.description("Control if Grok output value is written as a new 
flowfile attribute  " +
+"or written in the flowfile content. Writing to 
flowfile content will overwrite any " +
+"existing flowfile content.")
+.required(true)
+.allowableValues(DESTINATION_ATTRIBUTE, DESTINATION_CONTENT)
+


[GitHub] nifi pull request #1108: NIFI-2565: add Grok parser

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83229754
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/TestGrokParser/apache.log
 ---
@@ -0,0 +1 @@
+64.242.88.10 - - [07/Mar/2004:16:05:49 -0800] "GET 
/twiki/bin/edit/Main/Double_bounce_sender?topicparent=Main.ConfigurationVariables
 HTTP/1.1" 401 12846
--- End diff --

We have to ensure that we have proper licensing for these test files. This 
one may be one that you created yourself? If not, we need to ensure that its 
license is properly accounted for - or just mock out a new one.




[GitHub] nifi pull request #1108: NIFI-2565: add Grok parser

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83229592
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/TestGrokParser/patterns
 ---
@@ -0,0 +1,108 @@
+# Forked from 
https://github.com/elasticsearch/logstash/tree/v1.4.0/patterns
--- End diff --

We have to ensure that we have proper licensing for these test files.




[GitHub] nifi pull request #1108: NIFI-2565: add Grok parser

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83227140
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GrokParser.java
 ---
@@ -0,0 +1,243 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import oi.thekraken.grok.api.Grok;
+import oi.thekraken.grok.api.Match;
+import oi.thekraken.grok.api.exception.GrokException;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.flowfile.FlowFile;
+
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.io.StreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.BufferedOutputStream;
+import org.apache.nifi.stream.io.StreamUtils;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.charset.Charset;
+import java.util.List;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.ArrayList;
+import java.util.Collections;
+
+
+@Tags({"Grok Processor"})
+@CapabilityDescription("Use Grok expression ,a la logstash, to parse 
data.")
+@SeeAlso({})
+@ReadsAttributes({@ReadsAttribute(attribute="", description="")})
+@WritesAttributes({@WritesAttribute(attribute="", description="")})
+public class GrokParser extends AbstractProcessor {
+
+
+public static final String DESTINATION_ATTRIBUTE = "flowfile-attribute";
+public static final String DESTINATION_CONTENT = "flowfile-content";
+private static final String APPLICATION_JSON = "application/json";
+
+public static final PropertyDescriptor GROK_EXPRESSION = new PropertyDescriptor
+.Builder().name("Grok Expression")
+.description("Grok expression")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor GROK_PATTERN_FILE = new PropertyDescriptor
+.Builder().name("Grok Pattern file")
+.description("Grok Pattern file definition")
+.required(false)
--- End diff --

If this is not required, how will the processor work if not set?
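One common answer to this question is to fall back to the grok library's bundled default patterns when the optional property is unset. The sketch below is illustrative only, not the PR's code; the class name, the `DEFAULT_PATTERNS` constant, and its value are hypothetical stand-ins:

```java
public class PatternFileResolver {
    // Hypothetical location of the default patterns bundled with the
    // grok library; a real processor would load them from its classpath.
    static final String DEFAULT_PATTERNS = "patterns/default";

    // Returns the user-supplied pattern file when the optional property
    // is set, otherwise falls back to the bundled defaults instead of
    // failing at runtime.
    public static String resolve(String configuredPath) {
        if (configuredPath == null || configuredPath.trim().isEmpty()) {
            return DEFAULT_PATTERNS;
        }
        return configuredPath;
    }

    public static void main(String[] args) {
        System.out.println(resolve(null));                 // patterns/default
        System.out.println(resolve("/etc/grok/patterns")); // /etc/grok/patterns
    }
}
```

With a fallback like this, leaving the property unset is safe, which is what makes `required(false)` defensible.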


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1108: NIFI-2565: add Grok parser

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83228785
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GrokParser.java
 ---
@@ -0,0 +1,243 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import oi.thekraken.grok.api.Grok;
+import oi.thekraken.grok.api.Match;
+import oi.thekraken.grok.api.exception.GrokException;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.flowfile.FlowFile;
+
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.io.StreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.BufferedOutputStream;
+import org.apache.nifi.stream.io.StreamUtils;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.charset.Charset;
+import java.util.List;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.ArrayList;
+import java.util.Collections;
+
+
+@Tags({"Grok Processor"})
+@CapabilityDescription("Use Grok expression ,a la logstash, to parse data.")
+@SeeAlso({})
+@ReadsAttributes({@ReadsAttribute(attribute="", description="")})
+@WritesAttributes({@WritesAttribute(attribute="", description="")})
+public class GrokParser extends AbstractProcessor {
+
+
+public static final String DESTINATION_ATTRIBUTE = "flowfile-attribute";
+public static final String DESTINATION_CONTENT = "flowfile-content";
+private static final String APPLICATION_JSON = "application/json";
+
+public static final PropertyDescriptor GROK_EXPRESSION = new PropertyDescriptor
+.Builder().name("Grok Expression")
+.description("Grok expression")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor GROK_PATTERN_FILE = new PropertyDescriptor
+.Builder().name("Grok Pattern file")
+.description("Grok Pattern file definition")
+.required(false)
+.addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor DESTINATION = new PropertyDescriptor.Builder()
+.name("Destination")
+.description("Control if Grok output value is written as a new flowfile attribute " +
+"or written in the flowfile content. Writing to flowfile content will overwrite any " +
+"existing flowfile content.")
+.required(true)
+.allowableValues(DESTINATION_ATTRIBUTE, DESTINATION_CONTENT)
+

[jira] [Commented] (NIFI-2565) NiFi processor to parse logs using Grok patterns

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572099#comment-15572099
 ] 

ASF GitHub Bot commented on NIFI-2565:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83229023
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GrokParser.java
 ---

[GitHub] nifi pull request #1108: NIFI-2565: add Grok parser

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83226436
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GrokParser.java
 ---
@@ -0,0 +1,243 @@
+
+
+@Tags({"Grok Processor"})
+@CapabilityDescription("Use Grok expression ,a la logstash, to parse data.")
--- End diff --

We should probably expand on this a bit more. Many users will not know what 
Grok is.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1108: NIFI-2565: add Grok parser

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83229335
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestGrokParser.java
 ---
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+/**
+ * Created by snamsi on 05/10/16.
+ */
+public class TestGrokParser {
+
+private TestRunner testRunner;
+final static Path GROK_LOG_INPUT = Paths.get("src/test/resources/TestGrokParser/apache.log");
+final static Path GROK_TEXT_INPUT = Paths.get("src/test/resources/TestGrokParser/simple_text.log");
+
+
+@Before
+public void init() {
+testRunner = TestRunners.newTestRunner(GrokParser.class);
+}
+
+@Test
+public void testGrokParserWithMatchedContent() throws IOException {
+
+
+testRunner.setProperty(GrokParser.GROK_EXPRESSION, "%{COMMONAPACHELOG}");
+testRunner.setProperty(GrokParser.GROK_PATTERN_FILE, "src/test/resources/TestGrokParser/patterns");
+testRunner.enqueue(GROK_LOG_INPUT);
+testRunner.run();
+testRunner.assertAllFlowFilesTransferred(GrokParser.REL_MATCH);
+final MockFlowFile matched = testRunner.getFlowFilesForRelationship(GrokParser.REL_MATCH).get(0);
+
+matched.assertAttributeEquals("verb","GET");
+matched.assertAttributeEquals("response","401");
+matched.assertAttributeEquals("bytes","12846");
+matched.assertAttributeEquals("clientip","64.242.88.10");
+matched.assertAttributeEquals("auth","-");
+matched.assertAttributeEquals("timestamp","07/Mar/2004:16:05:49 -0800");
+matched.assertAttributeEquals("request","/twiki/bin/edit/Main/Double_bounce_sender?topicparent=Main.ConfigurationVariables");
+matched.assertAttributeEquals("httpversion","1.1");
+
+}
+
+@Test
+public void testGrokParserWithUnMatchedContent() throws IOException {
+
+
+testRunner.setProperty(GrokParser.GROK_EXPRESSION, "%{ADDRESS}");
+testRunner.setProperty(GrokParser.GROK_PATTERN_FILE, "src/test/resources/TestGrokParser/patterns");
+testRunner.enqueue(GROK_TEXT_INPUT);
+testRunner.run();
+testRunner.assertAllFlowFilesTransferred(GrokParser.REL_NO_MATCH);
+final MockFlowFile notMatched = testRunner.getFlowFilesForRelationship(GrokParser.REL_NO_MATCH).get(0);
+notMatched.assertContentEquals(GROK_TEXT_INPUT);
+
+}
+
+@Test(expected = java.lang.AssertionError.class)
--- End diff --

Rather than expecting an AssertionError, we should avoid calling testRunner.run() and instead just use testRunner.assertNotValid().
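The idea behind assertNotValid() is to check configuration validity up front and assert on it directly, instead of running the processor and catching the resulting AssertionError. A minimal plain-Java stand-in for that idea (the class, the `KNOWN_PATTERNS` registry, and its contents are hypothetical; the real test would call `testRunner.assertNotValid()` from the nifi-mock module):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GrokExpressionCheck {
    // Hypothetical registry of defined pattern names, standing in for the
    // Grok pattern file; note that "ADDRESS" is deliberately absent.
    static final Set<String> KNOWN_PATTERNS = new HashSet<>();
    static {
        KNOWN_PATTERNS.add("COMMONAPACHELOG");
        KNOWN_PATTERNS.add("IP");
    }

    // Matches %{NAME} references inside a Grok expression.
    static final Pattern REF = Pattern.compile("%\\{(\\w+)\\}");

    // Returns false when the expression references an undefined pattern
    // name -- the condition a validity assertion would surface eagerly.
    public static boolean isValid(String expression) {
        Matcher m = REF.matcher(expression);
        while (m.find()) {
            if (!KNOWN_PATTERNS.contains(m.group(1))) {
                return false;
            }
        }
        return true;
    }
}
```

A test can then assert `!isValid("%{ADDRESS}")` directly, which fails with a clear message at configuration time rather than deep inside a run.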


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2565) NiFi processor to parse logs using Grok patterns

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572089#comment-15572089
 ] 

ASF GitHub Bot commented on NIFI-2565:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83226835
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GrokParser.java
 ---
@@ -0,0 +1,243 @@
+
+
+@Tags({"Grok Processor"})
--- End diff --

We should consider several more tags: grok, log, text, parse, delimit, 
extract


> NiFi processor to parse logs using Grok patterns
> 
>
> Key: NIFI-2565
> URL: https://issues.apache.org/jira/browse/NIFI-2565
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre
> Fix For: 1.1.0
>
>
> Following up on Ryan Ward to create a Grok capable parser
> https://mail-archives.apache.org/mod_mbox/nifi-dev/201606.mbox/%3CCADD=rnPa8nHkJbeM280=PTQ=wurtwhstm5u+7btoo9pcym2...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-2565) NiFi processor to parse logs using Grok patterns

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572094#comment-15572094
 ] 

ASF GitHub Bot commented on NIFI-2565:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83228313
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GrokParser.java
 ---
@@ -0,0 +1,243 @@
+public static final String DESTINATION_ATTRIBUTE = "flowfile-attribute";
+public static final String DESTINATION_CONTENT = "flowfile-content";
+private static final String APPLICATION_JSON = "application/json";
+
+public static final PropertyDescriptor GROK_EXPRESSION = new PropertyDescriptor
+.Builder().name("Grok Expression")
+.description("Grok expression")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
--- End diff --

We should probably use a custom validator to make sure that the configured 
value is valid.
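A custom validator along those lines would attempt to compile the configured expression and turn any compilation failure into a validation error. In the sketch below, `Pattern.compile` stands in for the grok library's compile step, and the class and method names are hypothetical:

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class GrokExpressionValidator {
    // Returns null when the expression compiles, otherwise an error
    // message suitable for a ValidationResult explanation. A real NiFi
    // validator would wrap this in the Validator interface and compile
    // with the grok library instead of java.util.regex.
    public static String validate(String expression) {
        try {
            Pattern.compile(expression);
            return null;
        } catch (PatternSyntaxException e) {
            return "Not a valid expression: " + e.getDescription();
        }
    }

    public static void main(String[] args) {
        System.out.println(validate("(unclosed")); // reports the syntax error
    }
}
```

Compiling at validation time means a bad expression is flagged when the processor is configured, not when the first FlowFile arrives.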



[jira] [Commented] (NIFI-2565) NiFi processor to parse logs using Grok patterns

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572090#comment-15572090
 ] 

ASF GitHub Bot commented on NIFI-2565:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83226928
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GrokParser.java
 ---
@@ -0,0 +1,243 @@
+
+
+@Tags({"Grok Processor"})
+@CapabilityDescription("Use Grok expression ,a la logstash, to parse data.")
+@SeeAlso({})
--- End diff --

No need for the @SeeAlso, @ReadsAtributes, and @WritesAttributes 
annotations if they are not being used.




[jira] [Commented] (NIFI-2565) NiFi processor to parse logs using Grok patterns

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572088#comment-15572088
 ] 

ASF GitHub Bot commented on NIFI-2565:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83226308
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GrokParser.java
 ---
@@ -0,0 +1,243 @@
+
+
+@Tags({"Grok Processor"})
+@CapabilityDescription("Use Grok expression ,a la logstash, to parse data.")
+@SeeAlso({})
+@ReadsAttributes({@ReadsAttribute(attribute="", description="")})
+@WritesAttributes({@WritesAttribute(attribute="", description="")})
+public class GrokParser extends AbstractProcessor {
--- End diff --

The naming convention that we try to stick with for Processors is 
&lt;Verb&gt;&lt;Noun&gt;. While this may be counter-intuitive for a Java Developer, it 
results in making the flow much more readable for users. So we should consider 
ParseLog or GrokLog.


> NiFi processor to parse logs using Grok patterns
> 
>
> Key: NIFI-2565
> URL: https://issues.apache.org/jira/browse/NIFI-2565
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre
> Fix For: 1.1.0
>
>
> Following up on Ryan Ward to create a Grok capable parser
> https://mail-archives.apache.org/mod_mbox/nifi-dev/201606.mbox/%3CCADD=rnPa8nHkJbeM280=PTQ=wurtwhstm5u+7btoo9pcym2...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1108: NIFI-2565: add Grok parser

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83228313
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GrokParser.java
 ---
@@ -0,0 +1,243 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import oi.thekraken.grok.api.Grok;
+import oi.thekraken.grok.api.Match;
+import oi.thekraken.grok.api.exception.GrokException;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.flowfile.FlowFile;
+
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.io.StreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.BufferedOutputStream;
+import org.apache.nifi.stream.io.StreamUtils;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.charset.Charset;
+import java.util.List;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.ArrayList;
+import java.util.Collections;
+
+
+@Tags({"Grok Processor"})
+@CapabilityDescription("Use Grok expression ,a la logstash, to parse data.")
+@SeeAlso({})
+@ReadsAttributes({@ReadsAttribute(attribute="", description="")})
+@WritesAttributes({@WritesAttribute(attribute="", description="")})
+public class GrokParser extends AbstractProcessor {
+
+
+public static final String DESTINATION_ATTRIBUTE = "flowfile-attribute";
+public static final String DESTINATION_CONTENT = "flowfile-content";
+private static final String APPLICATION_JSON = "application/json";
+
+public static final PropertyDescriptor GROK_EXPRESSION = new PropertyDescriptor
+.Builder().name("Grok Expression")
+.description("Grok expression")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
--- End diff --

We should probably use a custom validator to make sure that the configured 
value is valid.
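
A custom validator along the reviewer's lines might look like the sketch below. This is illustrative only: `Grok` and `GrokException` come from the `oi.thekraken` library already imported in the diff, but the `GROK_VALIDATOR` name and the wiring are assumptions, not code from the PR.

```java
import oi.thekraken.grok.api.Grok;
import oi.thekraken.grok.api.exception.GrokException;
import org.apache.nifi.components.ValidationResult;
import org.apache.nifi.components.Validator;

// Hypothetical validator: try to compile the configured expression and
// report the compile error as the validation explanation.
public static final Validator GROK_VALIDATOR = (subject, input, context) -> {
    try {
        final Grok grok = new Grok();
        grok.compile(input); // throws GrokException on an invalid expression
        return new ValidationResult.Builder()
                .subject(subject).input(input).valid(true).build();
    } catch (final GrokException e) {
        return new ValidationResult.Builder()
                .subject(subject).input(input).valid(false)
                .explanation("not a valid Grok expression: " + e.getMessage())
                .build();
    }
};
```

The descriptor would then use `.addValidator(GROK_VALIDATOR)` in place of `NON_EMPTY_VALIDATOR`, so a bad expression is caught at configuration time rather than at runtime.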


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1108: NIFI-2565: add Grok parser

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83229023
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GrokParser.java
 ---
@@ -0,0 +1,243 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import oi.thekraken.grok.api.Grok;
+import oi.thekraken.grok.api.Match;
+import oi.thekraken.grok.api.exception.GrokException;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.flowfile.FlowFile;
+
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.io.StreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.BufferedOutputStream;
+import org.apache.nifi.stream.io.StreamUtils;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.charset.Charset;
+import java.util.List;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.ArrayList;
+import java.util.Collections;
+
+
+@Tags({"Grok Processor"})
+@CapabilityDescription("Use Grok expression ,a la logstash, to parse data.")
+@SeeAlso({})
+@ReadsAttributes({@ReadsAttribute(attribute="", description="")})
+@WritesAttributes({@WritesAttribute(attribute="", description="")})
+public class GrokParser extends AbstractProcessor {
+
+
+public static final String DESTINATION_ATTRIBUTE = "flowfile-attribute";
+public static final String DESTINATION_CONTENT = "flowfile-content";
+private static final String APPLICATION_JSON = "application/json";
+
+public static final PropertyDescriptor GROK_EXPRESSION = new PropertyDescriptor
+.Builder().name("Grok Expression")
+.description("Grok expression")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor GROK_PATTERN_FILE = new PropertyDescriptor
+.Builder().name("Grok Pattern file")
+.description("Grok Pattern file definition")
+.required(false)
+.addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor DESTINATION = new PropertyDescriptor.Builder()
+.name("Destination")
+.description("Control if Grok output value is written as a new flowfile attribute  " +
+"or written in the flowfile content. Writing to flowfile content will overwrite any " +
+"existing flowfile content.")
+.required(true)
+.allowableValues(DESTINATION_ATTRIBUTE, DESTINATION_CONTENT)
+

[jira] [Commented] (NIFI-2565) NiFi processor to parse logs using Grok patterns

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572098#comment-15572098
 ] 

ASF GitHub Bot commented on NIFI-2565:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83229352
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestGrokParser.java
 ---
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+/**
+ * Created by snamsi on 05/10/16.
+ */
+public class TestGrokParser {
+
+private TestRunner testRunner;
+final static Path GROK_LOG_INPUT = Paths.get("src/test/resources/TestGrokParser/apache.log");
+final static Path GROK_TEXT_INPUT = Paths.get("src/test/resources/TestGrokParser/simple_text.log");
+
+
+@Before
+public void init() {
+testRunner = TestRunners.newTestRunner(GrokParser.class);
+}
+
+@Test
+public void testGrokParserWithMatchedContent() throws IOException {
+
+
+testRunner.setProperty(GrokParser.GROK_EXPRESSION, "%{COMMONAPACHELOG}");
+testRunner.setProperty(GrokParser.GROK_PATTERN_FILE, "src/test/resources/TestGrokParser/patterns");
+testRunner.enqueue(GROK_LOG_INPUT);
+testRunner.run();
+testRunner.assertAllFlowFilesTransferred(GrokParser.REL_MATCH);
+final MockFlowFile matched = testRunner.getFlowFilesForRelationship(GrokParser.REL_MATCH).get(0);
+
+matched.assertAttributeEquals("verb","GET");
+matched.assertAttributeEquals("response","401");
+matched.assertAttributeEquals("bytes","12846");
+matched.assertAttributeEquals("clientip","64.242.88.10");
+matched.assertAttributeEquals("auth","-");
+matched.assertAttributeEquals("timestamp","07/Mar/2004:16:05:49 -0800");
+matched.assertAttributeEquals("request","/twiki/bin/edit/Main/Double_bounce_sender?topicparent=Main.ConfigurationVariables");
+matched.assertAttributeEquals("httpversion","1.1");
+
+}
+
+@Test
+public void testGrokParserWithUnMatchedContent() throws IOException {
+
+
+testRunner.setProperty(GrokParser.GROK_EXPRESSION, "%{ADDRESS}");
+testRunner.setProperty(GrokParser.GROK_PATTERN_FILE, "src/test/resources/TestGrokParser/patterns");
+testRunner.enqueue(GROK_TEXT_INPUT);
+testRunner.run();
+testRunner.assertAllFlowFilesTransferred(GrokParser.REL_NO_MATCH);
+final MockFlowFile notMatched = testRunner.getFlowFilesForRelationship(GrokParser.REL_NO_MATCH).get(0);
+notMatched.assertContentEquals(GROK_TEXT_INPUT);
+
+}
+
+@Test(expected = java.lang.AssertionError.class)
+public void testGrokParserWithNotFoundPatternFile() throws IOException {
+
+testRunner.setProperty(GrokParser.GROK_EXPRESSION, "%{COMMONAPACHELOG}");
+testRunner.setProperty(GrokParser.GROK_PATTERN_FILE, "src/test/resources/TestGrokParser/toto_file");
+testRunner.enqueue(GROK_LOG_INPUT);
+testRunner.run();
+
+}
+
+
+@Test(expected = java.lang.AssertionError.class)
--- End diff --

Rather than expecting an AssertionError, we should avoid calling 
testRunner.run() and instead just use testRunner.assertNotValid()
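
The suggested shape of that test would be something like the sketch below. This is an assumption about the reviewer's intent, not code from the PR; `assertNotValid()` is part of NiFi's `TestRunner` mock framework and checks that the processor's configuration fails validation without ever triggering it.

```java
// Hypothetical rewrite of the not-found-pattern-file test: no @Test(expected=...),
// no run() -- just assert the configured processor is invalid.
@Test
public void testGrokParserWithNotFoundPatternFile() throws IOException {
    testRunner.setProperty(GrokParser.GROK_EXPRESSION, "%{COMMONAPACHELOG}");
    testRunner.setProperty(GrokParser.GROK_PATTERN_FILE,
            "src/test/resources/TestGrokParser/toto_file");
    testRunner.assertNotValid();
}
```

This makes the failure mode explicit (invalid configuration) instead of relying on an AssertionError thrown from inside the framework's run() call.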


> NiFi processor to parse logs using Grok patterns
> 
>
> Key: NIFI-2565
> 

[jira] [Commented] (NIFI-2565) NiFi processor to parse logs using Grok patterns

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572087#comment-15572087
 ] 

ASF GitHub Bot commented on NIFI-2565:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83226436
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GrokParser.java
 ---
@@ -0,0 +1,243 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import oi.thekraken.grok.api.Grok;
+import oi.thekraken.grok.api.Match;
+import oi.thekraken.grok.api.exception.GrokException;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.flowfile.FlowFile;
+
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.io.StreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.BufferedOutputStream;
+import org.apache.nifi.stream.io.StreamUtils;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.charset.Charset;
+import java.util.List;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.ArrayList;
+import java.util.Collections;
+
+
+@Tags({"Grok Processor"})
+@CapabilityDescription("Use Grok expression ,a la logstash, to parse data.")
--- End diff --

We should probably expand on this a bit more. Many users will not know what 
Grok is.
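
An expanded description might read along these lines; the wording below is illustrative only, not text from the PR:

```java
// Hypothetical expanded @CapabilityDescription -- explains what Grok is
// and what the processor does with the extracted fields.
@CapabilityDescription("Evaluates a Grok expression (the pattern syntax "
        + "popularized by logstash) against the content of a FlowFile, "
        + "extracting the named fields either into FlowFile attributes or "
        + "into the FlowFile content as JSON.")
```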


> NiFi processor to parse logs using Grok patterns
> 
>
> Key: NIFI-2565
> URL: https://issues.apache.org/jira/browse/NIFI-2565
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre
> Fix For: 1.1.0
>
>
> Following up on Ryan Ward to create a Grok capable parser
> https://mail-archives.apache.org/mod_mbox/nifi-dev/201606.mbox/%3CCADD=rnPa8nHkJbeM280=PTQ=wurtwhstm5u+7btoo9pcym2...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-2565) NiFi processor to parse logs using Grok patterns

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572103#comment-15572103
 ] 

ASF GitHub Bot commented on NIFI-2565:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83229592
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/TestGrokParser/patterns
 ---
@@ -0,0 +1,108 @@
+# Forked from 
https://github.com/elasticsearch/logstash/tree/v1.4.0/patterns
--- End diff --

We have to ensure that we have proper licensing for these test files.


> NiFi processor to parse logs using Grok patterns
> 
>
> Key: NIFI-2565
> URL: https://issues.apache.org/jira/browse/NIFI-2565
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre
> Fix For: 1.1.0
>
>
> Following up on Ryan Ward to create a Grok capable parser
> https://mail-archives.apache.org/mod_mbox/nifi-dev/201606.mbox/%3CCADD=rnPa8nHkJbeM280=PTQ=wurtwhstm5u+7btoo9pcym2...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-2565) NiFi processor to parse logs using Grok patterns

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572097#comment-15572097
 ] 

ASF GitHub Bot commented on NIFI-2565:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1108#discussion_r83229335
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestGrokParser.java
 ---
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+/**
+ * Created by snamsi on 05/10/16.
+ */
+public class TestGrokParser {
+
+private TestRunner testRunner;
+final static Path GROK_LOG_INPUT = Paths.get("src/test/resources/TestGrokParser/apache.log");
+final static Path GROK_TEXT_INPUT = Paths.get("src/test/resources/TestGrokParser/simple_text.log");
+
+
+@Before
+public void init() {
+testRunner = TestRunners.newTestRunner(GrokParser.class);
+}
+
+@Test
+public void testGrokParserWithMatchedContent() throws IOException {
+
+
+testRunner.setProperty(GrokParser.GROK_EXPRESSION, "%{COMMONAPACHELOG}");
+testRunner.setProperty(GrokParser.GROK_PATTERN_FILE, "src/test/resources/TestGrokParser/patterns");
+testRunner.enqueue(GROK_LOG_INPUT);
+testRunner.run();
+testRunner.assertAllFlowFilesTransferred(GrokParser.REL_MATCH);
+final MockFlowFile matched = testRunner.getFlowFilesForRelationship(GrokParser.REL_MATCH).get(0);
+
+matched.assertAttributeEquals("verb","GET");
+matched.assertAttributeEquals("response","401");
+matched.assertAttributeEquals("bytes","12846");
+matched.assertAttributeEquals("clientip","64.242.88.10");
+matched.assertAttributeEquals("auth","-");
+matched.assertAttributeEquals("timestamp","07/Mar/2004:16:05:49 -0800");
+matched.assertAttributeEquals("request","/twiki/bin/edit/Main/Double_bounce_sender?topicparent=Main.ConfigurationVariables");
+matched.assertAttributeEquals("httpversion","1.1");
+
+}
+
+@Test
+public void testGrokParserWithUnMatchedContent() throws IOException {
+
+
+testRunner.setProperty(GrokParser.GROK_EXPRESSION, "%{ADDRESS}");
+testRunner.setProperty(GrokParser.GROK_PATTERN_FILE, "src/test/resources/TestGrokParser/patterns");
+testRunner.enqueue(GROK_TEXT_INPUT);
+testRunner.run();
+testRunner.assertAllFlowFilesTransferred(GrokParser.REL_NO_MATCH);
+final MockFlowFile notMatched = testRunner.getFlowFilesForRelationship(GrokParser.REL_NO_MATCH).get(0);
+notMatched.assertContentEquals(GROK_TEXT_INPUT);
+
+}
+
+@Test(expected = java.lang.AssertionError.class)
--- End diff --

Rather than expecting an AssertionError, we should avoid calling 
testRunner.run() and instead just use testRunner.assertNotValid()


> NiFi processor to parse logs using Grok patterns
> 
>
> Key: NIFI-2565
> URL: https://issues.apache.org/jira/browse/NIFI-2565
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre
> Fix For: 1.1.0
>
>
> Following up on Ryan Ward to create a Grok capable parser
> https://mail-archives.apache.org/mod_mbox/nifi-dev/201606.mbox/%3CCADD=rnPa8nHkJbeM280=PTQ=wurtwhstm5u+7btoo9pcym2...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-2893) Missing or wrong API doc

2016-10-13 Thread Matt Gilman (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572072#comment-15572072
 ] 

Matt Gilman commented on NIFI-2893:
---

The controller services endpoint is not under /process-groups. The endpoints 
are grouped largely according to the resource that's used to authorize the 
request. Because controller services are a cross-cutting concern (meaning 
different processors with potentially different access policies can reference 
the same services) access to them is provided through

- /flow/process-groups/{id}/controller-services
- /flow/controller/controller-services

The /flow endpoints are authorized with the resource that provides access to 
the flow (flow structure... what components, including services, are connected 
but not including what the components actually are) and the UI.

Also, I looked into the documentation reporting a 200 response for all POST 
requests. This is a bug in swagger [1][2] that's generating the incorrect 
response code. I tried out a couple of the suggestions and none of them worked 
as I would like. I'd like to keep this JIRA open until we're able to pull in 
a newer version of swagger that addresses the incorrect response code.

[1] https://github.com/kongchen/swagger-maven-plugin/issues/107
[2] https://github.com/kongchen/swagger-maven-plugin/issues/216

> Missing or wrong API doc
> 
>
> Key: NIFI-2893
> URL: https://issues.apache.org/jira/browse/NIFI-2893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.0.0
>Reporter: Stephane Maarek
>Priority: Minor
>
> At:
> https://nifi.apache.org/docs/nifi-docs/rest-api/
> Missing:
> - GET  /process-groups/{id}/controller-services
> Erroneous:
> - POST /process-groups/{id}/templates/upload
> > should give a 201 in case of created, returns Location in header so we 
> can get the ID out of it
> > it'd be good to have a description of the returned xml (nodes, etc)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #1108: NIFI-2565: add Grok parser

2016-10-13 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1108
  
@selim-namsi Thanks for contributing this! I have actually been very 
interested in using NiFi to do some log parsing but hadn't really dug in very 
much to understand the best way to go about it. This looks like it could be 
very powerful!

Before we get this merged into the codebase, though, it looks like there is 
some work that needs to be done to the PR. The concern stems, I think, from you 
not yet being overly familiar with the API, as there are empty 
@ReadsAttributes, @WritesAttributes annotations, etc. But the great news is 
that the NiFi community tends to be very inclusive and will help to get 
everything in great shape!

One thing that I did notice is that you updated the Licensing information, 
which is one of the most commonly overlooked issues. So very glad that's there. 
I'll leave some inline feedback on things that I notice, but very much looking 
forward to this getting in!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2565) NiFi processor to parse logs using Grok patterns

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572044#comment-15572044
 ] 

ASF GitHub Bot commented on NIFI-2565:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1108
  
@selim-namsi Thanks for contributing this! I have actually been very 
interested in using NiFi to do some log parsing but hadn't really dug in very 
much to understand the best way to go about it. This looks like it could be 
very powerful!

Before we get this merged into the codebase, though, it looks like there is 
some work that needs to be done to the PR. The concern stems, I think, from you 
not yet being overly familiar with the API, as there are empty 
@ReadsAttributes, @WritesAttributes annotations, etc. But the great news is 
that the NiFi community tends to be very inclusive and will help to get 
everything in great shape!

One thing that I did notice is that you updated the Licensing information, 
which is one of the most commonly overlooked issues. So very glad that's there. 
I'll leave some inline feedback on things that I notice, but very much looking 
forward to this getting in!


> NiFi processor to parse logs using Grok patterns
> 
>
> Key: NIFI-2565
> URL: https://issues.apache.org/jira/browse/NIFI-2565
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andre
> Fix For: 1.1.0
>
>
> Following up on Ryan Ward to create a Grok capable parser
> https://mail-archives.apache.org/mod_mbox/nifi-dev/201606.mbox/%3CCADD=rnPa8nHkJbeM280=PTQ=wurtwhstm5u+7btoo9pcym2...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-2896) Perform Release Management Functions for 0.7.1

2016-10-13 Thread Joe Skora (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572029#comment-15572029
 ] 

Joe Skora commented on NIFI-2896:
-

Per email threads[1] and [2], basing the release on 40618364e70a966f9c1e425674b53b22b1fb0fb0.

[1]http://mail-archives.apache.org/mod_mbox/nifi-dev/201609.mbox/%3CCACkT4waF3f7W%3DbeJp%2BbnV-OD5D6XfVkRQpypMao1hBSkg7irzg%40mail.gmail.com%3E
[2]http://mail-archives.apache.org/mod_mbox/nifi-dev/201610.mbox/%3CCA%2BLyY57EWffhWfpJ-DYaCtLwW8%3DgJ8kX8%3DQdWtGu5a873WyN4A%40mail.gmail.com%3E

> Perform Release Management Functions for 0.7.1
> --
>
> Key: NIFI-2896
> URL: https://issues.apache.org/jira/browse/NIFI-2896
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Skora
>Assignee: Joe Skora
>  Labels: release
> Fix For: 0.7.1
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2863) A remote process group pointed to a host without the trailing "/nifi" will fail with mis-leading bulletins

2016-10-13 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-2863:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> A remote process group pointed to a host without the trailing "/nifi" will 
> fail with mis-leading bulletins
> --
>
> Key: NIFI-2863
> URL: https://issues.apache.org/jira/browse/NIFI-2863
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Joseph Percivall
>Assignee: Koji Kawamura
> Fix For: 1.1.0
>
>
> To replicate:
> 1: Set up NiFi instance on port 8080 and remote port 8081 (unsecure S2S)
> 2: create input port
> 3: create RPG pointing to "http://localhost:8080"
> This RPG will correctly get the instance name, listing of ports and port 
> status but when transmission is enabled and a flowfile is queued to be sent 
> the  following error is generated:
> "RemoteGroupPort[name=test1,target=http://localhost:8080] failed to 
> communicate with http://localhost:8080 due to 
> org.codehaus.jackson.JsonParseException: Unexpected character ('<' (code 
> 60)): expected a valid value (number, String, array, object, 'true', 'false' 
> or 'null')
>  at [Source: java.io.StringReader@44d519ce; line: 3, column: 2]"
> Looking at the logs there is this message:
> 2016-10-04 14:11:34,298 WARN [Timer-Driven Process Thread-2] 
> o.a.n.r.util.SiteToSiteRestApiClient Failed to parse Json, response=
> [stripped HTML elided: the response is the NiFi landing page titled "NiFi", 
> whose visible text reads "Did you mean: /nifi" / "You may have mistyped..."]
> 2016-10-04 14:11:34,298 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.remote.StandardRemoteGroupPort 
> RemoteGroupPort[name=test1,target=http://localhost:8080] failed to 
> communicate with http://localhost:8080 due to 
> org.codehaus.jackson.JsonParseException: Unexpected character ('<' (code 
> 60)): expected a valid value (number, String, array, object, 'true', 'false' 
> or 'null')
>  at [Source: java.io.StringReader@44d519ce; line: 3, column: 2]
> This should either be fixed (to allow without "/nifi") or explicitly 
> validated.





[jira] [Commented] (NIFI-2861) ControlRate should accept more than one flow file per execution

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571998#comment-15571998
 ] 

ASF GitHub Bot commented on NIFI-2861:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1127#discussion_r83218149
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java
 ---
@@ -381,6 +392,14 @@ public boolean tryAdd(final long value) {
 
 private class ThrottleFilter implements FlowFileFilter {
 
+private final long flowFilesPerTrigger;
+private final AtomicLong flowFilesFiltered = new AtomicLong(0L);
+
+ThrottleFilter(final String ffPerTrigger) {
+super();
+flowFilesPerTrigger = ffPerTrigger == null ? 1L : 
Long.parseLong(ffPerTrigger);
--- End diff --

Should probably be passed in an int or a long, rather than a String
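As a sketch of what the suggested change might look like: the constructor takes an int directly (no String parsing and no redundant super() call), and a plain int counter stands in for the AtomicLong, since the filter runs on a single thread. The class and method names here are hypothetical illustrations, not ControlRate's actual code.

```java
// Hypothetical revision reflecting the review comments; NiFi-independent sketch.
class ThrottleFilterSketch {
    private final int flowFilesPerTrigger;
    private int flowFilesFiltered = 0;   // plain int is enough: not shared across threads

    ThrottleFilterSketch(final int flowFilesPerTrigger) {
        // Fall back to 1 when the cap is non-positive.
        this.flowFilesPerTrigger = flowFilesPerTrigger > 0 ? flowFilesPerTrigger : 1;
    }

    // Returns true while this trigger may still accept another FlowFile.
    boolean acceptAnother() {
        return ++flowFilesFiltered <= flowFilesPerTrigger;
    }
}
```

The caller passes the already-parsed int, so a malformed property value fails at configuration time rather than inside the filter.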


> ControlRate should accept more than one flow file per execution
> ---
>
> Key: NIFI-2861
> URL: https://issues.apache.org/jira/browse/NIFI-2861
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Joe Skora
>Assignee: Joe Skora
>
> The {{ControlRate}} processor implements a {{FlowFileFilter}} that returns 
> the {{FlowFileFilter.ACCEPT_AND_TERMINATE}} result if the {{FlowFile}} fits 
> within the rate limit, effectively limiting it to one {{FlowFile}} per 
> {{ControlRate.onTrigger()}} invocation.  This is a significant bottleneck when 
> processing very large quantities of small files, making it unlikely to hit the 
> rate limits.
> It should allow multiple files, perhaps with a configurable maximum, per 
> {{ControlRate.onTrigger()}} invocation by issuing the 
> {{FlowFileFilter.ACCEPT_AND_CONTINUE}} result until the limits are reached.  
> In a preliminary test this eliminated the bottleneck.
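The accept-and-continue scheme described above can be sketched without NiFi's classes. Everything here (the enum, RateTracker, the filter signature) is an illustrative stand-in for the real FlowFileFilter API, not ControlRate's actual implementation:

```java
// Illustrative stand-ins for NiFi's FlowFileFilter result values.
enum FilterResult { ACCEPT_AND_CONTINUE, ACCEPT_AND_TERMINATE, REJECT_AND_TERMINATE }

// Minimal rate tracker: allows up to 'capacity' units per window (hypothetical).
class RateTracker {
    private final long capacity;
    private long used = 0;

    RateTracker(final long capacity) { this.capacity = capacity; }

    boolean tryAdd(final long value) {
        if (used + value > capacity) {
            return false;          // would exceed the configured rate
        }
        used += value;
        return true;
    }
}

// Accept FlowFiles until either the rate limit or a per-trigger cap is hit,
// instead of terminating after the first accepted file.
class ThrottleFilter {
    private final int maxPerTrigger;
    private int accepted = 0;

    ThrottleFilter(final int maxPerTrigger) { this.maxPerTrigger = maxPerTrigger; }

    FilterResult filter(final long size, final RateTracker tracker) {
        if (!tracker.tryAdd(size)) {
            return FilterResult.REJECT_AND_TERMINATE;   // rate limit reached
        }
        accepted++;
        return accepted < maxPerTrigger
                ? FilterResult.ACCEPT_AND_CONTINUE      // keep pulling files
                : FilterResult.ACCEPT_AND_TERMINATE;    // cap for this onTrigger()
    }
}
```

The key design point is the three-way result: keep continuing while under both limits, accept-and-terminate once the per-trigger cap is reached, and reject once the tracker refuses the next file.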





[jira] [Commented] (NIFI-2861) ControlRate should accept more than one flow file per execution

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571996#comment-15571996
 ] 

ASF GitHub Bot commented on NIFI-2861:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1127#discussion_r83220874
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java
 ---
@@ -381,6 +392,14 @@ public boolean tryAdd(final long value) {
 
 private class ThrottleFilter implements FlowFileFilter {
 
+private final long flowFilesPerTrigger;
+private final AtomicLong flowFilesFiltered = new AtomicLong(0L);
+
+ThrottleFilter(final String ffPerTrigger) {
+super();
--- End diff --

The parent class here is Object. I don't think there's a need to call 
super()







[jira] [Commented] (NIFI-2861) ControlRate should accept more than one flow file per execution

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571997#comment-15571997
 ] 

ASF GitHub Bot commented on NIFI-2861:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1127#discussion_r83217567
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java
 ---
@@ -115,6 +115,13 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .expressionLanguageSupported(false)
 .build();
+public static final PropertyDescriptor MAX_FF_PER_TRIGGER = new 
PropertyDescriptor.Builder()
--- End diff --

@jskora I'm not sure that this needs to be configurable. This is an 
implementation detail that feels a bit leaky to me. Users do not know what an 
'onTrigger() call' is. We should probably just cap it at say 1000 and not more 
than the max number of FlowFiles to transfer per 'Time Duration'.







[jira] [Commented] (NIFI-2861) ControlRate should accept more than one flow file per execution

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572000#comment-15572000
 ] 

ASF GitHub Bot commented on NIFI-2861:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1127#discussion_r83220736
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java
 ---
@@ -228,12 +238,13 @@ public void onScheduled(final ProcessContext context) 
{
 rateControlAttribute = 
context.getProperty(RATE_CONTROL_ATTRIBUTE_NAME).getValue();
 maximumRateStr = 
context.getProperty(MAX_RATE).getValue().toUpperCase();
 groupingAttributeName = 
context.getProperty(GROUPING_ATTRIBUTE_NAME).getValue();
+maxFlowFilePerTrigger = 
context.getProperty(MAX_FF_PER_TRIGGER).getValue();
--- End diff --

This should probably be defined as an int, rather than a String, and can 
then just use context.getProperty().asInteger(). But I would really prefer to remove 
this property altogether.
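For illustration, the typed-access pattern the comment refers to can be mimicked with a small stub; this PropertyContext class is hypothetical, not NiFi's ProcessContext API:

```java
import java.util.Map;

// Stub illustrating typed property access in the spirit of
// context.getProperty(...).asInteger(); not NiFi's actual API.
class PropertyContext {
    private final Map<String, String> props;

    PropertyContext(final Map<String, String> props) { this.props = props; }

    // Parse once, at read time, instead of carrying the raw String around.
    Integer asInteger(final String name) {
        final String raw = props.get(name);
        return raw == null ? null : Integer.valueOf(raw);
    }
}
```

Storing the value as an int at scheduling time means any parse failure surfaces immediately rather than on each trigger.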







[GitHub] nifi pull request #1127: NIFI-2861 ControlRate should accept more than one f...

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1127#discussion_r83218107
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java
 ---
@@ -381,6 +392,14 @@ public boolean tryAdd(final long value) {
 
 private class ThrottleFilter implements FlowFileFilter {
 
+private final long flowFilesPerTrigger;
+private final AtomicLong flowFilesFiltered = new AtomicLong(0L);
--- End diff --

This filter is not thread-safe... don't think we need an AtomicLong here. 
Can just use an int.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1127: NIFI-2861 ControlRate should accept more than one f...

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1127#discussion_r83220874
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java
 ---
@@ -381,6 +392,14 @@ public boolean tryAdd(final long value) {
 
 private class ThrottleFilter implements FlowFileFilter {
 
+private final long flowFilesPerTrigger;
+private final AtomicLong flowFilesFiltered = new AtomicLong(0L);
+
+ThrottleFilter(final String ffPerTrigger) {
+super();
--- End diff --

The parent class here is Object. I don't think there's a need to call 
super()




[GitHub] nifi pull request #1127: NIFI-2861 ControlRate should accept more than one f...

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1127#discussion_r83217567
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java
 ---
@@ -115,6 +115,13 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .expressionLanguageSupported(false)
 .build();
+public static final PropertyDescriptor MAX_FF_PER_TRIGGER = new 
PropertyDescriptor.Builder()
--- End diff --

@jskora I'm not sure that this needs to be configurable. This is an 
implementation detail that feels a bit leaky to me. Users do not know what an 
'onTrigger() call' is. We should probably just cap it at say 1000 and not more 
than the max number of FlowFiles to transfer per 'Time Duration'.




[jira] [Created] (NIFI-2896) Perform Release Management Functions for 0.7.1

2016-10-13 Thread Joe Skora (JIRA)
Joe Skora created NIFI-2896:
---

 Summary: Perform Release Management Functions for 0.7.1
 Key: NIFI-2896
 URL: https://issues.apache.org/jira/browse/NIFI-2896
 Project: Apache NiFi
  Issue Type: Task
Reporter: Joe Skora
Assignee: Joe Skora
 Fix For: 0.7.1








[jira] [Commented] (NIFI-2861) ControlRate should accept more than one flow file per execution

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571999#comment-15571999
 ] 

ASF GitHub Bot commented on NIFI-2861:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1127#discussion_r83218107
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java
 ---
@@ -381,6 +392,14 @@ public boolean tryAdd(final long value) {
 
 private class ThrottleFilter implements FlowFileFilter {
 
+private final long flowFilesPerTrigger;
+private final AtomicLong flowFilesFiltered = new AtomicLong(0L);
--- End diff --

This filter is not thread-safe... don't think we need an AtomicLong here. 
Can just use an int.







[jira] [Updated] (NIFI-2819) Improve ModifyBytes (Add expression language support)

2016-10-13 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-2819:
---
Status: Patch Available  (was: In Progress)

> Improve ModifyBytes (Add expression language support)
> -
>
> Key: NIFI-2819
> URL: https://issues.apache.org/jira/browse/NIFI-2819
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: n h
>Assignee: Matt Burgess
> Fix For: 1.1.0
>
>
> Add expression language support to "Start Offset" and "End Offset" 





[jira] [Commented] (NIFI-2819) Improve ModifyBytes (Add expression language support)

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571991#comment-15571991
 ] 

ASF GitHub Bot commented on NIFI-2819:
--

GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/1130

NIFI-2819: Added support for Expression Language in ModifyBytes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-2819

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1130.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1130


commit 0c4b9e56e1729723039339ceee4dc78e0cd74862
Author: Matt Burgess 
Date:   2016-10-13T13:50:56Z

NIFI-2819: Added support for Expression Language in ModifyBytes









[GitHub] nifi pull request #1127: NIFI-2861 ControlRate should accept more than one f...

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1127#discussion_r83220736
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java
 ---
@@ -228,12 +238,13 @@ public void onScheduled(final ProcessContext context) 
{
 rateControlAttribute = 
context.getProperty(RATE_CONTROL_ATTRIBUTE_NAME).getValue();
 maximumRateStr = 
context.getProperty(MAX_RATE).getValue().toUpperCase();
 groupingAttributeName = 
context.getProperty(GROUPING_ATTRIBUTE_NAME).getValue();
+maxFlowFilePerTrigger = 
context.getProperty(MAX_FF_PER_TRIGGER).getValue();
--- End diff --

This should probably be defined as an int, rather than a String, and can 
then just use context.getProperty().asInteger(). But I would really prefer to remove 
this property altogether.




[GitHub] nifi pull request #1127: NIFI-2861 ControlRate should accept more than one f...

2016-10-13 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1127#discussion_r83218149
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java
 ---
@@ -381,6 +392,14 @@ public boolean tryAdd(final long value) {
 
 private class ThrottleFilter implements FlowFileFilter {
 
+private final long flowFilesPerTrigger;
+private final AtomicLong flowFilesFiltered = new AtomicLong(0L);
+
+ThrottleFilter(final String ffPerTrigger) {
+super();
+flowFilesPerTrigger = ffPerTrigger == null ? 1L : 
Long.parseLong(ffPerTrigger);
--- End diff --

Should probably be passed in an int or a long, rather than a String




[GitHub] nifi pull request #1130: NIFI-2819: Added support for Expression Language i...

2016-10-13 Thread mattyb149
GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/1130

NIFI-2819: Added support for Expression Language in ModifyBytes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-2819

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1130.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1130


commit 0c4b9e56e1729723039339ceee4dc78e0cd74862
Author: Matt Burgess 
Date:   2016-10-13T13:50:56Z

NIFI-2819: Added support for Expression Language in ModifyBytes






[jira] [Commented] (NIFI-2863) A remote process group pointed to a host without the trailing "/nifi" will fail with mis-leading bulletins

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571984#comment-15571984
 ] 

ASF GitHub Bot commented on NIFI-2863:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1122







[jira] [Commented] (NIFI-2863) A remote process group pointed to a host without the trailing "/nifi" will fail with mis-leading bulletins

2016-10-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571987#comment-15571987
 ] 

ASF GitHub Bot commented on NIFI-2863:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/1122
  
Thanks @ijokarumawak! This has been merged to master.







[jira] [Commented] (NIFI-2863) A remote process group pointed to a host without the trailing "/nifi" will fail with mis-leading bulletins

2016-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571982#comment-15571982
 ] 

ASF subversion and git services commented on NIFI-2863:
---

Commit c470fae0653add23ff5ceaf04a814b98f3f612cb in nifi's branch 
refs/heads/master from [~ijokarumawak]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=c470fae ]

NIFI-2863: S2S to allow cluster URL more leniently. This closes #1122

- Consolidated the target cluster URL resolving logic into
  SiteToSiteRestApiClient's as a common method
- Changed to more descriptive error message
- Added more unit test cases
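A lenient URL-resolution step in the spirit of this fix might look like the sketch below. This is not NiFi's actual SiteToSiteRestApiClient logic; the class and method names are invented for illustration:

```java
// Illustrative only: normalize a user-supplied cluster URL (with or without a
// trailing "/nifi") to the nifi-api endpoint, per the intent of NIFI-2863.
class ClusterUrlResolver {
    static String resolveApiUrl(final String input) {
        // Drop a single trailing slash so suffix checks are uniform.
        String url = input.endsWith("/") ? input.substring(0, input.length() - 1) : input;
        if (url.endsWith("/nifi")) {
            url = url.substring(0, url.length() - "/nifi".length());
        }
        if (!url.endsWith("/nifi-api")) {
            url = url + "/nifi-api";
        }
        return url;
    }
}
```

Resolving up front means a bare host URL no longer reaches the JSON parser as an HTML landing page, which is what produced the misleading JsonParseException bulletin.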







[GitHub] nifi issue #1122: NIFI-2863: S2S to allow cluster URL more leniently

2016-10-13 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/1122
  
Thanks @ijokarumawak! This has been merged to master.




[GitHub] nifi pull request #1122: NIFI-2863: S2S to allow cluster URL more leniently

2016-10-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1122




[jira] [Assigned] (NIFI-2819) Improve ModifyBytes (Add expression language support)

2016-10-13 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-2819:
--

Assignee: Matt Burgess






[jira] [Resolved] (NIFI-1991) Zero Master Cluster: Modifying a disconnected node causes other nodes to not be able to join cluster

2016-10-13 Thread Andrew Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lim resolved NIFI-1991.
--
Resolution: Won't Fix

Closing per discussion.

> Zero Master Cluster:  Modifying a disconnected node causes other nodes to not 
> be able to join cluster
> -
>
> Key: NIFI-1991
> URL: https://issues.apache.org/jira/browse/NIFI-1991
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Andrew Lim
>
> I created a zero master cluster with 3 nodes.  All three were up and running.
> I stopped all 3 instances and then brought up just Node1.
> In the Node1 UI, I made some edits to the flow.
> I attempted but was not able to bring up Node2 and Node3 to join the cluster.
> Here is what I saw in the logs:
> 2016-06-09 14:35:47,051 WARN [main] org.apache.nifi.web.server.JettyServer 
> Failed to start web server... shutting down.
> java.lang.Exception: Unable to load flow due to: java.io.IOException: 
> org.apache.nifi.controller.UninheritableFlowException: Failed to connect node 
> to cluster because local flow is different than cluster flow.
> at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:753) 
> ~[nifi-jetty-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at org.apache.nifi.NiFi.<init>(NiFi.java:137) 
> [nifi-runtime-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at org.apache.nifi.NiFi.main(NiFi.java:227) 
> [nifi-runtime-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> Caused by: java.io.IOException: 
> org.apache.nifi.controller.UninheritableFlowException: Failed to connect node 
> to cluster because local flow is different than cluster flow.
> at 
> org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:501)
>  ~[nifi-framework-core-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:744) 
> ~[nifi-jetty-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> ... 2 common frames omitted
> Caused by: org.apache.nifi.controller.UninheritableFlowException: Failed to 
> connect node to cluster because local flow is different than cluster flow.
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:862)
>  ~[nifi-framework-core-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:497)
>  ~[nifi-framework-core-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> ... 3 common frames omitted
> Caused by: org.apache.nifi.controller.UninheritableFlowException: Proposed 
> configuration is not inheritable by the flow controller because of flow 
> differences: Found difference in Flows:
> Local Fingerprint contains additional configuration from Cluster Fingerprint: 
> eba77dac-32ad-356b-8a82-f669d046aa21eba77dac-32ad-356b-8a82-f669d046aa21org.apache.nifi.processors.standard.LogAttributeNO_VALUEAttributes
>  to IgnoreNO_VALUEAttributes to LogNO_VALUELog prefixNO_VALUEs
> at 
> org.apache.nifi.controller.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:219)
>  ~[nifi-framework-core-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1329)
>  ~[nifi-framework-core-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at 
> org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.load(StandardXMLFlowConfigurationDAO.java:75)
>  ~[nifi-framework-core-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:668)
>  ~[nifi-framework-core-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:839)
>  ~[nifi-framework-core-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
> ... 4 common frames omitted
> 2016-06-09 14:35:47,052 INFO [Thread-1] org.apache.nifi.NiFi Initiating 
> shutdown of Jetty web server...
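> 
> The failure mode in the stack trace above can be pictured with a toy sketch
> (plain Java; the class and method names here are hypothetical stand-ins, and
> NiFi's real fingerprint logic lives in StandardFlowSynchronizer and is far
> richer): a node may inherit the cluster flow only if it has no local flow
> yet, or its local fingerprint matches the cluster's. A node edited while
> disconnected ends up with extra local configuration, so the check fails.
> 
> ```java
> // Toy illustration of the flow-inheritance check that fails above.
> // Hypothetical names; NiFi's actual comparison is much more involved.
> public class FingerprintCheck {
> 
>     static class UninheritableFlowException extends RuntimeException {
>         UninheritableFlowException(String msg) { super(msg); }
>     }
> 
>     // A node can join only if it has no local flow yet, or its
>     // fingerprint exactly matches the cluster's fingerprint.
>     static void checkInheritable(String localFp, String clusterFp) {
>         if (localFp != null && !localFp.isEmpty() && !localFp.equals(clusterFp)) {
>             throw new UninheritableFlowException(
>                 "local flow is different than cluster flow");
>         }
>     }
> 
>     public static void main(String[] args) {
>         checkInheritable("", "abc");    // fresh node: may join
>         checkInheritable("abc", "abc"); // matching flows: may join
>         try {
>             // flow edited while disconnected: extra local config
>             checkInheritable("abc+LogAttribute", "abc");
>         } catch (UninheritableFlowException e) {
>             System.out.println("rejected: " + e.getMessage());
>         }
>     }
> }
> ```
> 
> This also explains the "Won't Fix" resolution: the rejection is by design,
> and the supported remedy is the cluster-management path rather than merging
> divergent flows.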



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi-minifi-cpp issue #18: MINIFI-34 - attempt to progress the CMake environ...

2016-10-13 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/18
  
Yeah, seems like we have a good start.  Will do some testing around on a 
few systems and get it merged in today.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1991) Zero Master Cluster: Modifying a disconnected node causes other nodes to not be able to join cluster

2016-10-13 Thread Andrew Lim (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571948#comment-15571948
 ] 

Andrew Lim commented on NIFI-1991:
--

[~markap14], thanks for the comments.  I agree that what has already been put 
in place for cluster management is sufficient.

> Zero Master Cluster:  Modifying a disconnected node causes other nodes to not 
> be able to join cluster
> -
>
> Key: NIFI-1991
> URL: https://issues.apache.org/jira/browse/NIFI-1991
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Andrew Lim
>
> [Issue description and stack trace identical to the original report above; 
> quoted text omitted.]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

