[GitHub] nifi issue #487: NIFI-1956 added 'keyboard-interactive' option to SFTPTransf...

2016-06-06 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/487
  
@olegz 
Just to be sure before merging it in: do you plan to add a unit test?
My guess is that it is not possible to simulate this server-side behavior,
but I want to confirm.




[GitHub] nifi pull request #486: NIFI-1929: Improvements for PutHDFS attribute handli...

2016-06-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/486




[GitHub] nifi issue #486: NIFI-1929: Improvements for PutHDFS attribute handling

2016-06-06 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/486
  
+1 @mattyb149
Merged into master and 0.x.




[GitHub] nifi issue #487: NIFI-1956 added 'keyboard-interactive' option to SFTPTransf...

2016-06-06 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/487
  
Well, while you can simulate the server behavior by changing the SSH server
configs locally, it would indeed be impossible to ensure consistency on all
machines where the build may be running. So the only thing I can test is that
the option is set, which I should probably do.
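
For context, the option in question ultimately maps onto JSch's preferred-authentications setting. A minimal sketch of what asserting "that it is set" could amount to (placeholder host/user values, and not the actual SFTPTransfer code):
```
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class KeyboardInteractiveSketch {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        // Placeholder host/user; no connection is made in this sketch.
        Session session = jsch.getSession("user", "sftp.example.com", 22);
        // This is the kind of setting the new SFTPTransfer option would drive.
        session.setConfig("PreferredAuthentications", "publickey,keyboard-interactive,password");
        // A unit test could stop at verifying the configured value...
        String configured = session.getConfig("PreferredAuthentications");
        if (!configured.contains("keyboard-interactive")) {
            throw new AssertionError("keyboard-interactive was not configured");
        }
        // ...whereas exercising the auth exchange itself needs a live SSH server.
    }
}
```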




[GitHub] nifi issue #397: NIFI-1815

2016-06-06 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/397
  
Ok, while all is good on Windows, I can't seem to have any success building
those _.so_ files on OS X. Normally I would not worry about it that much, but
the fact that the Tesseract distribution includes DLLs inside the JAR means
that for "all other" OSes such native libraries will come from outside and
need to be known to the processor, so we would probably need another property
and should definitely test with at least one non-Windows system.
So, I'll keep on trying (when I get a chance) to get/build those native
libraries, but I could use some help here as well.
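
To make the "another property" idea concrete, such a property could be declared roughly along these lines in NiFi terms (a hedged sketch; the property name, description, and usage are hypothetical and not part of this PR):
```
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

public class TesseractNativeLibSketch {
    // Hypothetical property letting non-Windows users point at locally built
    // Tesseract native libraries instead of the DLLs bundled in the JAR.
    static final PropertyDescriptor NATIVE_LIB_PATH = new PropertyDescriptor.Builder()
            .name("Tesseract Native Library Path")
            .description("Directory containing the Tesseract native libraries "
                    + "(.so/.dylib) on non-Windows systems.")
            .required(false)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();

    public static void main(String[] args) {
        // The processor would read this property and, for example, hand the
        // directory to the OCR library's native loader.
        System.out.println(NATIVE_LIB_PATH.getName());
    }
}
```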




[GitHub] nifi pull request #493: NIFI-1037 Created processor that handles HDFS' inoti...

2016-06-06 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/493#discussion_r65878170
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/inotify/GetHDFSEvents.java
 ---
@@ -0,0 +1,285 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.hadoop.inotify;
+
+
+import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
+import org.apache.hadoop.hdfs.client.HdfsAdmin;
+import org.apache.hadoop.hdfs.inotify.Event;
+import org.apache.hadoop.hdfs.inotify.EventBatch;
+import org.apache.hadoop.hdfs.inotify.MissingEventsException;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.TriggerWhenEmpty;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.hadoop.AbstractHadoopProcessor;
+import org.apache.nifi.processors.hadoop.FetchHDFS;
+import org.apache.nifi.processors.hadoop.GetHDFS;
+import org.apache.nifi.processors.hadoop.ListHDFS;
+import org.apache.nifi.processors.hadoop.PutHDFS;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+
+@TriggerSerially
+@TriggerWhenEmpty
+@Tags({"hadoop", "events", "inotify", "notifications", "filesystem"})
+@WritesAttributes({
+@WritesAttribute(attribute = EventAttributes.MIME_TYPE, 
description = "This is always application/json."),
+@WritesAttribute(attribute = EventAttributes.EVENT_TYPE, 
description = "This will specify the specific HDFS notification event type. 
Currently there are six types of events " +
+"(append, close, create, metadata, rename, and unlink)."),
+@WritesAttribute(attribute = EventAttributes.EVENT_PATH, 
description = "The specific path that the event is tied to.")
+})
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@CapabilityDescription("This processor polls the notification events 
provided by the HdfsAdmin API. Since this uses the HdfsAdmin APIs it is 
required to run as an HDFS super user. Currently there " +
+"are six types of events (append, close, create, metadata, rename, 
and unlink). Please see org.apache.hadoop.hdfs.inotify.Event documentation for 
full explanations of each event. " +
+"This processor will poll for new events based on a defined 
duration. For each event received a new flow file will be created with the 
expected attributes and the event itself serialized " +
+"to JSON and written to the flow file's content. For example, if 
event.type is APPEND then the content of the flow file will contain a JSON file 
containing the information about the " +
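
For readers unfamiliar with the API this diff builds on, the HDFS inotify stream that the processor polls can be exercised on its own roughly as follows (a minimal sketch against the Hadoop client API, with a placeholder NameNode URI; this is not the PR's code):
```
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.Event;
import org.apache.hadoop.hdfs.inotify.EventBatch;

public class InotifySketch {
    public static void main(String[] args) throws Exception {
        // Must run as an HDFS superuser, as the processor description notes.
        HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://namenode:8020"), new Configuration());
        DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
        EventBatch batch = stream.poll();   // returns null when no events are pending
        if (batch != null) {
            for (Event event : batch.getEvents()) {
                System.out.println(event.getEventType() + " txid=" + batch.getTxid());
            }
        }
    }
}
```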
  

[GitHub] nifi pull request #493: NIFI-1037 Created processor that handles HDFS' inoti...

2016-06-06 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/493#discussion_r65878313
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/inotify/GetHDFSEvents.java
 ---

[GitHub] nifi pull request #493: NIFI-1037 Created processor that handles HDFS' inoti...

2016-06-06 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/493#discussion_r65878804
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/inotify/GetHDFSEvents.java
 ---

[GitHub] nifi pull request #493: NIFI-1037 Created processor that handles HDFS' inoti...

2016-06-06 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/493#discussion_r65879551
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/inotify/EventAttributes.java
 ---
@@ -0,0 +1,23 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.hadoop.inotify;
+
+final class EventAttributes {
+static final String EVENT_PATH = "hdfs.inotify.event.path";
+static final String EVENT_TYPE = "hdfs.inotify.event.type";
+static final String MIME_TYPE = "mime.type";
--- End diff --

I think you can use the existing CoreAttributes constant for the MIME type
here, can't you?
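
For reference, the existing constant being suggested here is used like this (a small sketch assuming the standard org.apache.nifi.flowfile.attributes.CoreAttributes enum):
```
import java.util.HashMap;
import java.util.Map;
import org.apache.nifi.flowfile.attributes.CoreAttributes;

public class MimeTypeAttributeSketch {
    public static void main(String[] args) {
        Map<String, String> attributes = new HashMap<>();
        // CoreAttributes.MIME_TYPE.key() resolves to "mime.type", so the local
        // EventAttributes.MIME_TYPE constant duplicates an existing key.
        attributes.put(CoreAttributes.MIME_TYPE.key(), "application/json");
        System.out.println(attributes);
    }
}
```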




[GitHub] nifi issue #397: NIFI-1815

2016-06-06 Thread jdye64
Github user jdye64 commented on the issue:

https://github.com/apache/nifi/pull/397
  
Olegz - I'll certainly add the extra checking. As for installation on
non-Windows, I did the development on OS X and simply ran "brew install
tesseract" rather than building from source.

Sent from my iPhone

> On Jun 6, 2016, at 7:47 AM, Oleg Zhurakousky wrote:
> 
> Ok, while all is good on Windows I can't seem to have any success 
building those .so files on OSx. Normally I would not worry about it that much 
but given that Tesseract distribution includes DLLs inside the JAR means that 
for "all other" OS such native libraries will come from outside and need to be 
known to the processor, so we probably would need another property and 
definitely test with at least one non-Win system.
> So, I'll keep on trying (when I get a chance) to get/build those native 
libraries, but could use some help here as well
> 





[GitHub] nifi issue #397: NIFI-1815

2016-06-06 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/397
  
True, and I did the same ('brew install tesseract') and got this
```
Trying a mirror...
==> Downloading 
ftp://ftp.simplesystems.org/pub/libpng/png/src/libpng16/libpng-1.6.17.tar.xz

curl: (78) RETR response: 550
Error: Failed to download resource "libpng"
Download failed: 
ftp://ftp.simplesystems.org/pub/libpng/png/src/libpng16/libpng-1.6.17.tar.xz
```
So, I'll keep on trying 




[GitHub] nifi pull request #497: NIFI-1857: HTTPS Site-to-Site

2016-06-06 Thread ijokarumawak
GitHub user ijokarumawak opened a pull request:

https://github.com/apache/nifi/pull/497

NIFI-1857: HTTPS Site-to-Site

Hi,

I've squashed #423 with the latest master to add proxy auth and better
connection management. However, I haven't tested it against the latest
clustering model, which doesn't use the NCM.

- Enable HTTP(S) for Site-to-Site communication
- Support an HTTP proxy between the local and remote NiFi
- Support BASIC and DIGEST auth with the proxy server
- Provide a two-phase-style commit, same as the existing socket version
- [WIP] Testing against the latest cluster environment (without NCM) has not been done yet

Details are also written in [NiFi Feature 
Proposals](https://cwiki.apache.org/confluence/x/v628Aw).
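
As a rough illustration of what the HTTP(S) transport means on the client side, a Site-to-Site client could opt into it along these lines (a hedged sketch based on the SiteToSiteClient builder; the URL and port name are placeholders, and the exact availability of the HTTP option depends on the release this PR lands in):
```
import org.apache.nifi.remote.client.SiteToSiteClient;
import org.apache.nifi.remote.protocol.SiteToSiteTransportProtocol;

public class HttpSiteToSiteSketch {
    public static void main(String[] args) {
        // Building the client only assembles configuration; no connection yet.
        SiteToSiteClient client = new SiteToSiteClient.Builder()
                .url("https://remote-nifi.example.com:8443/nifi")
                .portName("input-port")
                .transportProtocol(SiteToSiteTransportProtocol.HTTP)
                .build();
        System.out.println(client);
    }
}
```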

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijokarumawak/nifi nifi-1857-ss

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/497.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #497


commit a3ddb24bbffdba31df5861439bf3ab03fcfc7bc4
Author: Koji Kawamura 
Date:   2016-06-06T13:19:26Z

NIFI-1857: HTTPS Site-to-Site

- Enable HTTP(S) for Site-to-Site communication
- Support HTTP Proxy in the middle of local and remote NiFi
- Support BASIC and DIGEST auth with Proxy Server
- Provide 2-phase style commit same as existing socket version
- [WIP] Test with the latest cluster env (without NCM) hasn't tested yet






[GitHub] nifi pull request #423: [WIP] NIFI-1857 Added HTTP(S) support for Site-to-Si...

2016-06-06 Thread ijokarumawak
Github user ijokarumawak closed the pull request at:

https://github.com/apache/nifi/pull/423




[GitHub] nifi issue #423: [WIP] NIFI-1857 Added HTTP(S) support for Site-to-Site.

2016-06-06 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/423
  
Closing this PR since I've submitted the updated version as #497.




[GitHub] nifi issue #487: NIFI-1956 added 'keyboard-interactive' option to SFTPTransf...

2016-06-06 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/487
  
@pvillard31 gave it some time and it looks like, with regard to testing, this
one will be one of those exceptions. In other words, there is no easy way to
reliably write a test case to validate that this property is set, since it is
set in the private _getChannel(..)_ method. And even if I used reflection to
invoke it, it would still require a running SSH server, which may not be
available on other build environments.
Now, I don't want to say that it's completely impossible, but at the moment
I am willing to say it is not worth the effort. Let me know what you think.




[GitHub] nifi issue #494: Added details about data size in description.

2016-06-06 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/494
  
Merging.
@thadguidry for the future, please raise a JIRA prior to submitting a PR
(regardless of how small/trivial the issue may be).




[GitHub] nifi pull request #494: Added details about data size in description.

2016-06-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/494




[GitHub] nifi issue #324: NIFI-1668 modified TestProcessorLifecycle to ensure FlowCon...

2016-06-06 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/324
  
@JPercivall comments are addressed, so please take a look




[GitHub] nifi issue #468: NIFI-1925: Fixed typo in error message

2016-06-06 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/468
  
merging




[GitHub] nifi pull request #468: NIFI-1925: Fixed typo in error message

2016-06-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/468




[GitHub] nifi pull request #498: NIFI-1973 Allow ExecuteSQL to use flow file content ...

2016-06-06 Thread mattyb149
GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/498

NIFI-1973 Allow ExecuteSQL to use flow file content as SQL query



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-1973

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/498.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #498


commit 5ae16ec261428d19acd20acc67c25f414146854a
Author: Matt Burgess 
Date:   2016-06-06T15:19:32Z

NIFI-1973 Allow ExecuteSQL to use flow file content as SQL query






[GitHub] nifi pull request #499: NIFI-1052: Added "Ghost" Processors, Reporting Tasks...

2016-06-06 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/499

NIFI-1052: Added "Ghost" Processors, Reporting Tasks, Controller Services

If we try to create a component for which the NAR is missing, we previously
would throw an Exception that would result in NiFi not starting up. We changed
this so that we instead create a "Ghost" implementation that will be invalid
and explain that the component could not be created. This allows NiFi at least
to start, so that users can continue to use the NiFi instance.
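
The "Ghost" idea described above boils down to a stand-in component that can always be instantiated but can never become valid; a minimal sketch of the pattern (not the actual implementation in this PR):
```
import java.util.Collection;
import java.util.Collections;
import org.apache.nifi.components.ValidationContext;
import org.apache.nifi.components.ValidationResult;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;

public class MissingNarProcessorSketch extends AbstractProcessor {

    @Override
    protected Collection<ValidationResult> customValidate(ValidationContext context) {
        // Always invalid, with an explanation instead of a startup failure.
        return Collections.singleton(new ValidationResult.Builder()
                .subject("Missing Processor")
                .valid(false)
                .explanation("The NAR providing this component is not available; "
                        + "restore it to the lib/ directory to use this processor.")
                .build());
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        // Never runs, because the component can never become valid.
    }
}
```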

This ticket also includes fixes to the ReportingTaskResource, as those were 
necessary to test this.

Note that if a component is missing, all properties are marked as 
'sensitive' simply because we don't know whether or not the property truly is 
sensitive, and it is better to show the value as sensitive than to assume that 
it is not. Unfortunately, this means that the actual property value can't be 
seen unless the correct component is restored to the lib/ directory.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1052

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/499.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #499


commit d86fd959f3ba20facdfc207b80da5fb48feaf379
Author: Mark Payne 
Date:   2016-06-03T23:41:43Z

NIFI-1052: Added Ghost Processors, Ghost Reporting Tasks, Ghost Controller 
Services






[GitHub] nifi issue #494: Added details about data size in description.

2016-06-06 Thread thadguidry
Github user thadguidry commented on the issue:

https://github.com/apache/nifi/pull/494
  
@olegz sure




Re: NiFi Filenames too long

2016-06-06 Thread Roksolana
I used to have similar problems too, but after using "long path tool"
everything was solved. Try this software and you would be glad you did.






Re: Integrate with Microsoft Azure

2016-06-06 Thread Aldrin Piri
Brig,

Nothing is currently on the roadmap for a specific release, but there are
two issues that have some activity on JIRA and a working PR.

https://issues.apache.org/jira/browse/NIFI-1922

Please feel free to add on with any additional details or cases that are
not included.

On Wed, May 25, 2016 at 1:25 PM, Brig Lamoreaux <
brig.lamore...@microsoft.com> wrote:

> Hi Team,
>
>
>
> I’m trying to put a file to Azure HDInsight using the PutHDFS command.
> However, there seems to be an issue with the WASB. Do you know if this is
> on the roadmap to support WASB?
>
>
>
> Thanks,
>
> Brig Lamoreaux
>
> Data Solution Architect
>
> US Desert/Mountain Tempe


[GitHub] nifi pull request #500: NIFI 1922

2016-06-06 Thread atilcock
GitHub user atilcock opened a pull request:

https://github.com/apache/nifi/pull/500

NIFI 1922

NIFI-1922: Added dependencies for the HDInsight WASB file system format.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/atilcock/nifi NIFI-1922

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/500.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #500


commit e38f6b69b45470d0768a9427c6a63366c6263361
Author: Alex Tilcock 
Date:   2016-06-05T06:10:05Z

Updated to include dependencies for HDInsight wasb file system format

commit 9db2f5443bbc6ccf0b27a9a79c5c7dcb5aa7a681
Author: Alex Tilcock 
Date:   2016-06-06T16:47:31Z

NIFI-1922 : added dependency support for HDInsight plus patch file

commit 6b958adac08f0f91cd4abe5befe91e70fa29b213
Author: Alex Tilcock 
Date:   2016-06-06T16:47:36Z

Merge remote-tracking branch 'apache/master' into NIFI-1922






[GitHub] nifi issue #495: NIFI-1968 ExtractHL7Attributes is squashing empty component...

2016-06-06 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/495
  
Reviewing...




[GitHub] nifi issue #495: NIFI-1968 ExtractHL7Attributes is squashing empty component...

2016-06-06 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/495
  
+1 LGTM. Built and ran the tests, also ran with a NiFi flow and verified that
the attributes with empty segment components are displayed correctly.

Mind squashing your commits? Also, I was having trouble applying the patch; I
may need to intervene manually, but I will keep you as the author of the
commit(s).




[GitHub] nifi issue #487: NIFI-1956 added 'keyboard-interactive' option to SFTPTransf...

2016-06-06 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/487
  
Agree. Merging...




[GitHub] nifi pull request #487: NIFI-1956 added 'keyboard-interactive' option to SFT...

2016-06-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/487




[GitHub] nifi issue #324: NIFI-1668 modified TestProcessorLifecycle to ensure FlowCon...

2016-06-06 Thread JPercivall
Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/324
  
Looks good +1

Will merge into master and 0.x




[GitHub] nifi pull request #324: NIFI-1668 modified TestProcessorLifecycle to ensure ...

2016-06-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/324




[GitHub] nifi pull request #493: NIFI-1037 Created processor that handles HDFS' inoti...

2016-06-06 Thread jjmeyer0
Github user jjmeyer0 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/493#discussion_r65942690
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/inotify/GetHDFSEvents.java
 ---

[GitHub] nifi pull request #493: NIFI-1037 Created processor that handles HDFS' inoti...

2016-06-06 Thread jjmeyer0
Github user jjmeyer0 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/493#discussion_r65942750
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/inotify/GetHDFSEvents.java
 ---

[GitHub] nifi pull request #493: NIFI-1037 Created processor that handles HDFS' inoti...

2016-06-06 Thread jjmeyer0
Github user jjmeyer0 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/493#discussion_r65943519
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/inotify/EventAttributes.java
 ---
@@ -0,0 +1,23 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.hadoop.inotify;
+
+final class EventAttributes {
+static final String EVENT_PATH = "hdfs.inotify.event.path";
+static final String EVENT_TYPE = "hdfs.inotify.event.type";
+static final String MIME_TYPE = "mime.type";
--- End diff --

The reason I did this is so that I wouldn't have to hardcode a string in
the WritesAttribute annotation and then use both CoreAttributes and
EventAttributes when writing said attributes. I thought it would be simpler to
consolidate them into one class. If it is preferred to use CoreAttributes, I
can switch mime.type to do so.




[GitHub] nifi pull request #501: NIFI-1974 - initial commit variable registry for cus...

2016-06-06 Thread YolandaMDavis
GitHub user YolandaMDavis opened a pull request:

https://github.com/apache/nifi/pull/501

NIFI-1974 - initial commit variable registry for custom property support

This is an initial commit for NIFI-1974, which will support making custom
property files available for use in the Expression Language in NiFi.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/YolandaMDavis/nifi NIFI-1974

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/501.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #501


commit 8052b7adea612dd91c999368a8e55c3622d30035
Author: Yolanda M. Davis 
Date:   2016-06-06T01:17:37Z

NIFI-1974 - initial commit variable registry for custom property support






[GitHub] nifi issue #495: NIFI-1968 ExtractHL7Attributes is squashing empty component...

2016-06-06 Thread jfrazee
Github user jfrazee commented on the issue:

https://github.com/apache/nifi/pull/495
  
@mattyb149 All squashed. The original was off of 0.x and the squashed
commits are rebased onto the last few days' changes, so I think it should be
clean. Not sure why it wouldn't be.




[GitHub] nifi pull request #493: NIFI-1037 Created processor that handles HDFS' inoti...

2016-06-06 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/493#discussion_r65957598
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/inotify/EventAttributes.java
 ---
@@ -0,0 +1,23 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.hadoop.inotify;
+
+final class EventAttributes {
+static final String EVENT_PATH = "hdfs.inotify.event.path";
+static final String EVENT_TYPE = "hdfs.inotify.event.type";
+static final String MIME_TYPE = "mime.type";
--- End diff --

I see your point. I don't really have an opinion on what is best; maybe
someone else has some thoughts?
(I was mentioning it mainly because, if I look at the IdentifyMimeType
processor, there is "mime.type" in the annotation and CoreAttributes is used.)




[GitHub] nifi pull request #493: NIFI-1037 Created processor that handles HDFS' inoti...

2016-06-06 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/493#discussion_r65961941
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/inotify/GetHDFSEvents.java
 ---

[GitHub] nifi pull request #493: NIFI-1037 Created processor that handles HDFS' inoti...

2016-06-06 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/493#discussion_r65966688
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/inotify/GetHDFSEvents.java
 ---
@@ -0,0 +1,285 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.hadoop.inotify;
+
+
+import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
+import org.apache.hadoop.hdfs.client.HdfsAdmin;
+import org.apache.hadoop.hdfs.inotify.Event;
+import org.apache.hadoop.hdfs.inotify.EventBatch;
+import org.apache.hadoop.hdfs.inotify.MissingEventsException;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.TriggerWhenEmpty;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.hadoop.AbstractHadoopProcessor;
+import org.apache.nifi.processors.hadoop.FetchHDFS;
+import org.apache.nifi.processors.hadoop.GetHDFS;
+import org.apache.nifi.processors.hadoop.ListHDFS;
+import org.apache.nifi.processors.hadoop.PutHDFS;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+
+@TriggerSerially
+@TriggerWhenEmpty
+@Tags({"hadoop", "events", "inotify", "notifications", "filesystem"})
+@WritesAttributes({
+@WritesAttribute(attribute = EventAttributes.MIME_TYPE, 
description = "This is always application/json."),
+@WritesAttribute(attribute = EventAttributes.EVENT_TYPE, 
description = "This will specify the specific HDFS notification event type. 
Currently there are six types of events " +
+"(append, close, create, metadata, rename, and unlink)."),
+@WritesAttribute(attribute = EventAttributes.EVENT_PATH, 
description = "The specific path that the event is tied to.")
+})
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@CapabilityDescription("This processor polls the notification events 
provided by the HdfsAdmin API. Since this uses the HdfsAdmin APIs it is 
required to run as an HDFS super user. Currently there " +
+"are six types of events (append, close, create, metadata, rename, 
and unlink). Please see org.apache.hadoop.hdfs.inotify.Event documentation for 
full explanations of each event. " +
+"This processor will poll for new events based on a defined 
duration. For each event received a new flow file will be created with the 
expected attributes and the event itself serialized " +
+"to JSON and written to the flow file's content. For example, if 
event.type is APPEND then the content of the flow file will contain a JSON file 
containing the information about the " +
  

[GitHub] nifi pull request #493: NIFI-1037 Created processor that handles HDFS' inoti...

2016-06-06 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/493#discussion_r65967740
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/inotify/GetHDFSEvents.java
 ---
@@ -0,0 +1,285 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.hadoop.inotify;
+
+
+import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
+import org.apache.hadoop.hdfs.client.HdfsAdmin;
+import org.apache.hadoop.hdfs.inotify.Event;
+import org.apache.hadoop.hdfs.inotify.EventBatch;
+import org.apache.hadoop.hdfs.inotify.MissingEventsException;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.TriggerWhenEmpty;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.hadoop.AbstractHadoopProcessor;
+import org.apache.nifi.processors.hadoop.FetchHDFS;
+import org.apache.nifi.processors.hadoop.GetHDFS;
+import org.apache.nifi.processors.hadoop.ListHDFS;
+import org.apache.nifi.processors.hadoop.PutHDFS;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+
+@TriggerSerially
+@TriggerWhenEmpty
+@Tags({"hadoop", "events", "inotify", "notifications", "filesystem"})
+@WritesAttributes({
+@WritesAttribute(attribute = EventAttributes.MIME_TYPE, 
description = "This is always application/json."),
+@WritesAttribute(attribute = EventAttributes.EVENT_TYPE, 
description = "This will specify the specific HDFS notification event type. 
Currently there are six types of events " +
+"(append, close, create, metadata, rename, and unlink)."),
+@WritesAttribute(attribute = EventAttributes.EVENT_PATH, 
description = "The specific path that the event is tied to.")
+})
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@CapabilityDescription("This processor polls the notification events 
provided by the HdfsAdmin API. Since this uses the HdfsAdmin APIs it is 
required to run as an HDFS super user. Currently there " +
+"are six types of events (append, close, create, metadata, rename, 
and unlink). Please see org.apache.hadoop.hdfs.inotify.Event documentation for 
full explanations of each event. " +
+"This processor will poll for new events based on a defined 
duration. For each event received a new flow file will be created with the 
expected attributes and the event itself serialized " +
+"to JSON and written to the flow file's content. For example, if 
event.type is APPEND then the content of the flow file will contain a JSON file 
containing the information about the " +
  

[GitHub] nifi issue #493: NIFI-1037 Created processor that handles HDFS' inotify even...

2016-06-06 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/493
  
Once comments are addressed, I'll have another look and give it a try but 
it looks good to me. It's a nice feature!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
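
As background for the GetHDFSEvents discussion above, a minimal, self-contained sketch (not the PR's code) of the HdfsAdmin inotify polling that the capability description refers to. The NameNode URI and the one-second poll timeout are placeholder assumptions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.Event;
import org.apache.hadoop.hdfs.inotify.EventBatch;
import org.codehaus.jackson.map.ObjectMapper;

import java.net.URI;
import java.util.concurrent.TimeUnit;

public class InotifyPollSketch {
    public static void main(String[] args) throws Exception {
        // Assumes an HDFS super user and a reachable NameNode; the URI is a placeholder.
        HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://namenode:8020"), new Configuration());
        DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
        ObjectMapper mapper = new ObjectMapper();

        // Poll for a batch of events, as the processor's capability description outlines.
        EventBatch batch = stream.poll(1, TimeUnit.SECONDS);
        if (batch != null) {
            for (Event event : batch.getEvents()) {
                // One JSON document per event; the processor writes this to the flow file content.
                System.out.println(event.getEventType() + " -> " + mapper.writeValueAsString(event));
            }
        }
    }
}
```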


Re: Apache NiFi 0.7.0 Release date ?

2016-06-06 Thread Ryan H
This is old enough that I'm sure it will make 0.7.0.. Just to give a shout
though:
"NIFI-1686 - NiFi is unable to populate over 1/4 of AMQP properties from
flow properties." (https://github.com/apache/nifi/pull/305)

NIFI 0.6.1 throws some nice stacktraces for this..

I think it barely missed the 0.6.1 release..

https://github.com/apache/nifi/commit/e02c79975ed8db69e63d96a28e81db08bc869e54#diff-3e956d8910c69a1df119adf256b3e1ce

Ryan


On Sun, Jun 5, 2016 at 5:05 PM, Tony Kurc  wrote:

> Joe,
> Please leave the cookie code that I'm reviewing (NIFI-1937) in 0.7.0. I'm
> almost done testing.
>
> On Sun, Jun 5, 2016 at 4:13 PM, Joe Witt  wrote:
>
> > Team
> >
> > Several folks have asked for an 070 release.  However there are quite a
> lot
> > of tickets hanging out there.  I plan to move them out to allow for a
> > release to occur.  If there are critical items please advise.
> >
> > Thanks
> > Joe
> > On Jun 4, 2016 4:13 PM, "idioma"  wrote:
> >
> > > Joe,
> > > I was actually going to post the very same question when I have found
> > this
> > > one. I am personally interested in the following features:
> > >
> > > JSON-to-JSON Schema Converter Editor
> > >   ;
> > > Transform JOLT Processor   ;
> > > Add replaceFirst method to expression language
> > > 
> > >
> > > Any idea about the timeline?
> > >
> > > Thank you,
> > >
> > > Regards,
> > >
> > > I.
> > >
> > >
> > >
> > > --
> > > View this message in context:
> > >
> >
> http://apache-nifi-developer-list.39713.n7.nabble.com/Apache-NiFi-0-7-0-Release-date-tp10720p11056.html
> > > Sent from the Apache NiFi Developer List mailing list archive at
> > > Nabble.com.
> > >
> >
>


Re: Apache NiFi 0.7.0 Release date ?

2016-06-06 Thread Michael Moser
PR #305 was merged into 0.x and master branches, so I updated NIFI-1686 to
reflect its Fix Version as 1.0.0, 0.7.0

-- Mike


On Mon, Jun 6, 2016 at 5:08 PM, Ryan H  wrote:

> This is old enough that I'm sure it will make 0.7.0.. Just to give a shout
> though:
> "NIFI-1686 - NiFi is unable to populate over 1/4 of AMQP properties from
> flow properties." (https://github.com/apache/nifi/pull/305)
>
> NIFI 0.6.1 throws some nice stacktraces for this..
>
> I think it barely missed the 0.6.1 release..
>
>
> https://github.com/apache/nifi/commit/e02c79975ed8db69e63d96a28e81db08bc869e54#diff-3e956d8910c69a1df119adf256b3e1ce
>
> Ryan
>
>
> On Sun, Jun 5, 2016 at 5:05 PM, Tony Kurc  wrote:
>
> > Joe,
> > Please leave the cookie code that I'm reviewing (NIFI-1937) in 0.7.0. I'm
> > almost done testing.
> >
> > On Sun, Jun 5, 2016 at 4:13 PM, Joe Witt  wrote:
> >
> > > Team
> > >
> > > Several folks have asked for an 070 release.  However there are quite a
> > lot
> > > of tickets hanging out there.  I plan to move them out to allow for a
> > > release to occur.  If there are critical items please advise.
> > >
> > > Thanks
> > > Joe
> > > On Jun 4, 2016 4:13 PM, "idioma"  wrote:
> > >
> > > > Joe,
> > > > I was actually going to post the very same question when I have found
> > > this
> > > > one. I am personally interested in the following features:
> > > >
> > > > JSON-to-JSON Schema Converter Editor
> > > >   ;
> > > > Transform JOLT Processor 
> ;
> > > > Add replaceFirst method to expression language
> > > > 
> > > >
> > > > Any idea about the timeline?
> > > >
> > > > Thank you,
> > > >
> > > > Regards,
> > > >
> > > > I.
> > > >
> > > >
> > > >
> > > > --
> > > > View this message in context:
> > > >
> > >
> >
> http://apache-nifi-developer-list.39713.n7.nabble.com/Apache-NiFi-0-7-0-Release-date-tp10720p11056.html
> > > > Sent from the Apache NiFi Developer List mailing list archive at
> > > > Nabble.com.
> > > >
> > >
> >
>


[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-06 Thread mans2singh
GitHub user mans2singh opened a pull request:

https://github.com/apache/nifi/pull/502

Nifi-1972 Apache Ignite Put Cache Processor

NiFi Put Ignite Cache processor, which streams data into an Ignite cache.
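
A minimal sketch (not the PR's code) of what streaming entries into an Ignite cache with IgniteDataStreamer looks like, which a put-cache processor would typically wrap. The cache name, key, and value are placeholders; allowOverwrite mirrors the "allow overwrite" property mentioned in the commit log below.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class IgniteStreamSketch {
    public static void main(String[] args) {
        // Start (or connect to) a local Ignite node with default configuration.
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("flowFileCache"); // placeholder cache name
            try (IgniteDataStreamer<String, byte[]> streamer = ignite.dataStreamer("flowFileCache")) {
                streamer.allowOverwrite(true); // mirrors the "allow overwrite" property
                streamer.addData("flowfile-uuid-1", "example content".getBytes()); // placeholder key/value
            }
        }
    }
}
```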

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mans2singh/nifi Nifi-1972

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/502.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #502


commit 85e199bf06affea5d88f6f959fbc24a93a13f436
Author: mans2singh 
Date:   2016-06-03T01:51:49Z

Nifi-1972 First cut for cache
Nifi-1972 Updated code and added test cases
Nifi-1972 Added javadocs
Nifi-1972 Added integration test and checkstyle corrections
Nifi-1972 Logging and integration test correction
Nifi-1972 Added allow overwrite and test cases




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #495: NIFI-1968 ExtractHL7Attributes is squashing empty component...

2016-06-06 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/495
  
Merged this to 0.x and master, but forgot the magic string to close the PR, 
do you mind closing?



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #492: NIFI-1975 - Processor for parsing evtx files

2016-06-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/492#discussion_r66000380
  
--- Diff: 
nifi-nar-bundles/nifi-evtx-bundle/nifi-evtx-nar/src/main/resources/META-INF/NOTICE
 ---
@@ -0,0 +1,36 @@
+nifi-evtx-nar
+Copyright 2016 The Apache Software Foundation
+
+This includes derived works from the Apache Software License V2 library 
python-evtx (https://github.com/williballenthin/python-evtx)
+Copyright 2012, 2013 Willi Ballenthin william.ballent...@mandiant.com
+while at Mandiant http://www.mandiant.com
+The derived work is adapted from Evtx/Evtx.py, Evtx/BinaryParser.py, 
Evtx/Nodes.py, Evtx/Views.py and can be found in the 
org.apache.nifi.processors.evtx.parser package.
+
--- End diff --

Seems comprehensive, thanks! I will defer to @joewitt if this is sufficient


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #492: NIFI-1975 - Processor for parsing evtx files

2016-06-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/492#discussion_r66000431
  
--- Diff: nifi-nar-bundles/nifi-evtx-bundle/nifi-evtx-processors/pom.xml ---
@@ -0,0 +1,68 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <artifactId>nifi-evtx-bundle</artifactId>
+        <groupId>org.apache.nifi</groupId>
+        <version>1.0.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>nifi-evtx-processors</artifactId>
+    <packaging>jar</packaging>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-properties</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-processor-utils</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.google.guava</groupId>
+            <artifactId>guava</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <scope>test</scope>
--- End diff --

Nit pick if there are other comments; group test dependencies together at 
the bottom


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #492: NIFI-1975 - Processor for parsing evtx files

2016-06-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/492#discussion_r66000567
  
--- Diff: 
nifi-nar-bundles/nifi-evtx-bundle/nifi-evtx-processors/src/main/java/org/apache/nifi/processors/evtx/ParseEvtx.java
 ---
@@ -0,0 +1,353 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.evtx;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.net.MediaType;
+import com.google.common.primitives.UnsignedLong;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processors.evtx.parser.ChunkHeader;
+import org.apache.nifi.processors.evtx.parser.FileHeader;
+import org.apache.nifi.processors.evtx.parser.FileHeaderFactory;
+import org.apache.nifi.processors.evtx.parser.MalformedChunkException;
+import org.apache.nifi.processors.evtx.parser.Record;
+import org.apache.nifi.processors.evtx.parser.XmlBxmlNodeVisitor;
+import org.apache.nifi.processors.evtx.parser.bxml.RootNode;
+
+import javax.xml.stream.XMLOutputFactory;
+import javax.xml.stream.XMLStreamException;
+import javax.xml.stream.XMLStreamWriter;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+@SideEffectFree
+@EventDriven
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"logs", "windows", "event", "evtx", "message", "file"})
+@CapabilityDescription("Parses the contents of a Windows Event Log file 
(evtx) and writes the resulting xml to the FlowFile")
+public class ParseEvtx extends AbstractProcessor {
+public static final String RECORD = "Record";
+public static final String CHUNK = "Chunk";
+public static final String FILE = "File";
+public static final String EVENTS = "Events";
+public static final XMLOutputFactory XML_OUTPUT_FACTORY = 
XMLOutputFactory.newFactory();
+public static final String EVTX_EXTENSION = ".evtx";
+public static final String UNABLE_TO_PROCESS_DUE_TO = "Unable to 
process {} due to {}";
+public static final String XML_EXTENSION = ".xml";
+
+@VisibleForTesting
+static final Relationship REL_SUCCESS = new Relationship.Builder()
+.name("success")
+.description("Any FlowFile that was successfully converted 
from evtx to xml")
+.build();
+
+@VisibleForTesting
+static final Relationship REL_FAILURE = new Relationship.Builder()
+.name("failure")
+.description("Any FlowFile that encountered an exception 
during conversion will be transferred to this relationship with as much parsing 
as possible done")
+.build();
+
+@VisibleForTesting
+static final Relationship REL_BAD_CHUNK = new Relationship.Builder()
+.name("bad chunk")
+.description("Any bad ch

[GitHub] nifi pull request #492: NIFI-1975 - Processor for parsing evtx files

2016-06-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/492#discussion_r66000706
  
--- Diff: 
nifi-nar-bundles/nifi-evtx-bundle/nifi-evtx-processors/src/main/java/org/apache/nifi/processors/evtx/ParseEvtx.java
 ---
@@ -0,0 +1,353 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.evtx;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.net.MediaType;
+import com.google.common.primitives.UnsignedLong;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processors.evtx.parser.ChunkHeader;
+import org.apache.nifi.processors.evtx.parser.FileHeader;
+import org.apache.nifi.processors.evtx.parser.FileHeaderFactory;
+import org.apache.nifi.processors.evtx.parser.MalformedChunkException;
+import org.apache.nifi.processors.evtx.parser.Record;
+import org.apache.nifi.processors.evtx.parser.XmlBxmlNodeVisitor;
+import org.apache.nifi.processors.evtx.parser.bxml.RootNode;
+
+import javax.xml.stream.XMLOutputFactory;
+import javax.xml.stream.XMLStreamException;
+import javax.xml.stream.XMLStreamWriter;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+@SideEffectFree
+@EventDriven
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"logs", "windows", "event", "evtx", "message", "file"})
+@CapabilityDescription("Parses the contents of a Windows Event Log file 
(evtx) and writes the resulting xml to the FlowFile")
+public class ParseEvtx extends AbstractProcessor {
+public static final String RECORD = "Record";
+public static final String CHUNK = "Chunk";
+public static final String FILE = "File";
+public static final String EVENTS = "Events";
+public static final XMLOutputFactory XML_OUTPUT_FACTORY = 
XMLOutputFactory.newFactory();
+public static final String EVTX_EXTENSION = ".evtx";
+public static final String UNABLE_TO_PROCESS_DUE_TO = "Unable to 
process {} due to {}";
+public static final String XML_EXTENSION = ".xml";
+
+@VisibleForTesting
+static final Relationship REL_SUCCESS = new Relationship.Builder()
+.name("success")
+.description("Any FlowFile that was successfully converted 
from evtx to xml")
+.build();
+
+@VisibleForTesting
+static final Relationship REL_FAILURE = new Relationship.Builder()
+.name("failure")
+.description("Any FlowFile that encountered an exception 
during conversion will be transferred to this relationship with as much parsing 
as possible done")
+.build();
+
+@VisibleForTesting
+static final Relationship REL_BAD_CHUNK = new Relationship.Builder()
+.name("bad chunk")
+.description("Any bad ch

[GitHub] nifi pull request #492: NIFI-1975 - Processor for parsing evtx files

2016-06-06 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/492#discussion_r66000949
  
--- Diff: 
nifi-nar-bundles/nifi-evtx-bundle/nifi-evtx-processors/src/main/java/org/apache/nifi/processors/evtx/ParseEvtx.java
 ---
@@ -0,0 +1,353 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.evtx;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.net.MediaType;
+import com.google.common.primitives.UnsignedLong;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processors.evtx.parser.ChunkHeader;
+import org.apache.nifi.processors.evtx.parser.FileHeader;
+import org.apache.nifi.processors.evtx.parser.FileHeaderFactory;
+import org.apache.nifi.processors.evtx.parser.MalformedChunkException;
+import org.apache.nifi.processors.evtx.parser.Record;
+import org.apache.nifi.processors.evtx.parser.XmlBxmlNodeVisitor;
+import org.apache.nifi.processors.evtx.parser.bxml.RootNode;
+
+import javax.xml.stream.XMLOutputFactory;
+import javax.xml.stream.XMLStreamException;
+import javax.xml.stream.XMLStreamWriter;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+@SideEffectFree
+@EventDriven
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"logs", "windows", "event", "evtx", "message", "file"})
+@CapabilityDescription("Parses the contents of a Windows Event Log file 
(evtx) and writes the resulting xml to the FlowFile")
+public class ParseEvtx extends AbstractProcessor {
+public static final String RECORD = "Record";
+public static final String CHUNK = "Chunk";
+public static final String FILE = "File";
+public static final String EVENTS = "Events";
+public static final XMLOutputFactory XML_OUTPUT_FACTORY = 
XMLOutputFactory.newFactory();
+public static final String EVTX_EXTENSION = ".evtx";
+public static final String UNABLE_TO_PROCESS_DUE_TO = "Unable to 
process {} due to {}";
+public static final String XML_EXTENSION = ".xml";
+
+@VisibleForTesting
+static final Relationship REL_SUCCESS = new Relationship.Builder()
+.name("success")
+.description("Any FlowFile that was successfully converted 
from evtx to xml")
+.build();
+
+@VisibleForTesting
+static final Relationship REL_FAILURE = new Relationship.Builder()
+.name("failure")
+.description("Any FlowFile that encountered an exception 
during conversion will be transferred to this relationship with as much parsing 
as possible done")
+.build();
+
+@VisibleForTesting
+static final Relationship REL_BAD_CHUNK = new Relationship.Builder()
+.name("bad chunk")
+.description("Any bad ch

[GitHub] nifi issue #239: Nifi 1540 - AWS Kinesis Get and Put Processors

2016-06-06 Thread jvwing
Github user jvwing commented on the issue:

https://github.com/apache/nifi/pull/239
  
The suggested use of `name` and `displayName` on PropertyDescriptors has 
been shared around a lot the last few days.  You can read the [backstory 
thread](http://mail-archives.apache.org/mod_mbox/nifi-dev/201605.mbox/%3c5a6fdf1e-1889-46fe-a3c4-5d2f0a905...@apache.org%3E)
 on the best practice and the reasons for it.  The short, short version is to 
provide `name` as a computer-readable key to saved settings in templates and 
flows, and `displayName` as a human-readable description which may be changed 
or translated without breaking compatibility with saved data.  A good example 
is this PropertyDescriptor from PutS3Object:

public static final PropertyDescriptor SERVER_SIDE_ENCRYPTION = new 
PropertyDescriptor.Builder()
.name("server-side-encryption")
.displayName("Server Side Encryption")
.description("Specifies the algorithm used for server side 
encryption.")
.required(true)
.allowableValues(NO_SERVER_SIDE_ENCRYPTION, 
ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION)
.defaultValue(NO_SERVER_SIDE_ENCRYPTION)
.build();



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #492: NIFI-1975 - Processor for parsing evtx files

2016-06-06 Thread brosander
Github user brosander commented on a diff in the pull request:

https://github.com/apache/nifi/pull/492#discussion_r66007396
  
--- Diff: nifi-nar-bundles/nifi-evtx-bundle/nifi-evtx-processors/pom.xml ---
@@ -0,0 +1,68 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <artifactId>nifi-evtx-bundle</artifactId>
+        <groupId>org.apache.nifi</groupId>
+        <version>1.0.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>nifi-evtx-processors</artifactId>
+    <packaging>jar</packaging>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-properties</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-processor-utils</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.google.guava</groupId>
+            <artifactId>guava</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <scope>test</scope>
--- End diff --

will do


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #492: NIFI-1975 - Processor for parsing evtx files

2016-06-06 Thread brosander
Github user brosander commented on a diff in the pull request:

https://github.com/apache/nifi/pull/492#discussion_r66007477
  
--- Diff: 
nifi-nar-bundles/nifi-evtx-bundle/nifi-evtx-processors/src/main/java/org/apache/nifi/processors/evtx/ParseEvtx.java
 ---
@@ -0,0 +1,353 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.evtx;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.net.MediaType;
+import com.google.common.primitives.UnsignedLong;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processors.evtx.parser.ChunkHeader;
+import org.apache.nifi.processors.evtx.parser.FileHeader;
+import org.apache.nifi.processors.evtx.parser.FileHeaderFactory;
+import org.apache.nifi.processors.evtx.parser.MalformedChunkException;
+import org.apache.nifi.processors.evtx.parser.Record;
+import org.apache.nifi.processors.evtx.parser.XmlBxmlNodeVisitor;
+import org.apache.nifi.processors.evtx.parser.bxml.RootNode;
+
+import javax.xml.stream.XMLOutputFactory;
+import javax.xml.stream.XMLStreamException;
+import javax.xml.stream.XMLStreamWriter;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+@SideEffectFree
+@EventDriven
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"logs", "windows", "event", "evtx", "message", "file"})
+@CapabilityDescription("Parses the contents of a Windows Event Log file 
(evtx) and writes the resulting xml to the FlowFile")
+public class ParseEvtx extends AbstractProcessor {
+public static final String RECORD = "Record";
+public static final String CHUNK = "Chunk";
+public static final String FILE = "File";
+public static final String EVENTS = "Events";
+public static final XMLOutputFactory XML_OUTPUT_FACTORY = 
XMLOutputFactory.newFactory();
+public static final String EVTX_EXTENSION = ".evtx";
+public static final String UNABLE_TO_PROCESS_DUE_TO = "Unable to 
process {} due to {}";
+public static final String XML_EXTENSION = ".xml";
+
+@VisibleForTesting
+static final Relationship REL_SUCCESS = new Relationship.Builder()
+.name("success")
+.description("Any FlowFile that was successfully converted 
from evtx to xml")
+.build();
+
+@VisibleForTesting
+static final Relationship REL_FAILURE = new Relationship.Builder()
+.name("failure")
+.description("Any FlowFile that encountered an exception 
during conversion will be transferred to this relationship with as much parsing 
as possible done")
+.build();
+
+@VisibleForTesting
+static final Relationship REL_BAD_CHUNK = new Relationship.Builder()
+.name("bad chunk")
+.description("Any bad ch

[GitHub] nifi pull request #492: NIFI-1975 - Processor for parsing evtx files

2016-06-06 Thread brosander
Github user brosander commented on a diff in the pull request:

https://github.com/apache/nifi/pull/492#discussion_r66007547
  
--- Diff: 
nifi-nar-bundles/nifi-evtx-bundle/nifi-evtx-processors/src/main/java/org/apache/nifi/processors/evtx/ParseEvtx.java
 ---
@@ -0,0 +1,353 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.evtx;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.net.MediaType;
+import com.google.common.primitives.UnsignedLong;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processors.evtx.parser.ChunkHeader;
+import org.apache.nifi.processors.evtx.parser.FileHeader;
+import org.apache.nifi.processors.evtx.parser.FileHeaderFactory;
+import org.apache.nifi.processors.evtx.parser.MalformedChunkException;
+import org.apache.nifi.processors.evtx.parser.Record;
+import org.apache.nifi.processors.evtx.parser.XmlBxmlNodeVisitor;
+import org.apache.nifi.processors.evtx.parser.bxml.RootNode;
+
+import javax.xml.stream.XMLOutputFactory;
+import javax.xml.stream.XMLStreamException;
+import javax.xml.stream.XMLStreamWriter;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+@SideEffectFree
+@EventDriven
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"logs", "windows", "event", "evtx", "message", "file"})
+@CapabilityDescription("Parses the contents of a Windows Event Log file 
(evtx) and writes the resulting xml to the FlowFile")
+public class ParseEvtx extends AbstractProcessor {
+public static final String RECORD = "Record";
+public static final String CHUNK = "Chunk";
+public static final String FILE = "File";
+public static final String EVENTS = "Events";
+public static final XMLOutputFactory XML_OUTPUT_FACTORY = 
XMLOutputFactory.newFactory();
+public static final String EVTX_EXTENSION = ".evtx";
+public static final String UNABLE_TO_PROCESS_DUE_TO = "Unable to 
process {} due to {}";
+public static final String XML_EXTENSION = ".xml";
+
+@VisibleForTesting
+static final Relationship REL_SUCCESS = new Relationship.Builder()
+.name("success")
+.description("Any FlowFile that was successfully converted 
from evtx to xml")
+.build();
+
+@VisibleForTesting
+static final Relationship REL_FAILURE = new Relationship.Builder()
+.name("failure")
+.description("Any FlowFile that encountered an exception 
during conversion will be transferred to this relationship with as much parsing 
as possible done")
+.build();
+
+@VisibleForTesting
+static final Relationship REL_BAD_CHUNK = new Relationship.Builder()
+.name("bad chunk")
+.description("Any bad ch

[GitHub] nifi pull request #492: NIFI-1975 - Processor for parsing evtx files

2016-06-06 Thread brosander
Github user brosander commented on a diff in the pull request:

https://github.com/apache/nifi/pull/492#discussion_r66007647
  
--- Diff: 
nifi-nar-bundles/nifi-evtx-bundle/nifi-evtx-processors/src/main/java/org/apache/nifi/processors/evtx/ParseEvtx.java
 ---
@@ -0,0 +1,353 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.evtx;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.net.MediaType;
+import com.google.common.primitives.UnsignedLong;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processors.evtx.parser.ChunkHeader;
+import org.apache.nifi.processors.evtx.parser.FileHeader;
+import org.apache.nifi.processors.evtx.parser.FileHeaderFactory;
+import org.apache.nifi.processors.evtx.parser.MalformedChunkException;
+import org.apache.nifi.processors.evtx.parser.Record;
+import org.apache.nifi.processors.evtx.parser.XmlBxmlNodeVisitor;
+import org.apache.nifi.processors.evtx.parser.bxml.RootNode;
+
+import javax.xml.stream.XMLOutputFactory;
+import javax.xml.stream.XMLStreamException;
+import javax.xml.stream.XMLStreamWriter;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+
+@SideEffectFree
+@EventDriven
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"logs", "windows", "event", "evtx", "message", "file"})
+@CapabilityDescription("Parses the contents of a Windows Event Log file 
(evtx) and writes the resulting xml to the FlowFile")
+public class ParseEvtx extends AbstractProcessor {
+public static final String RECORD = "Record";
+public static final String CHUNK = "Chunk";
+public static final String FILE = "File";
+public static final String EVENTS = "Events";
+public static final XMLOutputFactory XML_OUTPUT_FACTORY = 
XMLOutputFactory.newFactory();
+public static final String EVTX_EXTENSION = ".evtx";
+public static final String UNABLE_TO_PROCESS_DUE_TO = "Unable to 
process {} due to {}";
+public static final String XML_EXTENSION = ".xml";
+
+@VisibleForTesting
+static final Relationship REL_SUCCESS = new Relationship.Builder()
+.name("success")
+.description("Any FlowFile that was successfully converted 
from evtx to xml")
+.build();
+
+@VisibleForTesting
+static final Relationship REL_FAILURE = new Relationship.Builder()
+.name("failure")
+.description("Any FlowFile that encountered an exception 
during conversion will be transferred to this relationship with as much parsing 
as possible done")
+.build();
+
+@VisibleForTesting
+static final Relationship REL_BAD_CHUNK = new Relationship.Builder()
+.name("bad chunk")
+.description("Any bad ch

Re: Apache NiFi 0.7.0 Release date ?

2016-06-06 Thread Joe Witt
NiFi 0.7.0 presently has 24 open tickets [1].

I moved/closed those that appeared obvious to move out for now.
However, those that remain seem to be in an active review state
and just need some final review/feedback.

If you have tickets in here that you contributed or are reviewing
please try to carry the issue through to its eventual merge, deferral,
or closure.

[1] https://issues.apache.org/jira/browse/NIFI/fixforversion/12335078

Thanks
Joe

On Mon, Jun 6, 2016 at 5:27 PM, Michael Moser  wrote:
> PR #305 was merged into 0.x and master branches, so I updated NIFI-1686 to
> reflect its Fix Version as 1.0.0, 0.7.0
>
> -- Mike
>
>
> On Mon, Jun 6, 2016 at 5:08 PM, Ryan H  wrote:
>
>> This is old enough that I'm sure it will make 0.7.0.. Just to give a shout
>> though:
>> "NIFI-1686 - NiFi is unable to populate over 1/4 of AMQP properties from
>> flow properties." (https://github.com/apache/nifi/pull/305)
>>
>> NIFI 0.6.1 throws some nice stacktraces for this..
>>
>> I think it barely missed the 0.6.1 release..
>>
>>
>> https://github.com/apache/nifi/commit/e02c79975ed8db69e63d96a28e81db08bc869e54#diff-3e956d8910c69a1df119adf256b3e1ce
>>
>> Ryan
>>
>>
>> On Sun, Jun 5, 2016 at 5:05 PM, Tony Kurc  wrote:
>>
>> > Joe,
>> > Please leave the cookie code that I'm reviewing (NIFI-1937) in 0.7.0. I'm
>> > almost done testing.
>> >
>> > On Sun, Jun 5, 2016 at 4:13 PM, Joe Witt  wrote:
>> >
>> > > Team
>> > >
>> > > Several folks have asked for an 070 release.  However there are quite a
>> > lot
>> > > of tickets hanging out there.  I plan to move them out to allow for a
>> > > release to occur.  If there are critical items please advise.
>> > >
>> > > Thanks
>> > > Joe
>> > > On Jun 4, 2016 4:13 PM, "idioma"  wrote:
>> > >
>> > > > Joe,
>> > > > I was actually going to post the very same question when I have found
>> > > this
>> > > > one. I am personally interested in the following features:
>> > > >
>> > > > JSON-to-JSON Schema Converter Editor
>> > > >   ;
>> > > > Transform JOLT Processor 
>> ;
>> > > > Add replaceFirst method to expression language
>> > > > 
>> > > >
>> > > > Any idea about the timeline?
>> > > >
>> > > > Thank you,
>> > > >
>> > > > Regards,
>> > > >
>> > > > I.
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > View this message in context:
>> > > >
>> > >
>> >
>> http://apache-nifi-developer-list.39713.n7.nabble.com/Apache-NiFi-0-7-0-Release-date-tp10720p11056.html
>> > > > Sent from the Apache NiFi Developer List mailing list archive at
>> > > > Nabble.com.
>> > > >
>> > >
>> >
>>


[GitHub] nifi issue #239: Nifi 1540 - AWS Kinesis Get and Put Processors

2016-06-06 Thread jvwing
Github user jvwing commented on the issue:

https://github.com/apache/nifi/pull/239
  
It appears that my errors were caused by memory constraints on the KPL.  
With a larger EC2 instance, I was able to run at the provisioned throughput 
threshold without the KPL process crashing.  The processors also worked fine 
through a shard merge.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #497: NIFI-1857: HTTPS Site-to-Site

2016-06-06 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/497#discussion_r66016174
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/nifi.properties
 ---
@@ -111,9 +111,11 @@ 
nifi.components.status.repository.buffer.size=${nifi.components.status.repositor
 
nifi.components.status.snapshot.frequency=${nifi.components.status.snapshot.frequency}
 
 # Site to Site properties
-nifi.remote.input.socket.host=
-nifi.remote.input.socket.port=
+nifi.remote.input.host=
 nifi.remote.input.secure=true
+nifi.remote.input.socket.port=
+nifi.remote.input.http.enabled=true
--- End diff --

`nifi.remote.input.http.enabled` and `nifi.remote.input.secure` are set to 
true by default, but the HTTPS port, keystore, and truststore are not configured. 
With the default nifi.properties file, the user will see the following exception at 
startup, which is not a good UX. Can we change the default 
`nifi.remote.input.secure` to `false`?

```
Caused by: java.lang.RuntimeException: Remote input HTTPS is enabled but 
nifi.web.https.port is not specified.
at 
org.apache.nifi.util.NiFiProperties.getRemoteInputHttpPort(NiFiProperties.java:439)
 ~[nifi-properties-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
```
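
A sketch of the default block this suggestion implies, assuming only the `nifi.remote.input.secure` default changes and the other generated values stay as in the diff above:

```
# Site to Site properties
nifi.remote.input.host=
nifi.remote.input.secure=false
nifi.remote.input.socket.port=
nifi.remote.input.http.enabled=true
```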


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---