[GitHub] nifi pull request: NIFI-1021 added support for Provenance event st...

2015-10-12 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/102

NIFI-1021 added support for Provenance event streaming

SUMMARY:
The current implementation requires the user to implement the
org.apache.nifi.provenance.ProvenanceEventConsumer
strategy, declare it as a service in META-INF/services, and drop the resulting
JAR
into the 'lib' directory of the NiFi distribution.
On startup each consumer is discovered by the ProvenanceRepository via the
standard ServiceLoader,
after which every provenance event is sent to that consumer.
To simplify things, a base implementation of ProvenanceRepository was provided,
and the two existing
implementations were modified to extend it.
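
The discovery mechanism described above can be sketched as follows. This is a minimal, self-contained illustration of the standard ServiceLoader pattern, not NiFi's actual code: the ProvenanceEventConsumer interface below is a local stand-in for org.apache.nifi.provenance.ProvenanceEventConsumer, and its consume signature is assumed for illustration.

```java
import java.util.ServiceLoader;

// Local stand-in for org.apache.nifi.provenance.ProvenanceEventConsumer;
// the real interface lives in the NiFi codebase and receives provenance events.
interface ProvenanceEventConsumer {
    void consume(Object event);
}

public class ConsumerDiscovery {

    // Discover every consumer declared in a META-INF/services file on the
    // classpath, as the repository would do once at startup.
    public static int discover() {
        int count = 0;
        for (ProvenanceEventConsumer consumer
                : ServiceLoader.load(ProvenanceEventConsumer.class)) {
            // In the PR, each discovered consumer is subsequently handed
            // every provenance event emitted by the repository.
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // With no META-INF/services entry on the classpath, nothing is found.
        System.out.println("consumers discovered: " + discover());
    }
}
```

A consumer JAR would register itself with a file named
META-INF/services/org.apache.nifi.provenance.ProvenanceEventConsumer containing the
implementation's fully qualified class name, one per line.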

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1021

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/102.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #102






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request: NIFI-1021 added support for Provenance event st...

2015-10-20 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/102#issuecomment-149621632
  
Closing after discussing it with Joe. Will resubmit once the proposed 
implementation is realized within the context of the ReportingTask.




[GitHub] nifi pull request: NIFI-1021 added support for Provenance event st...

2015-10-20 Thread olegz
Github user olegz closed the pull request at:

https://github.com/apache/nifi/pull/102




[GitHub] nifi pull request: NIFI-1074 added initial support for IDE integra...

2015-10-27 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/108

NIFI-1074 added initial support for IDE integration

Includes instructions for Eclipse

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1074

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/108.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #108


commit 73feb3b31c0df49d8aca941ef0fae13ef02ea378
Author: Oleg Zhurakousky 
Date:   2015-10-27T11:40:41Z

NIFI-1074 added initial support for IDE integration
Includes instructions for Eclipse






[GitHub] nifi pull request: NIFI-869 Fixed formatting issue

2015-10-29 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/110

NIFI-869 Fixed formatting issue

Fixed a formatting issue with the printed error message, which only appears when 
NiFi is configured using Logback.
Please see NIFI-869 for more details.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-869

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/110.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #110


commit 716561d0dbf7d1d520b6fcff37c1ab1c331e20b9
Author: Oleg Zhurakousky 
Date:   2015-10-29T18:19:44Z

NIFI-869 Fixed formatting issue
Fixed a formatting issue with the printed error message, which only appears when 
NiFi is configured using Logback.
Please see NIFI-869 for more details.






[GitHub] nifi pull request: NIFI-1051 Allowed FileSystemRepository to skip ...

2015-10-29 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/111

NIFI-1051 Allowed FileSystemRepository to skip un-readable entries.

The exception was caused by basic file permissions. This fix overrides the
'visitFileFailed' method of SimpleFileVisitor to log a WARN message and allow
FileSystemRepository to continue.
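
The fix described above can be sketched with the JDK's file-walking API. This is a minimal stand-in class, not NiFi's actual FileSystemRepository code; the logging here goes to stderr, whereas NiFi would use its own logger at WARN level.

```java
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import java.nio.file.FileVisitResult;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;

// Sketch of the fix: a SimpleFileVisitor whose visitFileFailed logs the
// unreadable entry and returns CONTINUE, so Files.walkFileTree keeps going
// instead of aborting the whole scan with an exception.
public class SkippingVisitor extends SimpleFileVisitor<Path> {

    int skipped = 0;

    @Override
    public FileVisitResult visitFileFailed(Path file, IOException exc) {
        // The default implementation rethrows exc, which aborts the walk;
        // logging and continuing lets the repository skip the bad entry.
        System.err.println("WARN: skipping unreadable entry " + file + ": " + exc);
        skipped++;
        return FileVisitResult.CONTINUE;
    }

    public static void main(String[] args) {
        SkippingVisitor visitor = new SkippingVisitor();
        FileVisitResult result = visitor.visitFileFailed(
                Paths.get("/no/such/file"),
                new AccessDeniedException("/no/such/file"));
        System.out.println(result + " skipped=" + visitor.skipped);
    }
}
```

In practice the visitor would be passed to Files.walkFileTree over the repository's container directories, and entries that fail the permission check would simply be left out of the listing.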

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1051

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/111.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #111


commit 5c4042bd7c39dd92b1fba665c7b03c670346be8f
Author: Oleg Zhurakousky 
Date:   2015-10-29T20:31:17Z

NIFI-1051 Allowed FileSystemRepository to skip un-readable entries.
The exception was caused by basic file permissions. This fix overrides the
'visitFileFailed' method of SimpleFileVisitor to log a WARN message and allow
FileSystemRepository to continue.






[GitHub] nifi pull request: NIFI-869 Fixed formatting issue

2015-10-31 Thread olegz
Github user olegz closed the pull request at:

https://github.com/apache/nifi/pull/110




[GitHub] nifi pull request: NIFI-869 Fixed formatting issue

2015-10-31 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/110#issuecomment-152734921
  
Pushed to the trunk




[GitHub] nifi pull request: Nifi 631

2015-11-03 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/113#discussion_r43739796
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListFile.java
 ---
@@ -0,0 +1,378 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.TriggerWhenEmpty;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.util.FileInfo;
+
+import java.io.File;
+import java.io.FileFilter;
+import java.io.IOException;
+import java.nio.file.FileStore;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.nio.file.attribute.BasicFileAttributeView;
+import java.nio.file.attribute.BasicFileAttributes;
+import java.nio.file.attribute.FileOwnerAttributeView;
+import java.nio.file.attribute.PosixFileAttributeView;
+import java.nio.file.attribute.PosixFilePermissions;
+import java.text.DateFormat;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@TriggerSerially
+@TriggerWhenEmpty
+@Tags({"file", "get", "list", "ingest", "source", "filesystem"})
+@CapabilityDescription("Retrieves a listing of files from the local 
filesystem. For each file that is listed, " +
+"creates a FlowFile that represents the file so that it can be 
fetched in conjunction with ListFile. This " +
+"Processor is designed to run on Primary Node only in a cluster. 
If the primary node changes, the new " +
+"Primary Node will pick up where the previous node left off 
without duplicating all of the data. Unlike " +
+"GetFile, this Processor does not delete any data from the local 
filesystem.")
+@WritesAttributes({
+@WritesAttribute(attribute="filename", description="The name of 
the file that was read from filesystem."),
+@WritesAttribute(attribute="path", description="The path is set to 
the absolute path of the file's directory " +
+"on filesystem. For example, if the Directory property is 
set to /tmp, then files picked up from " +
+"/tmp will have the path attribute set to \"./\". If the 
Recurse Subdirectories property is set to " +
+"true and a file is picked up from /tmp/abc/1/2/3, then 
the path attribute will be set to " +
+"\"/tmp/abc/1/2/3\"."),
+@WritesAttribute(attribute="fs.owner", description="The user that 
owns the f

[GitHub] nifi pull request: Nifi 631

2015-11-03 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/113#discussion_r43740642
  

[GitHub] nifi pull request: Nifi 631

2015-11-03 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/113#discussion_r43740910
  

[GitHub] nifi pull request: Nifi 631

2015-11-03 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/113#discussion_r43741070
  

[GitHub] nifi pull request: Nifi 631

2015-11-03 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/113#discussion_r43741352
  

[GitHub] nifi pull request: Nifi 631

2015-11-03 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/113#discussion_r43744708
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListFile.java
 ---
@@ -0,0 +1,378 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.TriggerWhenEmpty;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.util.FileInfo;
+
+import java.io.File;
+import java.io.FileFilter;
+import java.io.IOException;
+import java.nio.file.FileStore;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.nio.file.attribute.BasicFileAttributeView;
+import java.nio.file.attribute.BasicFileAttributes;
+import java.nio.file.attribute.FileOwnerAttributeView;
+import java.nio.file.attribute.PosixFileAttributeView;
+import java.nio.file.attribute.PosixFilePermissions;
+import java.text.DateFormat;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@TriggerSerially
+@TriggerWhenEmpty
+@Tags({"file", "get", "list", "ingest", "source", "filesystem"})
+@CapabilityDescription("Retrieves a listing of files from the local filesystem. For each file that is listed, " +
+"creates a FlowFile that represents the file so that it can be fetched in conjunction with FetchFile. This " +
+"Processor is designed to run on Primary Node only in a cluster. If the primary node changes, the new " +
+"Primary Node will pick up where the previous node left off without duplicating all of the data. Unlike " +
+"GetFile, this Processor does not delete any data from the local filesystem.")
+@WritesAttributes({
+@WritesAttribute(attribute="filename", description="The name of the file that was read from filesystem."),
+@WritesAttribute(attribute="path", description="The path is set to the absolute path of the file's directory " +
+"on filesystem. For example, if the Directory property is set to /tmp, then files picked up from " +
+"/tmp will have the path attribute set to \"./\". If the Recurse Subdirectories property is set to " +
+"true and a file is picked up from /tmp/abc/1/2/3, then the path attribute will be set to " +
+"\"/tmp/abc/1/2/3\"."),
+@WritesAttribute(attribute="fs.owner", description="The user that owns the f

[GitHub] nifi pull request: Nifi 631

2015-11-03 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/113#discussion_r43744743
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListFile.java
 ---
@@ -0,0 +1,378 @@

[GitHub] nifi pull request: Nifi 631

2015-11-03 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/113#discussion_r43782371
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListFile.java
 ---
@@ -0,0 +1,378 @@

[GitHub] nifi pull request: [NIFI-987] Added Processor For Writing Events t...

2015-11-03 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/91#discussion_r43783354
  
--- Diff: 
nifi-nar-bundles/nifi-riemann-bundle/nifi-riemann-processors/src/main/java/org/apache/nifi/processors/riemann/PutRiemann.java
 ---
@@ -0,0 +1,376 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.riemann;
+
+import com.aphyr.riemann.Proto;
+import com.aphyr.riemann.Proto.Event;
+import com.aphyr.riemann.client.RiemannClient;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+@Tags({"riemann", "monitoring", "metrics"})
+@DynamicProperty(name = "Custom Event Attribute", supportsExpressionLanguage = true,
+  description = "These values will be attached to the Riemann event as a custom attribute",
+  value = "Any value or expression")
+@CapabilityDescription("Send events to Riemann")
+@SupportsBatching
+public class PutRiemann extends AbstractProcessor {
+  protected enum Transport {
+    TCP, UDP
+  }
+
+  protected RiemannClient riemannClient = null;
+  protected Transport transport;
+
+  public static final Relationship REL_SUCCESS = new Relationship.Builder()
+    .name("success")
+    .description("Metrics successfully written to Riemann")
+    .build();
+
+  public static final Relationship REL_FAILURE = new Relationship.Builder()
+    .name("failure")
+    .description("Metrics which failed to write to Riemann")
+    .build();
+
+
+  public static final PropertyDescriptor RIEMANN_HOST = new PropertyDescriptor.Builder()
+    .name("Riemann Address")
+    .description("Hostname of Riemann server")
+    .required(true)
+    .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+    .build();
+
+  public static final PropertyDescriptor RIEMANN_PORT = new PropertyDescriptor.Builder()
+    .name("Riemann Port")
+    .description("Port that Riemann is listening on")
+    .required(true)
+    .defaultValue("")
+    .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+    .addValidator(StandardValidators.INTEGER_VALIDATOR)
+    .build();
+
+  public static final PropertyDescriptor TRANSPORT_PROTOCOL = new PropertyDescriptor.Builder()
+    .name("Transport Protocol")
+    .description("Transport protocol to speak to Riemann in")
+    .required(true)
+    .allowableValues(new Transport[]{Transport.TCP, Transport.UDP})
+    .defaultValue("TCP")
+    .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+    .build();
+
+  public static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
+    .name("Batch Size")
+    .descrip

[GitHub] nifi pull request: Nifi 631

2015-11-04 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/113#discussion_r43875897
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListFile.java
 ---
@@ -0,0 +1,378 @@

[GitHub] nifi pull request: NIFI-1099 fixed the handling of InterruptedExce...

2015-11-05 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/115

NIFI-1099 fixed the handling of InterruptedException

In many places the current code simply swallows InterruptedException without 
allowing the calling thread to be notified of the interrupt. We should not do 
this, as it can lead to various side effects (e.g., a thread pool waiting for 
the completion of an interrupted task, a completion that may never occur).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1099

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/115.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #115


commit a15b44485f4f062f0bfae5bed70c9566b5932649
Author: Oleg Zhurakousky 
Date:   2015-11-05T18:29:13Z

NIFI-1099 fixed the handling of InterruptedException
In many places the current code simply swallows InterruptedException without 
allowing the calling thread to be notified of the interrupt. We should not do 
this, as it can lead to various side effects (e.g., a thread pool waiting for 
the completion of an interrupted task, a completion that may never occur).




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request: NIFI-1099 fixed the handling of InterruptedExce...

2015-11-05 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/115#issuecomment-154248804
  
Great book, great guy, and if you can't get the book, here is his write-up on 
the subject - http://www.ibm.com/developerworks/library/j-jtp05236/.

The bottom line is that a blanket call to _Thread.interrupt()_ is, IMHO, always 
safe. All it does is set the interrupt flag, which the caller _may_ choose to 
ignore. NOT doing it simply withholds information that may or may not be relevant.
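
A minimal, hypothetical sketch (illustrative names, not NiFi code) of the safe pattern being argued for here: rather than swallowing the InterruptedException, the catch block restores the thread's interrupt flag so that callers such as a thread pool can still observe it.

```java
public class InterruptDemo {

    // Returns true if the task was interrupted. Instead of swallowing the
    // InterruptedException, the catch block restores the interrupt status so
    // that callers (e.g. an executor) can still observe it.
    static boolean runTask() {
        try {
            Thread.sleep(10_000);
            return false;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // repeat the interrupt
            return true;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> System.out.println(
                "interrupted=" + runTask()
                + " flagRestored=" + Thread.currentThread().isInterrupted()));
        worker.start();
        worker.interrupt();
        worker.join();
    }
}
```

Because Thread.sleep() throws immediately when the interrupt status is already set, the worker above observes the interrupt whether it arrives before or during the sleep.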




[GitHub] nifi pull request: NIFI-1099 fixed the handling of InterruptedExce...

2015-11-06 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/115#issuecomment-154428329
  
I just want to make sure that we are all on the same page; _repeat the 
interrupt_ in the context of _Thread.interrupt()_ simply means communicating 
something that has already happened and is not the same as re-throwing an 
exception. Hence my point about it being safe. 

For cases where one really wants to ignore the interrupt, especially where 
```Thread.sleep(..)``` is used (a whole other topic), we can simply use 
```LockSupport.parkNanos(..)```. An interrupt will not turn into an exception, 
making the user responsible for periodically checking whether the active thread 
has been interrupted via ```Thread.isInterrupted()```.
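
As a sketch of that alternative (illustrative names, not NiFi code): parkNanos wakes up early on an interrupt but neither throws nor clears the flag, so the caller polls the interrupt status explicitly.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {

    // Wait without risking InterruptedException: parkNanos returns early if
    // the thread is interrupted, but it neither throws nor clears the
    // interrupt flag, so the caller checks Thread.isInterrupted() itself.
    static boolean waitQuietly(long millis) {
        LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(millis));
        return Thread.currentThread().isInterrupted();
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt();   // simulate a pending interrupt
        System.out.println("interrupted=" + waitQuietly(50));
    }
}
```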





[GitHub] nifi pull request: NIFI-1061 fixed deadlock caused by DBCPConnecti...

2015-11-09 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/117

NIFI-1061 fixed deadlock caused by DBCPConnectionPool.onConfigured()

The current implementation of DBCPConnectionPool attempted to test whether a 
connection could be obtained via dataSource.getConnection().
Such a call is naturally blocking, and the duration of the block depends on the 
driver implementation. Some drivers (e.g., Phoenix - 
https://phoenix.apache.org/installation.html)
attempt numerous retries before failing, creating a deadlock when an attempt 
was made to disable a DBCPConnectionPool that was still being enabled.

This fix removes the connection test from the DBCPConnectionPool.onConfigured() 
operation, which now returns successfully upon creation of the DataSource.
For more details see the comments in 
https://issues.apache.org/jira/browse/NIFI-1061

The fix also cleaned up the code in 
StandardProcessScheduler.getScheduleState for better readability.
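
A minimal sketch of the idea, with hypothetical names standing in for NiFi's and DBCP's actual classes: enabling the service only records the connection factory, so no driver call can block the enable/disable lifecycle; the first (possibly blocking) connection attempt happens lazily in getConnection().

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

// Hypothetical sketch; LazyPoolSketch and Supplier<String> stand in for the
// real controller service and DataSource.getConnection().
public class LazyPoolSketch {
    private final Supplier<String> connectionFactory;
    private final AtomicBoolean enabled = new AtomicBoolean(false);

    LazyPoolSketch(Supplier<String> connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    // Enable returns immediately: no test connection, hence no deadlock window.
    void onConfigured() { enabled.set(true); }

    // Disable never has to wait behind a driver that is busy retrying.
    void onDisabled() { enabled.set(false); }

    String getConnection() {
        if (!enabled.get()) {
            throw new IllegalStateException("service not enabled");
        }
        return connectionFactory.get(); // first real, possibly blocking, call
    }

    public static void main(String[] args) {
        LazyPoolSketch pool = new LazyPoolSketch(() -> "connection");
        pool.onConfigured();
        System.out.println(pool.getConnection());
        pool.onDisabled();
    }
}
```

The trade-off is that a misconfigured pool is only discovered on first use rather than at enable time, which is why the JIRA discussion was needed.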

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1061

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/117.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #117


commit 370950b0193c0a64cb023b2e64d9ea40c9ad97f4
Author: Oleg Zhurakousky 
Date:   2015-11-09T19:47:50Z

NIFI-1061 fixed deadlock caused by DBCPConnectionPool.onConfigured()
NIFI-1061 fixed deadlock caused by DBCPConnectionPool.onConfigured()
The current implementation of DBCPConnectionPool attempted to test whether a 
connection could be obtained via dataSource.getConnection().
Such a call is naturally blocking, and the duration of the block depends on the 
driver implementation. Some drivers (e.g., Phoenix - 
https://phoenix.apache.org/installation.html)
attempt numerous retries before failing, creating a deadlock when an attempt 
was made to disable a DBCPConnectionPool that was still being enabled.

This fix removes the connection test from the DBCPConnectionPool.onConfigured() 
operation, which now returns successfully upon creation of the DataSource.
For more details see the comments in 
https://issues.apache.org/jira/browse/NIFI-1061

The fix also cleaned up the code in 
StandardProcessScheduler.getScheduleState for better readability.






[GitHub] nifi pull request: NIFI-1099 fixed the handling of InterruptedExce...

2015-11-09 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/115#issuecomment-155174624
  
Closing it in favor of addressing each case individually instead of a blanket 
fix, as was suggested.




[GitHub] nifi pull request: NIFI-1099 fixed the handling of InterruptedExce...

2015-11-09 Thread olegz
Github user olegz closed the pull request at:

https://github.com/apache/nifi/pull/115




[GitHub] nifi pull request: NIFI-1000 Fixed JmsFactory to properly obtain d...

2015-11-09 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/118

NIFI-1000 Fixed JmsFactory to properly obtain destination name

Re-enabled JMS Tests that were annotated with @Ignore

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1000

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/118.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #118


commit ef0be5a5d6fbfc8abb55e250fc1ddbc294fda743
Author: Oleg Zhurakousky 
Date:   2015-11-09T23:35:31Z

NIFI-1000 Fixed JmsFactory to properly obtain destination name
Re-enabled JMS Tests that were annotated with @Ignore






[GitHub] nifi pull request: NIFI-1000 Fixed JmsFactory to properly obtain d...

2015-11-10 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/118#discussion_r44422157
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/pom.xml ---
@@ -152,7 +152,8 @@ language governing permissions and limitations under the License. -->
 
 
 org.apache.activemq
-activemq-client
+activemq-all
+test
--- End diff --

Yeah, activemq-all will bring the client back in. The real issue is that 
when using an embedded broker, as I am doing in the test, you also need other JARs 
from ActiveMQ, and they all have to be in sync. activemq-all ensures 
that. Also, we don't need it at runtime, only at testCompile.




[GitHub] nifi pull request: NIFI-1000 Fixed JmsFactory to properly obtain d...

2015-11-10 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/118#discussion_r44422260
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestGetJMSQueue.java
 ---
@@ -24,73 +29,116 @@
 import javax.jms.Session;
 import javax.jms.StreamMessage;
 
+import org.apache.nifi.processor.Relationship;
 import org.apache.nifi.processors.standard.util.JmsFactory;
 import org.apache.nifi.processors.standard.util.JmsProperties;
 import org.apache.nifi.processors.standard.util.WrappedMessageProducer;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.MockProcessSession;
+import org.apache.nifi.util.StandardProcessorTestRunner;
 import org.apache.nifi.util.TestRunner;
 import org.apache.nifi.util.TestRunners;
 import org.apache.nifi.web.Revision;
+import org.junit.Test;
 
 public class TestGetJMSQueue {
 
-@org.junit.Ignore
+@Test
 public void testSendTextToQueue() throws Exception {
-final TestRunner runner = TestRunners.newTestRunner(GetJMSQueue.class);
+GetJMSQueue getJmsQueue = new GetJMSQueue();
+StandardProcessorTestRunner runner = (StandardProcessorTestRunner) TestRunners.newTestRunner(getJmsQueue);
 runner.setProperty(JmsProperties.JMS_PROVIDER, JmsProperties.ACTIVEMQ_PROVIDER);
-runner.setProperty(JmsProperties.URL, "tcp://localhost:61616");
+runner.setProperty(JmsProperties.URL, "vm://localhost?broker.persistent=false");
 runner.setProperty(JmsProperties.DESTINATION_TYPE, JmsProperties.DESTINATION_TYPE_QUEUE);
 runner.setProperty(JmsProperties.DESTINATION_NAME, "queue.testing");
 runner.setProperty(JmsProperties.ACKNOWLEDGEMENT_MODE, JmsProperties.ACK_MODE_AUTO);
+
+MockProcessSession pSession = (MockProcessSession) runner.getProcessSessionFactory().createSession();
 WrappedMessageProducer wrappedProducer = JmsFactory.createMessageProducer(runner.getProcessContext(), true);
 final Session jmsSession = wrappedProducer.getSession();
 final MessageProducer producer = wrappedProducer.getProducer();
-
 final Message message = jmsSession.createTextMessage("Hello World");
 
 producer.send(message);
 jmsSession.commit();
+
+getJmsQueue.onTrigger(runner.getProcessContext(), pSession);
--- End diff --

Noticed that in another test as well; will fix.




[GitHub] nifi pull request: NIFI-1000 Fixed JmsFactory to properly obtain d...

2015-11-10 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/118#discussion_r44423561
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/pom.xml ---
@@ -152,7 +152,8 @@ language governing permissions and limitations under the License. -->
         <dependency>
             <groupId>org.apache.activemq</groupId>
-            <artifactId>activemq-client</artifactId>
+            <artifactId>activemq-all</artifactId>
+            <scope>test</scope>
--- End diff --

No, we only need javax.jms at runtime. If we did need ActiveMQ at runtime we would be in big trouble, unless the intention of JmsFactory and all the components around it was to work with ActiveMQ only.




[GitHub] nifi pull request: NIFI-1000 Fixed JmsFactory to properly obtain d...

2015-11-10 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/118#discussion_r44423704
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/pom.xml ---
@@ -152,7 +152,8 @@ language governing permissions and limitations under the License. -->
         <dependency>
             <groupId>org.apache.activemq</groupId>
-            <artifactId>activemq-client</artifactId>
+            <artifactId>activemq-all</artifactId>
+            <scope>test</scope>
--- End diff --

Basically it's like a DB driver, provided at runtime.
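The JDBC-driver analogy above maps onto Maven scopes roughly as follows. This fragment is an illustrative sketch, not the actual change under review; the spec artifact and version numbers shown are placeholders:

```xml
<!-- Illustrative sketch: the code compiles against the JMS API only, while a
     concrete provider such as ActiveMQ is needed just to run the tests.
     At runtime the user supplies their provider's client jar, like a DB driver. -->
<dependencies>
    <!-- javax.jms interfaces: needed at compile time -->
    <dependency>
        <groupId>org.apache.geronimo.specs</groupId>
        <artifactId>geronimo-jms_1.1_spec</artifactId>
        <version>1.1.1</version>
    </dependency>
    <!-- Concrete broker implementation: test classpath only -->
    <dependency>
        <groupId>org.apache.activemq</groupId>
        <artifactId>activemq-all</artifactId>
        <version>5.12.0</version>
        <scope>test</scope>
    </dependency>
</dependencies>
```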




[GitHub] nifi pull request: NIFI-1024 NIFI-1062 Fixed PutHDFS processor to ...

2015-11-10 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/119

NIFI-1024 NIFI-1062 Fixed PutHDFS processor to properly route failures.

Ensured that during put failures the FlowFile is routed to 'failure' 
relationship.
Added validation test
Re-enabled previously ignored test.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1124

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/119.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #119


commit d10befd4647e7c7456d3d85809b280ab3ac087dc
Author: Oleg Zhurakousky 
Date:   2015-11-10T12:58:28Z

NIFI-1024 NIFI-1062 Fixed PutHDFS processor to properly route failures.
Ensured that during put failures the FlowFile is routed to 'failure' 
relationship.
Added validation test
Re-enabled previously ignored test.






[GitHub] nifi pull request: NIFI-1000 Fixed JmsFactory to properly obtain d...

2015-11-10 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/118#discussion_r44432971
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestGetJMSQueue.java
 ---
@@ -24,73 +29,116 @@
 import javax.jms.Session;
 import javax.jms.StreamMessage;
 
+import org.apache.nifi.processor.Relationship;
 import org.apache.nifi.processors.standard.util.JmsFactory;
 import org.apache.nifi.processors.standard.util.JmsProperties;
 import org.apache.nifi.processors.standard.util.WrappedMessageProducer;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.MockProcessSession;
+import org.apache.nifi.util.StandardProcessorTestRunner;
 import org.apache.nifi.util.TestRunner;
 import org.apache.nifi.util.TestRunners;
 import org.apache.nifi.web.Revision;
+import org.junit.Test;
 
 public class TestGetJMSQueue {
 
-@org.junit.Ignore
+@Test
 public void testSendTextToQueue() throws Exception {
-    final TestRunner runner = TestRunners.newTestRunner(GetJMSQueue.class);
+    GetJMSQueue getJmsQueue = new GetJMSQueue();
+    StandardProcessorTestRunner runner = (StandardProcessorTestRunner) TestRunners.newTestRunner(getJmsQueue);
     runner.setProperty(JmsProperties.JMS_PROVIDER, JmsProperties.ACTIVEMQ_PROVIDER);
-    runner.setProperty(JmsProperties.URL, "tcp://localhost:61616");
+    runner.setProperty(JmsProperties.URL, "vm://localhost?broker.persistent=false");
     runner.setProperty(JmsProperties.DESTINATION_TYPE, JmsProperties.DESTINATION_TYPE_QUEUE);
     runner.setProperty(JmsProperties.DESTINATION_NAME, "queue.testing");
     runner.setProperty(JmsProperties.ACKNOWLEDGEMENT_MODE, JmsProperties.ACK_MODE_AUTO);
+
+    MockProcessSession pSession = (MockProcessSession) runner.getProcessSessionFactory().createSession();
     WrappedMessageProducer wrappedProducer = JmsFactory.createMessageProducer(runner.getProcessContext(), true);
     final Session jmsSession = wrappedProducer.getSession();
     final MessageProducer producer = wrappedProducer.getProducer();
-
     final Message message = jmsSession.createTextMessage("Hello World");

     producer.send(message);
     jmsSession.commit();
+
+    getJmsQueue.onTrigger(runner.getProcessContext(), pSession);
+
+    List<MockFlowFile> flowFiles = pSession.getFlowFilesForRelationship(new Relationship.Builder().name("success").build());
+
+    assertTrue(flowFiles.size() == 1);
+    MockFlowFile successFlowFile = flowFiles.get(0);
+    String receivedMessage = new String(runner.getContentAsByteArray(successFlowFile));
+    assertEquals("Hello World", receivedMessage);
+    assertEquals("queue.testing", successFlowFile.getAttribute("jms.JMSDestination"));
     producer.close();
     jmsSession.close();
 }

-@org.junit.Ignore
+@Test
 public void testSendBytesToQueue() throws Exception {
-    final TestRunner runner = TestRunners.newTestRunner(GetJMSQueue.class);
+    GetJMSQueue getJmsQueue = new GetJMSQueue();
+    StandardProcessorTestRunner runner = (StandardProcessorTestRunner) TestRunners.newTestRunner(getJmsQueue);
     runner.setProperty(JmsProperties.JMS_PROVIDER, JmsProperties.ACTIVEMQ_PROVIDER);
-    runner.setProperty(JmsProperties.URL, "tcp://localhost:61616");
+    runner.setProperty(JmsProperties.URL, "vm://localhost?broker.persistent=false");
     runner.setProperty(JmsProperties.DESTINATION_TYPE, JmsProperties.DESTINATION_TYPE_QUEUE);
     runner.setProperty(JmsProperties.DESTINATION_NAME, "queue.testing");
     runner.setProperty(JmsProperties.ACKNOWLEDGEMENT_MODE, JmsProperties.ACK_MODE_AUTO);
     WrappedMessageProducer wrappedProducer = JmsFactory.createMessageProducer(runner.getProcessContext(), true);
     final Session jmsSession = wrappedProducer.getSession();
     final MessageProducer producer = wrappedProducer.getProducer();
-
+    MockProcessSession pSession = (MockProcessSession) runner.getProcessSessionFactory().createSession();
     final BytesMessage message = jmsSession.createBytesMessage();
     message.writeBytes("Hello Bytes".getBytes());

     producer.send(message);
     jmsSession.commit();
+
--- End diff --

Mark, please take another look; all of your concerns have been addressed.



[GitHub] nifi pull request: NIFI-1024 NIFI-1062 Fixed PutHDFS processor to ...

2015-11-10 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/119#issuecomment-155618292
  
@bbende - all done, please review




[GitHub] nifi pull request: NIFI-1124 NIFI-1062 Fixed PutHDFS processor to ...

2015-11-11 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/119#issuecomment-155764650
  
@trkurc addressed, thanks for looking!





[GitHub] nifi pull request: NIFI-1061 fixed deadlock caused by DBCPConnecti...

2015-11-11 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/117#issuecomment-155786564
  
@joewitt reverted the changes to the getScheduleState() method and addressed the whitespace violations. Good to go.




[GitHub] nifi pull request: NIFI-1143 Fixed race condition which caused int...

2015-11-11 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/120

NIFI-1143 Fixed race condition which caused intermittent failures

Fixed the order of service state check in PropertyDescriptor
Encapsulated the check into private method for readability
Modified and documented test to validate correct behavior.
For more details please see comment in 
https://issues.apache.org/jira/browse/NIFI-1143

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1143

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/120.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #120


commit 5baafa156a58eda7fec7366cc5116da9a2a5a9ec
Author: Oleg Zhurakousky 
Date:   2015-11-11T19:06:08Z

NIFI-1143 Fixed race condition which caused intermittent failures
Fixed the order of service state check in PropertyDescriptor
Encapsulated the check into private method for readability
Modified and documented test to validate correct behavior.
For more details please see comment in 
https://issues.apache.org/jira/browse/NIFI-1143
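The kind of ordering bug fixed here (a service state check that can race with a concurrent state change) can be sketched generically. The class below is hypothetical, not NiFi's actual PropertyDescriptor code; it only illustrates why taking one snapshot of the state inside a single encapsulated method removes the check-then-act window:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of a check-then-act race fix. Reading the state twice
// ("is it enabled?" ... "now use it") lets another thread flip the state in
// between. Folding the check into one method that branches on a single
// snapshot makes the ordering explicit and the result internally consistent.
public class ServiceStateCheck {

    public enum State { DISABLED, ENABLING, ENABLED }

    private final AtomicReference<State> state = new AtomicReference<>(State.DISABLED);

    public void enable()  { state.set(State.ENABLED); }
    public void disable() { state.set(State.DISABLED); }

    // Encapsulated check: operates only on the snapshot it is handed.
    private boolean isUsable(State snapshot) {
        return snapshot == State.ENABLED;
    }

    public String describe() {
        final State snapshot = state.get();   // single read of shared state
        return isUsable(snapshot) ? "usable" : "not usable (" + snapshot + ")";
    }

    public static void main(String[] args) {
        ServiceStateCheck svc = new ServiceStateCheck();
        System.out.println(svc.describe());   // not usable (DISABLED)
        svc.enable();
        System.out.println(svc.describe());   // usable
    }
}
```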






[GitHub] nifi pull request: NIFI-1107 - Create new PutS3ObjectMultipart pro...

2015-11-12 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/121#discussion_r44713764
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/PutS3ObjectMultipart.java
 ---
@@ -0,0 +1,550 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.AmazonS3Client;
+import com.amazonaws.services.s3.model.AccessControlList;
+import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
+import com.amazonaws.services.s3.model.CompleteMultipartUploadResult;
+import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
+import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
+import com.amazonaws.services.s3.model.ObjectMetadata;
+import com.amazonaws.services.s3.model.PartETag;
+import com.amazonaws.services.s3.model.StorageClass;
+import com.amazonaws.services.s3.model.UploadPartRequest;
+import com.amazonaws.services.s3.model.UploadPartResult;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.BufferedInputStream;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.concurrent.TimeUnit;
+
+@SeeAlso({FetchS3Object.class, PutS3Object.class, DeleteS3Object.class})
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"Amazon", "S3", "AWS", "Archive", "Put", "Multi", "Multipart", "Upload"})
+@CapabilityDescription("Puts FlowFiles to an Amazon S3 Bucket using the MultipartUpload API method.  " +
+    "This upload consists of three steps 1) initiate upload, 2) upload the parts, and 3) complete the upload.\n" +
+    "Since the intent for this processor involves large files, the processor saves state locally after each step " +
+    "so that an upload can be resumed without having to restart from the beginning of the file.\n" +
+    "The AWS libraries default to using standard AWS regions but the 'Endpoint Override URL' allows this to be " +
+    "overridden.")
+@DynamicProperty(name = "The name of a User-Defined Metadata field to add to the S3 Object",
+    value = "The value of a User-Defined Metadata field to add to the S3 Object",
+    description = "Allows user-defined metadata to be added t
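The "saves state locally after each step" behavior described in the capability text can be sketched with plain JDK classes. Everything below is a hypothetical illustration, not the processor's actual state format: it only shows recording an upload id and the ETag of each completed part in a local properties file so that a resumed run can skip parts already uploaded.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Hypothetical sketch: persist multipart-upload progress (uploadId plus the
// ETag of each completed part) after every step, so a restart can resume from
// the last completed part instead of re-uploading the whole file.
public class MultipartStateStore {

    private final Path stateFile;
    private final Properties props = new Properties();

    public MultipartStateStore(Path stateFile) throws IOException {
        this.stateFile = stateFile;
        if (Files.exists(stateFile)) {
            try (InputStream in = Files.newInputStream(stateFile)) {
                props.load(in);              // resume: reload prior progress
            }
        }
    }

    public void recordInitiated(String uploadId) throws IOException {
        props.setProperty("uploadId", uploadId);
        flush();
    }

    public void recordPart(int partNumber, String eTag) throws IOException {
        props.setProperty("part." + partNumber, eTag);
        flush();                             // write state after every step
    }

    public boolean isPartDone(int partNumber) {
        return props.containsKey("part." + partNumber);
    }

    private void flush() throws IOException {
        try (OutputStream out = Files.newOutputStream(stateFile)) {
            props.store(out, "multipart upload progress");
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("multipart", ".properties");
        MultipartStateStore first = new MultipartStateStore(f);
        first.recordInitiated("upload-123"); // hypothetical upload id
        first.recordPart(1, "etag-1");
        // Simulated restart: a new instance reloads progress from disk.
        MultipartStateStore resumed = new MultipartStateStore(f);
        System.out.println(resumed.isPartDone(1));  // true
        System.out.println(resumed.isPartDone(2));  // false
    }
}
```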

[GitHub] nifi pull request: NIFI-1107 - Create new PutS3ObjectMultipart pro...

2015-11-12 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/121#discussion_r44713942
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/PutS3ObjectMultipart.java
 ---

[GitHub] nifi pull request: NIFI-1107 - Create new PutS3ObjectMultipart pro...

2015-11-12 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/121#discussion_r44714307
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/PutS3ObjectMultipart.java
 ---

[GitHub] nifi pull request: NIFI-1107 - Create new PutS3ObjectMultipart pro...

2015-11-12 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/121#discussion_r44715683
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/PutS3ObjectMultipart.java
 ---

[GitHub] nifi pull request: NIFI-1107 - Create new PutS3ObjectMultipart pro...

2015-11-12 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/121#discussion_r44716157
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/PutS3ObjectMultipart.java
 ---

[GitHub] nifi pull request: NIFI-748 Fixed logic around handling partial qu...

2015-11-13 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/123

NIFI-748 Fixed logic around handling partial query results from provenance repository

- Ensured that failures derived from correlating a Document to its actual 
provenance event do not fail the entire query, but instead produce partial 
results with warning messages
- Refactored the DocsReader.read() operation.
- Added a test to validate the two conditions where such failures could occur
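The partial-results pattern described above can be sketched like this (hypothetical names and simplified types, not the actual DocsReader code): a per-record read failure is logged as a warning and skipped, rather than failing the whole query.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

public class PartialResultsSketch {
    private static final Logger LOG = Logger.getLogger("PartialResultsSketch");

    // Read every document we can; a single unreadable document no longer
    // fails the whole query -- it is logged and skipped instead.
    static List<String> readAll(List<String> docs) {
        List<String> matching = new ArrayList<>();
        for (String doc : docs) {
            try {
                matching.add(readOne(doc));
            } catch (RuntimeException e) {
                LOG.warning("Failed to read Provenance Event for '" + doc
                        + "'. The event file may be missing or corrupted");
            }
        }
        return matching; // partial results, with warnings for the gaps
    }

    // Stand-in for the real record lookup; "bad" simulates a corrupted file.
    static String readOne(String doc) {
        if (doc.startsWith("bad")) {
            throw new RuntimeException("missing or corrupted event file");
        }
        return "event:" + doc;
    }

    public static void main(String[] args) {
        System.out.println(readAll(List.of("a", "bad1", "b"))); // [event:a, event:b]
    }
}
```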

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-748

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/123.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #123


commit a4d93c62c88f594ef0cd3739a0536769f1ab9b26
Author: Oleg Zhurakousky 
Date:   2015-11-13T19:08:39Z

NIFI-748 Fixed logic around handling partial query results from provenance repository
- Ensured that failures derived from correlating a Document to its actual 
provenance event do not fail the entire query, but instead produce partial 
results with warning messages
- Refactored the DocsReader.read() operation.
- Added a test to validate the two conditions where such failures could occur




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request: NIFI-748 Fixed logic around handling partial qu...

2015-11-15 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/123#discussion_r44872287
  
--- Diff: 
nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/lucene/DocsReader.java
 ---
@@ -47,9 +46,6 @@
 public class DocsReader {
     private final Logger logger = LoggerFactory.getLogger(DocsReader.class);
 
-public DocsReader(final List<File> storageDirectories) {
-}
-
--- End diff --

Considering that this class does not appear to be intended as a public API, I 
still believe it is the right thing to do, since the constructor argument isn't 
used.
In fact, we need to look at whether we can make this class package-private.




[GitHub] nifi pull request: NIFI-748 Fixed logic around handling partial qu...

2015-11-15 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/123#discussion_r44872400
  
--- Diff: 
nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/lucene/DocsReader.java
 ---
@@ -100,101 +96,61 @@ private ProvenanceEventRecord getRecord(final 
Document d, final RecordReader rea
 }
 }
 
-if ( record == null ) {
-throw new IOException("Failed to find Provenance Event " + d);
-} else {
-return record;
+if (record == null) {
+logger.warn("Failed to read Provenance Event for '" + d + "'. 
The event file may be missing or corrupted");
 }
-}
 
+return record;
+}
 
 public Set<ProvenanceEventRecord> read(final List<Document> docs, final Collection<Path> allProvenanceLogFiles,
-final AtomicInteger retrievalCount, final int maxResults, final int maxAttributeChars) throws IOException {
-if (retrievalCount.get() >= maxResults) {
-return Collections.emptySet();
-}
-
-LuceneUtil.sortDocsForRetrieval(docs);
-
-RecordReader reader = null;
-String lastStorageFilename = null;
-final Set<ProvenanceEventRecord> matchingRecords = new LinkedHashSet<>();
+final AtomicInteger retrievalCount, final int maxResults, final int maxAttributeChars) throws IOException {
 
 final long start = System.nanoTime();
-int logFileCount = 0;
-
-final Set<String> storageFilesToSkip = new HashSet<>();
-int eventsReadThisFile = 0;
 
-try {
-for (final Document d : docs) {
-final String storageFilename = 
d.getField(FieldNames.STORAGE_FILENAME).stringValue();
-if ( storageFilesToSkip.contains(storageFilename) ) {
-continue;
-}
-
-try {
-if (reader != null && 
storageFilename.equals(lastStorageFilename)) {
-matchingRecords.add(getRecord(d, reader));
-eventsReadThisFile++;
-
-if ( retrievalCount.incrementAndGet() >= 
maxResults ) {
-break;
-}
-} else {
-logger.debug("Opening log file {}", 
storageFilename);
-
-logFileCount++;
-if (reader != null) {
-reader.close();
-}
+Set<ProvenanceEventRecord> matchingRecords = new LinkedHashSet<>();
+if (retrievalCount.get() >= maxResults) {
+return matchingRecords;
--- End diff --

I think I'd agree. It just felt a bit cleaner.




[GitHub] nifi pull request: NIFI-748 Fixed logic around handling partial qu...

2015-11-15 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/123#discussion_r44872506
  
--- Diff: 
nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/lucene/DocsReader.java
 ---
@@ -100,101 +96,61 @@ private ProvenanceEventRecord getRecord(final 
Document d, final RecordReader rea
 }
 }
 
-if ( record == null ) {
-throw new IOException("Failed to find Provenance Event " + d);
-} else {
-return record;
+if (record == null) {
+logger.warn("Failed to read Provenance Event for '" + d + "'. 
The event file may be missing or corrupted");
 }
-}
 
+return record;
+}
 
 public Set<ProvenanceEventRecord> read(final List<Document> docs, final Collection<Path> allProvenanceLogFiles,
-final AtomicInteger retrievalCount, final int maxResults, final int maxAttributeChars) throws IOException {
-if (retrievalCount.get() >= maxResults) {
-return Collections.emptySet();
-}
-
-LuceneUtil.sortDocsForRetrieval(docs);
-
-RecordReader reader = null;
-String lastStorageFilename = null;
-final Set<ProvenanceEventRecord> matchingRecords = new LinkedHashSet<>();
+final AtomicInteger retrievalCount, final int maxResults, final int maxAttributeChars) throws IOException {
 
 final long start = System.nanoTime();
-int logFileCount = 0;
-
-final Set<String> storageFilesToSkip = new HashSet<>();
-int eventsReadThisFile = 0;
 
-try {
-for (final Document d : docs) {
-final String storageFilename = 
d.getField(FieldNames.STORAGE_FILENAME).stringValue();
-if ( storageFilesToSkip.contains(storageFilename) ) {
-continue;
-}
-
-try {
-if (reader != null && 
storageFilename.equals(lastStorageFilename)) {
-matchingRecords.add(getRecord(d, reader));
-eventsReadThisFile++;
-
-if ( retrievalCount.incrementAndGet() >= 
maxResults ) {
-break;
-}
-} else {
-logger.debug("Opening log file {}", 
storageFilename);
-
-logFileCount++;
-if (reader != null) {
-reader.close();
-}
+Set<ProvenanceEventRecord> matchingRecords = new LinkedHashSet<>();
+if (retrievalCount.get() >= maxResults) {
+return matchingRecords;
+}
 
-final List<File> potentialFiles = LuceneUtil.getProvenanceLogFiles(storageFilename, allProvenanceLogFiles);
-if (potentialFiles.isEmpty()) {
-logger.warn("Could not find Provenance Log 
File with basename {} in the "
-+ "Provenance Repository; assuming 
file has expired and continuing without it", storageFilename);
-storageFilesToSkip.add(storageFilename);
-continue;
-}
+Map<String, List<Document>> byStorageNameDocGroups = LuceneUtil.groupDocsByStorageFileName(docs);
 
-if (potentialFiles.size() > 1) {
-throw new FileNotFoundException("Found 
multiple Provenance Log Files with basename " +
-storageFilename + " in the Provenance 
Repository");
-}
+int eventsReadThisFile = 0;
+int logFileCount = 0;
 
-for (final File file : potentialFiles) {
-try {
-if (reader != null) {
-logger.debug("Read {} records from 
previous file", eventsReadThisFile);
-}
-
-reader = 
RecordReaders.newRecordReader(file, allProvenanceLogFiles, maxAttributeChars);
-matchingRecords.add(getRecord(d, reader));
-eventsReadThisFile = 1;
-
-if ( retrievalCount.incrementAndGet() >= 
maxResults ) {
-break;
-}
-} catch (final IOException e) {
  

[GitHub] nifi pull request: NIFI-748 Fixed logic around handling partial qu...

2015-11-15 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/123#discussion_r44872756
  
--- Diff: 
nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/lucene/DocsReader.java
 ---
@@ -100,101 +96,61 @@ private ProvenanceEventRecord getRecord(final 
Document d, final RecordReader rea
 }
 }
 
-if ( record == null ) {
-throw new IOException("Failed to find Provenance Event " + d);
-} else {
-return record;
+if (record == null) {
+logger.warn("Failed to read Provenance Event for '" + d + "'. 
The event file may be missing or corrupted");
 }
-}
 
+return record;
+}
 
 public Set<ProvenanceEventRecord> read(final List<Document> docs, final Collection<Path> allProvenanceLogFiles,
-final AtomicInteger retrievalCount, final int maxResults, final int maxAttributeChars) throws IOException {
-if (retrievalCount.get() >= maxResults) {
-return Collections.emptySet();
-}
-
-LuceneUtil.sortDocsForRetrieval(docs);
--- End diff --

Hmm, I don't think so. Based on my observation, it was done to group files 
based on storage file name, and sorting had that side effect, so in a way the 
additional sorting is a bit of overkill here; hence the refactoring and the new 
utility operation.
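The grouping idea discussed above can be sketched as follows. This is a hypothetical stand-in for LuceneUtil.groupDocsByStorageFileName, using plain strings instead of Lucene Documents: group the documents by their storage file name so each event file only needs to be opened once, without sorting the whole list.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupByStorageFileSketch {
    // Each "doc" is represented here as a {storageFilename, eventId} pair.
    static Map<String, List<String>> groupByStorageFile(List<String[]> docs) {
        Map<String, List<String>> groups = new LinkedHashMap<>();
        for (String[] doc : docs) {
            groups.computeIfAbsent(doc[0], k -> new ArrayList<>()).add(doc[1]);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<String[]> docs = List.of(
                new String[]{"file-1.prov", "e1"},
                new String[]{"file-2.prov", "e2"},
                new String[]{"file-1.prov", "e3"});
        // e1 and e3 end up under file-1.prov, so that file is opened only once
        System.out.println(groupByStorageFile(docs));
    }
}
```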




[GitHub] nifi pull request: NIFI-748 Fixed logic around handling partial qu...

2015-11-15 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/123#discussion_r44873018
  
--- Diff: 
nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/lucene/DocsReader.java
 ---
@@ -100,101 +96,61 @@ private ProvenanceEventRecord getRecord(final 
Document d, final RecordReader rea
 }
 }
 
-if ( record == null ) {
-throw new IOException("Failed to find Provenance Event " + d);
-} else {
-return record;
+if (record == null) {
+logger.warn("Failed to read Provenance Event for '" + d + "'. 
The event file may be missing or corrupted");
 }
-}
 
+return record;
+}
 
 public Set<ProvenanceEventRecord> read(final List<Document> docs, final Collection<Path> allProvenanceLogFiles,
-final AtomicInteger retrievalCount, final int maxResults, final int maxAttributeChars) throws IOException {
-if (retrievalCount.get() >= maxResults) {
-return Collections.emptySet();
-}
-
-LuceneUtil.sortDocsForRetrieval(docs);
--- End diff --

Tony, I'll give it another look




[GitHub] nifi pull request: NIFI-748 Fixed logic around handling partial qu...

2015-11-15 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/123#discussion_r44873446
  
--- Diff: 
nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/lucene/DocsReader.java
 ---
@@ -47,9 +46,6 @@
 public class DocsReader {
     private final Logger logger = LoggerFactory.getLogger(DocsReader.class);
 
-public DocsReader(final List<File> storageDirectories) {
-}
-
--- End diff --

I agree that deprecation is the right thing to do in general cases like this. 
But given that the project is pre-1.0 and the obvious intent is for this class 
not to be a public API, I felt removing it would be OK. I still do, but I will 
honor the majority opinion and deprecate.




[GitHub] nifi pull request: NIFI-748 Fixed logic around handling partial qu...

2015-11-16 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/123#issuecomment-157029989
  
@trkurc @joewitt @apiri 
Guys, please see the latest commit. I didn't squash it, so it's easier to read 
and see what's been addressed. In summary:
1. Since, based on the latest comment from Joe, it appears that we all agree 
that DocsReader is not really public, I kept the dead constructor out and also 
made DocsReader package-private.
2. Based on Tony's point, I added the Document sorting logic back. At least it 
will ensure that previous behavior is maintained. 
See the commit message for more details.




[GitHub] nifi pull request: NIFI-1164 decreased the chances of race conditi...

2015-11-16 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/126

NIFI-1164 decreased the chances of race condition

Removed the checks for 'if (getState() != ControllerServiceState.DISABLED)' 
from StandardControllerServiceNode.verifyCanEnable(..) based on the discussion 
we had in NIFI-1143, where an 'enablable' service is one that is not ENABLED or 
ENABLING.
On top of that, the actual state check is redundant, since it is going to be 
checked again when isValid() is invoked.
Cleaned up the code in 
StandardControllerServiceProvider.enableReferencingServices(..) since:
1. It had the same check-ordering issue on service state between ENABLING 
and ENABLED as was described in NIFI-1143.
2. It had a redundant recursiveReferences computation, which was removed.
3. There were two loops iterating over the same collection, which were merged 
into one.
4. It had a redundant state check in the loop, which was removed since the 
state is checked again as part of 'verifyCanEnable'.
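The "enablable" rule referenced above can be sketched like this. The enum and method names mirror the discussion but are illustrative, not the NiFi source: a service may be enabled only when it is neither ENABLED nor ENABLING.

```java
public class EnablableSketch {
    enum ControllerServiceState { DISABLED, DISABLING, ENABLING, ENABLED }

    // An "enablable" service is one that is not ENABLED or ENABLING;
    // note that DISABLING services also pass this check.
    static boolean isEnablable(ControllerServiceState state) {
        return state != ControllerServiceState.ENABLED
                && state != ControllerServiceState.ENABLING;
    }

    public static void main(String[] args) {
        System.out.println(isEnablable(ControllerServiceState.DISABLED)); // true
        System.out.println(isEnablable(ControllerServiceState.ENABLING)); // false
    }
}
```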

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1164

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/126.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #126


commit 54c5c0397c6d45d34c7c75e5fe44984dcb765ea4
Author: Oleg Zhurakousky 
Date:   2015-11-16T18:18:43Z

NIFI-1164 decreased the chances of race condition
Removed the checks for 'if (getState() != ControllerServiceState.DISABLED)' 
from StandardControllerServiceNode.verifyCanEnable(..) based on the discussion 
we had in NIFI-1143, where an 'enablable' service is one that is not ENABLED or 
ENABLING.
On top of that, the actual state check is redundant, since it is going to be 
checked again when isValid() is invoked.
Cleaned up the code in 
StandardControllerServiceProvider.enableReferencingServices(..) since:
1. It had the same check-ordering issue on service state between ENABLING 
and ENABLED as was described in NIFI-1143.
2. It had a redundant recursiveReferences computation, which was removed.
3. There were two loops iterating over the same collection, which were merged 
into one.
4. It had a redundant state check in the loop, which was removed since the 
state is checked again as part of 'verifyCanEnable'.






[GitHub] nifi pull request: NIFI-1164 decreased the chances of race conditi...

2015-11-16 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/126#discussion_r44976221
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/service/StandardControllerServiceProvider.java
 ---
@@ -545,23 +546,16 @@ public void enableReferencingServices(final 
ControllerServiceNode serviceNode) {
 }
 
 private void enableReferencingServices(final ControllerServiceNode 
serviceNode, final List recursiveReferences) {
-if (serviceNode.getState() != ControllerServiceState.ENABLED && 
serviceNode.getState() != ControllerServiceState.ENABLING) {
+if (serviceNode.getState() != ControllerServiceState.ENABLING && 
serviceNode.getState() != ControllerServiceState.ENABLED) {
 serviceNode.verifyCanEnable(new 
HashSet<>(recursiveReferences));
 }
 
 final Set ifEnabled = new HashSet<>();
-final List toEnable = 
findRecursiveReferences(serviceNode, ControllerServiceNode.class);
-for (final ControllerServiceNode nodeToEnable : toEnable) {
+for (final ControllerServiceNode nodeToEnable : 
recursiveReferences) {
 final ControllerServiceState state = nodeToEnable.getState();
-if (state != ControllerServiceState.ENABLED && state != 
ControllerServiceState.ENABLING) {
+if (state != ControllerServiceState.ENABLING && state != 
ControllerServiceState.ENABLED) {
 nodeToEnable.verifyCanEnable(ifEnabled);
 ifEnabled.add(nodeToEnable);
-}
-}
-
-for (final ControllerServiceNode nodeToEnable : toEnable) {
--- End diff --

I see your point; let me put it back, and I'll document it in the code as well.




[GitHub] nifi pull request: NIFI-1164 decreased the chances of race conditi...

2015-11-16 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/126#issuecomment-157173562
  
Pulling back, see comments in JIRA




[GitHub] nifi pull request: NIFI-1164 decreased the chances of race conditi...

2015-11-16 Thread olegz
Github user olegz closed the pull request at:

https://github.com/apache/nifi/pull/126




[GitHub] nifi pull request: NIFI-1156: HTML Parsing Processors Bundle

2015-11-16 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/124#issuecomment-157221978
  
General question, primarily for the NiFi leadership team: JSoup is under the 
MIT License - http://jsoup.org/license
Are there any conflicts?




[GitHub] nifi pull request: NIFI-1156: HTML Parsing Processors Bundle

2015-11-16 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/124#discussion_r45008293
  
--- Diff: nifi-nar-bundles/nifi-html-bundle/nifi-html-processors/pom.xml ---
@@ -0,0 +1,59 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-html-bundle</artifactId>
+        <version>0.4.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>nifi-html-processors</artifactId>
+    <description>Support for parsing HTML documents</description>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.jsoup</groupId>
+            <artifactId>jsoup</artifactId>
+            <version>1.8.3</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-processor-utils</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-simple</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <version>4.11</version>
--- End diff --

Does the main POM declare a version of JUnit 4.12? Can we use that one?




[GitHub] nifi pull request: NIFI-1074 added initial support for IDE integra...

2015-11-18 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/108#issuecomment-157839054
  
After discussing it with the team, we agreed (at least for now) to simply keep 
the reference to my GitHub repo - https://github.com/olegz/nifi-ide-integration - 
and it is already reflected in the Contributor's wiki.




[GitHub] nifi pull request: NIFI-1074 added initial support for IDE integra...

2015-11-18 Thread olegz
Github user olegz closed the pull request at:

https://github.com/apache/nifi/pull/108




[GitHub] nifi pull request: NIFI-1192 added support for Dynamic Properties

2015-11-19 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/129

NIFI-1192 added support for Dynamic Properties



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1192

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/129.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #129


commit a5fecf632239a30ec822c5bebd2eeca7f549ac9e
Author: Oleg Zhurakousky 
Date:   2015-11-18T22:06:11Z

NIFI-1192 added support for Dynamic Properties






[GitHub] nifi pull request: NIFI-1192 added support for Dynamic Properties ...

2015-11-20 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/129#issuecomment-158542457
  
@trkurc @joewitt @markap14 
Guys, please see the updated commit. This one has a bit more work, as it 
includes some polishing since we discovered a few more issues, one being that 
there was a non-existent property, etc. See the commit message for more details.

Tony, in relation to your last message, they will not override any 
previously set property, since NiFi will not allow it. Basically, one of the 
biggest changes that went in with this commit is setting _name_ and 
_displayName_. So when one attempts to set a property (e.g., 'client.id') as a 
dynamic property, NiFi will pop up with the message 'A property with this 
name already exists.'. So I think we're good, but let me know if you still 
see an issue.
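The name-collision behavior described above can be sketched like this (illustrative names, not the NiFi validation code): if the dynamic property's name matches an existing descriptor's name, the attempt is rejected with the message quoted above.

```java
import java.util.Set;

public class PropertyNameCheckSketch {
    // Returns an error message when the candidate dynamic-property name
    // collides with an existing descriptor name, or null when it is allowed.
    static String validateDynamicName(Set<String> existingNames, String candidate) {
        if (existingNames.contains(candidate)) {
            return "A property with this name already exists.";
        }
        return null; // null -> the dynamic property is acceptable
    }

    public static void main(String[] args) {
        Set<String> existing = Set.of("client.id", "group.id");
        System.out.println(validateDynamicName(existing, "client.id"));
        System.out.println(validateDynamicName(existing, "socket.timeout.ms"));
    }
}
```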




[GitHub] nifi pull request: NIFI-1192 added support for Dynamic Properties ...

2015-11-20 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/129#issuecomment-158571455
  
After discussing it with @joewitt, I am pulling this back, as it breaks 
backward compatibility.




[GitHub] nifi pull request: NIFI-1192 added support for Dynamic Properties ...

2015-11-20 Thread olegz
Github user olegz closed the pull request at:

https://github.com/apache/nifi/pull/129




[GitHub] nifi pull request: NIFI-1192 added support for dynamic properties ...

2015-11-23 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/131

NIFI-1192 added support for dynamic properties to GetKafka

Because the current component uses artificial names for properties set via the 
UI and then maps those properties to the actual names used by Kafka, we cannot 
rely on the NiFi UI to display an error if a user attempts to set a dynamic 
property that will eventually map to the same Kafka property. So I've decided 
that any dynamic property will simply override an existing property, with a 
WARNING message displayed. This is actually consistent with how Kafka does it, 
and the overrides are displayed in the console. Updated the relevant annotation 
description.
It is also worth mentioning that the current code was using an old property 
from Kafka 0.7 ("zk.connectiontimeout.ms") which is no longer present in 
Kafka 0.8 (WARN Timer-Driven Process Thread-7 utils.VerifiableProperties:83 - 
Property zk.connectiontimeout.ms is not valid). The add/override strategy 
provides more flexibility when dealing with Kafka's volatile configuration 
until things settle down and we can get some sensible defaults in place.

While doing this, I addressed the following issues that were discovered during 
modification and testing:
ISSUE: When GetKafka started and there were no messages in the Kafka topic, the 
onTrigger(..) method would block, because Kafka's ConsumerIterator.hasNext() 
blocks. When an attempt was made to stop GetKafka, it would stop successfully 
due to the interrupt; however, in the UI it would appear as an ERROR because 
the InterruptException was not handled.
RESOLUTION: After discussing it with @markap14, the general desire is to let 
the task exit as quickly as possible; the whole thread-maintenance logic was 
there initially because there was no way to tell the Kafka consumer to return 
immediately when there are no events. In this patch we now use Kafka's 
'consumer.timeout.ms' property and set its value to 1 millisecond (the default 
is -1 - block indefinitely). This ensures that tasks that attempted to read an 
empty topic will exit immediately, just to be rescheduled by NiFi based on user 
configuration.
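The configuration change in the RESOLUTION above can be sketched as a plain Properties object (the connection values here are illustrative placeholders; the property keys are the standard Kafka 0.8 high-level-consumer names):

```java
import java.util.Properties;

public class ConsumerTimeoutSketch {
    // Build a consumer configuration whose reads return (via
    // ConsumerTimeoutException) instead of blocking forever on an empty topic.
    static Properties consumerConfig(String zkConnect, String groupId) {
        Properties props = new Properties();
        props.setProperty("zookeeper.connect", zkConnect); // placeholder value
        props.setProperty("group.id", groupId);            // placeholder value
        props.setProperty("consumer.timeout.ms", "1");     // default -1 blocks indefinitely
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerConfig("localhost:2181", "nifi");
        System.out.println(props.getProperty("consumer.timeout.ms")); // 1
    }
}
```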

ISSUE:  Kafka would not release FlowFile with events if it didn’t have 
enough to complete the batch since it would block waiting for more messages 
(based on the blocking issue described above).
RESOLUTION: The invocation of hasNext() results in Kafka’s 
ConsumerTimeoutException which is handled in the catch block where the FlowFile 
with partial batch will be released to success. Not sure if we need to put a 
WARN message. In fact in my opinion we should not as it may create unnecessary 
confusion.

ISSUE: When configuring a consumer for a topic and specifying multiple 
concurrent consumers in 'topicCountMap' based on 
'context.getMaxConcurrentTasks()', each consumer binds to a topic 
partition. If you have fewer partitions than the value returned by 
'context.getMaxConcurrentTasks()', you essentially allocate Kafka 
resources that will never get a chance to receive a single message (see more 
here https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example).
RESOLUTION: Logic was added to determine the number of partitions for a 
topic; when the 'context.getMaxConcurrentTasks()' value is greater than the 
number of partitions, the partition count is used when creating 
'topicCountMap' and a WARNING message is displayed (see code). 
Unfortunately we can't do anything about the extra tasks themselves, but 
given the current state of the code they will exit immediately, only to be 
rescheduled, and the process repeats. NOTE: This is not ideal, as it keeps 
rescheduling tasks that will never have a chance to do anything, but at least 
the user can fix it on their side after reading the warning message.
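The capping logic itself is small; a minimal sketch (names are illustrative, and the warning here goes to stdout rather than the NiFi logger):

```java
public class StreamCountSketch {

    // Caps the number of consumer streams at the topic's partition count,
    // since any stream beyond that would never receive a message.
    public static int streamCount(int maxConcurrentTasks, int partitionCount) {
        if (partitionCount < maxConcurrentTasks) {
            System.out.println("WARN: " + maxConcurrentTasks
                    + " concurrent tasks configured, but the topic has only "
                    + partitionCount + " partitions; using " + partitionCount
                    + " consumer streams");
            return partitionCount;
        }
        return maxConcurrentTasks;
    }
}
```

The returned value would then be the count placed into 'topicCountMap' for the topic.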

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1192B

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/131.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #131


commit f9d1b2811a08c372baf660a8a5e8d0f73e1a23a2
Author: Oleg Zhurakousky 
Date:   2015-11-23T20:15:03Z

NIFI-1192 added support for dynamic properties to GetKafka
Because the current component uses artificial names for properties set via 
the UI and then maps those properties to the actual names used by Kafka, we 
cannot rely on the NiFi UI to display an error if a user sets a dynamic 
property that eventually maps to the same Kafka property. So I've decided 
that any dynamic property will simply override an existing property, with a 
WARNING message displayed.
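The override-with-warning behavior described above can be sketched as a plain merge (class and method names are hypothetical; the warning is printed rather than routed through the component logger):

```java
import java.util.Map;
import java.util.Properties;

public class DynamicPropsSketch {

    // Applies user-supplied dynamic properties on top of the base Kafka
    // properties; on a collision the dynamic value wins, with a WARNING.
    public static Properties merge(Properties base, Map<String, String> dynamic) {
        Properties result = new Properties();
        result.putAll(base);
        for (Map.Entry<String, String> e : dynamic.entrySet()) {
            if (result.containsKey(e.getKey())) {
                System.out.println("WARN: dynamic property '" + e.getKey()
                        + "' overrides existing value '"
                        + result.getProperty(e.getKey()) + "'");
            }
            result.setProperty(e.getKey(), e.getValue());
        }
        return result;
    }
}
```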

[GitHub] nifi pull request: NIFI-1192 added support for dynamic properties ...

2015-11-23 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/131#issuecomment-159052094
  
@trkurc @markap14 @joewitt 
Guys, this one is strictly for review as I am still working on adding the 
same dynamic properties logic for PutKafka. But since it contains somewhat 
significant changes, it would be worth taking a look. Commit message has all 
the details. 
Cheers


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request: NIFI-1192 added support for dynamic properties ...

2015-11-23 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/131#issuecomment-159129775
  
Thanks @markap14! Indeed it's a bit rough, and I still have to polish it for 
the style check, comments, etc. As I mentioned in the commit message, this 
was primarily for an initial review, to make sure someone else can take a 
quick look and confirm I didn't go off the rails here. 


---


[GitHub] nifi pull request: NIFI-1192 added support for dynamic properties ...

2015-11-24 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/131#issuecomment-159345927
  
@markap14 @trkurc The PR comments have been addressed. I was hoping to get 
an embedded Kafka server in with this commit as well (for testing), but so 
far it doesn't appear to be very stable. We can do it in the next release.


---


[GitHub] nifi pull request: added embedded Kafka server and tests

2015-11-30 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/134

added embedded Kafka server and tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1219

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/134.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #134


commit e8751b48a12e9772052030926cd090da86002d9e
Author: Oleg Zhurakousky 
Date:   2015-11-30T17:34:24Z

added embedded Kafka server and tests




---


[GitHub] nifi pull request: NIFI-1234 Correcting container functionality fo...

2015-12-02 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/136#issuecomment-161287833
  
I am on the side of @apiri. I can't imagine that was the desired behavior 
either, since it was simply inconsistent. If one **explicitly** requests an 
array, then it should return an array, no matter how many elements it 
contains. 


---


[GitHub] nifi pull request: NIFI-1243 added null check for 'currentReadClai...

2015-12-02 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/137

NIFI-1243 added null check for 'currentReadClaimStream'

. . .before it is closed
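The fix is essentially a guarded close. A self-contained sketch of the idea (the class and helper name are made up; in NiFi this guards the repository's 'currentReadClaimStream' field):

```java
import java.io.IOException;
import java.io.InputStream;

public class SafeCloseSketch {

    // Guarded close: the stream may legitimately be null when nothing has
    // been read yet, so closing must be a no-op in that case.
    // Returns true only if a non-null stream was closed successfully.
    public static boolean closeQuietly(InputStream stream) {
        if (stream == null) {
            return false; // nothing to close
        }
        try {
            stream.close();
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```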

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1243

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/137.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #137


commit 4fbfac09638ffc772072cf6a2508c4f149b89bfc
Author: Oleg Zhurakousky 
Date:   2015-12-02T21:13:46Z

NIFI-1243 added null check for 'currentReadClaimStream'
. . .before it is closed




---


[GitHub] nifi pull request: added embedded Kafka server and tests

2015-12-03 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/134#issuecomment-161645940
  
@trkurc @busbey @markap14 @joewitt 
Gentlemen

Was curious if one/all of you would be kind enough to put this through 
scrutiny ;)
Basically we already use embedded products for testing other components 
(e.g., ActiveMQ for JMS, Jetty for HTTP, and now Kafka for Kafka). I've 
included the implementation of the actual server and a few tests. The goal 
is to add more tests, but first I wanted to gather some thoughts/opinions. 

P.S. I've done some due diligence, and I am not the first to do this. 
Quite a number of projects have their own version of embedded Kafka 
(including the Kafka folks themselves). Too bad its embedded version is not 
productized the same way Jetty's and ActiveMQ's are.


---


[GitHub] nifi pull request: NIFI-1164 Fixed race condition and refactored

2015-12-15 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/141

NIFI-1164 Fixed race condition and refactored

Changed ControllerServiceNode by adding enable(..), disable(..) and 
isActive() operations. See the javadocs in both ControllerServiceNode and 
StandardControllerServiceNode for more details.

Refactored the service enable/disable logic in StandardProcessScheduler and 
StandardControllerServiceNode. Some notes:
- No need to reset the class loader, since it derives from the class loader 
of the service. In other words, any classes that aren't yet loaded and will 
be loaded within the scope of the already-loaded service will be loaded by 
that service's class loader.
- No need to check 'scheduleState.isScheduled()', since the logic now uses a 
CAS operation on state update and the service state change is atomic.
- Removed Thread.sleep(..) and the while(true) loop in favor of rescheduled 
retries, achieving better thread utilization since the thread that would 
normally block in Thread.sleep(..) is now reused.
- Added tests and validated that the race condition no longer occurs.

Added additional logic that allows disabling to be initiated while the 
service is in the ENABLING state. See the javadoc of 
StandardProcessScheduler.enable/disable for more details.
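The CAS-based state change described in the notes above can be sketched as follows. This is an illustrative model, not the actual StandardControllerServiceNode code; the state names and transition rules are simplified assumptions:

```java
import java.util.concurrent.atomic.AtomicReference;

public class ServiceStateSketch {

    enum State { DISABLED, ENABLING, ENABLED, DISABLING }

    private final AtomicReference<State> state =
            new AtomicReference<>(State.DISABLED);

    // Atomic transition via CAS: only one caller can win the
    // DISABLED -> ENABLING race; every other concurrent caller gets false.
    public boolean enable() {
        return state.compareAndSet(State.DISABLED, State.ENABLING);
    }

    // Disabling may be initiated from ENABLED *or* ENABLING
    // (mirroring the additional logic mentioned above).
    public boolean disable() {
        return state.compareAndSet(State.ENABLED, State.DISABLING)
                || state.compareAndSet(State.ENABLING, State.DISABLING);
    }

    public boolean isActive() {
        State s = state.get();
        return s == State.ENABLED || s == State.ENABLING;
    }
}
```

Because compareAndSet either performs the whole transition or does nothing, no Thread.sleep(..) polling loop is needed: a losing caller simply returns and can be rescheduled.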

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1164

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/141.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #141


commit 0727b2d31e1b552436c6c16df25dcbfcfcec5a3b
Author: Oleg Zhurakousky 
Date:   2015-12-15T18:21:00Z

NIFI-1164 Fixed race condition and refactored

Changed ControllerServiceNode by adding enable(..), disable(..) and 
isActive() operations. See the javadocs in both ControllerServiceNode and 
StandardControllerServiceNode for more details.

Refactored the service enable/disable logic in StandardProcessScheduler and 
StandardControllerServiceNode. Some notes:
- No need to reset the class loader, since it derives from the class loader 
of the service. In other words, any classes that aren't yet loaded and will 
be loaded within the scope of the already-loaded service will be loaded by 
that service's class loader.
- No need to check 'scheduleState.isScheduled()', since the logic now uses a 
CAS operation on state update and the service state change is atomic.
- Removed Thread.sleep(..) and the while(true) loop in favor of rescheduled 
retries, achieving better thread utilization since the thread that would 
normally block in Thread.sleep(..) is now reused.
- Added tests and validated that the race condition no longer occurs.

Added additional logic that allows disabling to be initiated while the 
service is in the ENABLING state. See the javadoc of 
StandardProcessScheduler.enable/disable for more details.




---


[GitHub] nifi pull request: NIFI-1289 added support for refreshing properti...

2015-12-16 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/142

NIFI-1289 added support for refreshing properties

- Added _getNewInstance()_ operation to NiFiProperties to ensure there is a 
way to refresh/reload NiFi properties
- Fixed javadocs

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1289

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/142.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #142


commit b05d619b828ea3a9f54db0ef87c17ddbfd6af017
Author: Oleg Zhurakousky 
Date:   2015-12-16T13:29:33Z

NIFI-1289 added support for refreshing properties
- Added _getNewInstance()_ operation to NiFiProperties to ensure there is a 
way to refresh/reload NiFi properties
- Fixed javadocs




---


[GitHub] nifi pull request: NIFI-1218 upgraded Kafka to 0.9.0.0 client API

2015-12-16 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/143

NIFI-1218 upgraded Kafka to 0.9.0.0 client API

Tested and validated that it is still compatible with 0.8.* Kafka brokers

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1218

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/143.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #143


commit 10cbc92873784a4a3871ae6937b8a43ac3e0abe8
Author: Oleg Zhurakousky 
Date:   2015-12-16T17:49:44Z

NIFI-1218 upgraded Kafka to 0.9.0.0 client API
Tested and validated that it is still compatible with 0.8.* Kafka brokers




---


[GitHub] nifi pull request: NIFI-1218 upgraded Kafka to 0.9.0.0 client API

2015-12-16 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/143#discussion_r47814729
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/test/java/org/apache/nifi/processors/kafka/TestPutKafka.java
 ---
@@ -474,6 +475,18 @@ public void setStopFailingAfter(final Integer 
stopFailingAfter) {
 @Override
 public void close() {
 }
+
+@Override
+public void close(long arg0, TimeUnit arg1) {
+// TODO Auto-generated method stub
+
+}
+
+@Override
+public void flush() {
+// TODO Auto-generated method stub
--- End diff --

As you can see, this is a MockProducer defined in test. Those two methods 
are only there to comply with the changes in Kafka interface and are not part 
of the test assertions.  


---


[GitHub] nifi pull request: NIFI-1218 upgraded Kafka to 0.9.0.0 client API

2015-12-16 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/143#discussion_r47814866
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/test/java/org/apache/nifi/processors/kafka/TestPutKafka.java
 ---
@@ -474,6 +475,18 @@ public void setStopFailingAfter(final Integer 
stopFailingAfter) {
 @Override
 public void close() {
 }
+
+@Override
+public void close(long arg0, TimeUnit arg1) {
+// TODO Auto-generated method stub
+
+}
+
+@Override
+public void flush() {
+// TODO Auto-generated method stub
--- End diff --

Having said that I'll add a comments for clarity


---


[GitHub] nifi pull request: NIFI-1218 upgraded Kafka to 0.9.0.0 client API

2015-12-16 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/143#issuecomment-165207075
  
Addressed


---


[GitHub] nifi pull request: NIFI-1218 upgraded Kafka to 0.9.0.0 client API

2015-12-16 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/143#issuecomment-165296041
  
Merged


---


[GitHub] nifi pull request: NIFI-1218 upgraded Kafka to 0.9.0.0 client API

2015-12-16 Thread olegz
Github user olegz closed the pull request at:

https://github.com/apache/nifi/pull/143


---


[GitHub] nifi pull request: NIFI-1300 - Penalize flowfiles when message sen...

2015-12-18 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/145#issuecomment-165876212
  
LGTM!


---


[GitHub] nifi pull request: NIFI-1333 fixed FlowController shutdown deadloc...

2015-12-23 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/148

NIFI-1333 fixed FlowController shutdown deadlock

The relevant test is available here: 
https://github.com/olegz/nifi/blob/int-test/nifi-integration-tests/src/test/java/org/apache/nifi/test/flowcontroll/FlowControllerTests.java#L50
 

Unfortunately this is one of those multi-module situations.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1333

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/148.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #148


commit 302dafbf59e1eab48bece7be8a3e98682c7fc14b
Author: Oleg Zhurakousky 
Date:   2015-12-23T20:41:54Z

NIFI-1333 fixed FlowController shutdown deadlock




---


[GitHub] nifi pull request: NIFI-1289 reverted changes to NiFiProperties

2015-12-24 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/150

NIFI-1289 reverted changes to NiFiProperties

in favor of a localized reflection call in the test to refresh properties.

Polished javadoc.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1289

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/150.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #150


commit 5618d42ab1f6f111d8d68911b4683e2af1abf481
Author: Oleg Zhurakousky 
Date:   2015-12-24T17:59:19Z

NIFI-1289 reverted changes to NiFiProperties
NIFI-1289 reverted changes to NiFiProperties
in favor of a localized reflection call in the test to refresh properties.

Polished javadoc.




---


[GitHub] nifi pull request: NIFI-1378 fixed JMS URI validation

2016-01-12 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/167

NIFI-1378 fixed JMS URI validation

simplified the JmsFactory check for SSL and scheme-less URIs
ensured URI validation is handled by ActiveMQConnectionFactory
ensured informative error messages are shown to the user

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1378

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/167.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #167


commit 4536db4d9ea7f8223c27c7095b60ec1009ec7c40
Author: Oleg Zhurakousky 
Date:   2016-01-12T15:31:04Z

NIFI-1378 fixed JMS URI validation
simplified the JmsFactory check for SSL and scheme-less URIs
ensured URI validation is handled by ActiveMQConnectionFactory
ensured informative error messages are shown to the user




---


[GitHub] nifi pull request: NIFI-1376 Provide access to logged messages fro...

2016-01-12 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/165#discussion_r49470011
  
--- Diff: 
nifi-mock/src/main/java/org/apache/nifi/util/MockProcessorLog.java ---
@@ -16,20 +16,57 @@
  */
 package org.apache.nifi.util;
 
+import java.util.List;
+
 import org.apache.nifi.logging.ProcessorLog;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 public class MockProcessorLog implements ProcessorLog {
 
-private final Logger logger;
+private final CapturingLogger logger;
 private final Object component;
 
 public MockProcessorLog(final String componentId, final Object 
component) {
-this.logger = LoggerFactory.getLogger(component.getClass());
+this.logger = new 
CapturingLogger(LoggerFactory.getLogger(component.getClass()));
--- End diff --

Also, there may be a simpler way to do what I believe you are trying to do. 
Just look here: 
https://github.com/apache/nifi/pull/167/files#diff-d4270631731d05831ae336a5e6e50ad2R54.
 NOTE: with this approach you get access to the entire message after it has 
been formatted, and can thus validate the values substituted into the {..} 
placeholders.
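A minimal, self-contained sketch of that capture-after-formatting idea (the class name is made up, and the slf4j-style {} substitution here is a simplified re-implementation, not slf4j's own formatter):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;

public class CapturingLoggerSketch {

    private final List<String> infoMessages = new ArrayList<>();

    // Substitutes {} placeholders the slf4j way *before* capturing, so a
    // test can assert on the fully rendered message text.
    public void info(String format, Object... args) {
        String msg = format;
        for (Object arg : args) {
            // quoteReplacement guards against '$' or '\' in the argument.
            msg = msg.replaceFirst("\\{\\}",
                    Matcher.quoteReplacement(String.valueOf(arg)));
        }
        infoMessages.add(msg);
    }

    public List<String> getInfoMessages() {
        return infoMessages;
    }
}
```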


---


[GitHub] nifi pull request: NIFI-1233 upgraded to Kafka 0.9.0.0

2016-01-12 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/168

NIFI-1233 upgraded to Kafka 0.9.0.0



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1233

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/168.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #168


commit 31e38095b9c0f9d117d013c032f3cb4bb0062534
Author: Oleg Zhurakousky 
Date:   2016-01-12T15:56:49Z

NIFI-1233 upgraded to Kafka 0.9.0.0




---


[GitHub] nifi pull request: NIFI-1317 removed duplicate 'name' instance var...

2016-01-12 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/169

NIFI-1317 removed duplicate 'name' instance variable



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1317

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/169.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #169


commit 7d17c2281106a851a6455af97a351eada3956792
Author: Oleg Zhurakousky 
Date:   2016-01-12T16:19:45Z

NIFI-1317 removed duplicate 'name' instance variable




---


[GitHub] nifi pull request: Nifi 1365

2016-01-12 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/163#discussion_r49476661
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/PGPUtil.java
 ---
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard.util;
+
+import org.apache.nifi.processors.standard.EncryptContent;
+import org.bouncycastle.bcpg.ArmoredOutputStream;
+import org.bouncycastle.openpgp.PGPCompressedData;
+import org.bouncycastle.openpgp.PGPCompressedDataGenerator;
+import org.bouncycastle.openpgp.PGPEncryptedData;
+import org.bouncycastle.openpgp.PGPEncryptedDataGenerator;
+import org.bouncycastle.openpgp.PGPException;
+import org.bouncycastle.openpgp.PGPLiteralData;
+import org.bouncycastle.openpgp.PGPLiteralDataGenerator;
+import org.bouncycastle.openpgp.operator.PGPKeyEncryptionMethodGenerator;
+import org.bouncycastle.openpgp.operator.jcajce.JcePGPDataEncryptorBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.security.SecureRandom;
+import java.util.Date;
+import java.util.zip.Deflater;
+
+/**
+ * This class contains static utility methods to assist with common PGP 
operations.
+ */
+public class PGPUtil {
+private static final Logger logger = 
LoggerFactory.getLogger(PGPUtil.class);
+
+public static final int BUFFER_SIZE = 65536;
+public static final int BLOCK_SIZE = 4096;
+
+public static void encrypt(InputStream in, OutputStream out, String 
algorithm, String provider, int cipher, String filename, 
PGPKeyEncryptionMethodGenerator encryptionMethodGenerator) throws IOException, 
PGPException {
+final boolean isArmored = 
EncryptContent.isPGPArmoredAlgorithm(algorithm);
--- End diff --

Given that this is a public method, I'd suggest adding a null check for 
"algorithm" to avoid an NPE in EncryptContent. . .


---


[GitHub] nifi pull request: NIFI-1233 upgraded to Kafka 0.9.0.0

2016-01-12 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/168#discussion_r49477360
  
--- Diff: nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/pom.xml 
---
@@ -37,12 +37,12 @@
 
org.apache.kafka
kafka-clients
-   0.8.2.2
+   0.9.0.0

 
 org.apache.kafka
-kafka_2.9.1
-0.8.2.2
+kafka_2.10
--- End diff --

Correct. 


---


[GitHub] nifi pull request: Nifi 1365

2016-01-12 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/163#discussion_r49477103
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/PGPUtil.java
 ---
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard.util;
+
+import org.apache.nifi.processors.standard.EncryptContent;
+import org.bouncycastle.bcpg.ArmoredOutputStream;
+import org.bouncycastle.openpgp.PGPCompressedData;
+import org.bouncycastle.openpgp.PGPCompressedDataGenerator;
+import org.bouncycastle.openpgp.PGPEncryptedData;
+import org.bouncycastle.openpgp.PGPEncryptedDataGenerator;
+import org.bouncycastle.openpgp.PGPException;
+import org.bouncycastle.openpgp.PGPLiteralData;
+import org.bouncycastle.openpgp.PGPLiteralDataGenerator;
+import org.bouncycastle.openpgp.operator.PGPKeyEncryptionMethodGenerator;
+import org.bouncycastle.openpgp.operator.jcajce.JcePGPDataEncryptorBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.security.SecureRandom;
+import java.util.Date;
+import java.util.zip.Deflater;
+
+/**
+ * This class contains static utility methods to assist with common PGP operations.
+ */
+public class PGPUtil {
+    private static final Logger logger = LoggerFactory.getLogger(PGPUtil.class);
+
+    public static final int BUFFER_SIZE = 65536;
+    public static final int BLOCK_SIZE = 4096;
+
+    public static void encrypt(InputStream in, OutputStream out, String algorithm, String provider, int cipher, String filename, PGPKeyEncryptionMethodGenerator encryptionMethodGenerator) throws IOException, PGPException {
+        final boolean isArmored = EncryptContent.isPGPArmoredAlgorithm(algorithm);
+        OutputStream output = out;
+        if (isArmored) {
+            output = new ArmoredOutputStream(out);
+        }
+
+        // Default value, do not allow null encryption
+        if (cipher == PGPEncryptedData.NULL) {
+            logger.warn("Null encryption not allowed; defaulting to AES-128");
+            cipher = PGPEncryptedData.AES_128;
+        }
+
+        try {
+            // TODO: Can probably hardcode provider to BC and remove one method parameter
+            PGPEncryptedDataGenerator encryptedDataGenerator = new PGPEncryptedDataGenerator(
+                    new JcePGPDataEncryptorBuilder(cipher).setWithIntegrityPacket(true).setSecureRandom(new SecureRandom()).setProvider(provider));
+
+            encryptedDataGenerator.addMethod(encryptionMethodGenerator);
+
+            try (OutputStream encryptedOut = encryptedDataGenerator.open(output, new byte[BUFFER_SIZE])) {
+                PGPCompressedDataGenerator compressedDataGenerator = new PGPCompressedDataGenerator(PGPCompressedData.ZIP, Deflater.BEST_SPEED);
+                try (OutputStream compressedOut = compressedDataGenerator.open(encryptedOut, new byte[BUFFER_SIZE])) {
+                    PGPLiteralDataGenerator literalDataGenerator = new PGPLiteralDataGenerator();
+                    try (OutputStream literalOut = literalDataGenerator.open(compressedOut, PGPLiteralData.BINARY, filename, new Date(), new byte[BUFFER_SIZE])) {
+
+                        final byte[] buffer = new byte[BLOCK_SIZE];
+                        int len;
+                        while ((len = in.read(buffer)) >= 0) {
--- End diff --

Something tells me that it has to be ```((len = in.read(buffer)) > -1)```. 
If I remember correctly, the stream can still be open and valid . . . it just has 
no data at the moment, while -1 simply means it's over.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.

[GitHub] nifi pull request: Nifi 1365

2016-01-12 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/163#discussion_r49479578
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/PGPUtil.java ---
@@ -0,0 +1,89 @@
+    [… diff context identical to the quote in the previous comment …]
+
+                        final byte[] buffer = new byte[BLOCK_SIZE];
+                        int len;
+                        while ((len = in.read(buffer)) >= 0) {
--- End diff --

Andy, my bad; I just realized that you have ```>= 0```, which is the same as 
```> -1```. I guess at this point it's a matter of personal preference, and mine 
is -1 (feels more explicit), but I'll leave it up to you ;)
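For reference, `InputStream.read(byte[])` returns the number of bytes read and -1 only at end of stream, so `>= 0` and `> -1` are indeed equivalent loop conditions. A minimal standalone sketch (not NiFi code; `ReadLoopDemo` and `copy` are illustrative names) using the same loop shape as the diff:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ReadLoopDemo {

    // Copies the stream using the same loop shape as the diff above.
    static byte[] copy(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        final byte[] buffer = new byte[4096];
        int len;
        while ((len = in.read(buffer)) >= 0) { // -1 signals end of stream
            out.write(buffer, 0, len);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
        byte[] copied = copy(new ByteArrayInputStream(data));
        if (copied.length != 5) {
            throw new AssertionError("expected 5 bytes");
        }
        // A drained stream keeps returning -1; it never "recovers",
        // which is why looping until -1 terminates correctly.
        InputStream drained = new ByteArrayInputStream(new byte[0]);
        if (drained.read(new byte[16]) != -1) {
            throw new AssertionError("expected -1 at end of stream");
        }
        System.out.println("ok");
    }
}
```

With `>= -1` as a condition the loop would never terminate, since -1 satisfies it on every subsequent read.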


---

[GitHub] nifi pull request: NIFI-1384 Changed DocumentWriter to accept Clas...

2016-01-12 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/171

NIFI-1384 Changed DocumentWriter to accept Class of ConfigurableComponent

NIFI-1384
removed dynamic elements writers routines
fixed tests
polishing

NIFI-1384 polishing

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1384

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/171.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #171


commit 8e53c6966c2d7d5e1b2346620fa751844815a80e
Author: Oleg Zhurakousky 
Date:   2016-01-12T19:50:10Z

NIFI-1384 Changed DocumentWriter to accept Class of ConfigurableComponent

NIFI-1384
removed dynamic elements writers routines
fixed tests
polishing

NIFI-1384 polishing




---
---


[GitHub] nifi pull request: NIFI-1233 upgraded to Kafka 0.9.0.0

2016-01-12 Thread olegz
Github user olegz closed the pull request at:

https://github.com/apache/nifi/pull/168


---
---


[GitHub] nifi pull request: NIFI-1317 removed duplicate 'name' instance var...

2016-01-12 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/169#issuecomment-171109933
  
Cool, thx @trkurc 


---
---


[GitHub] nifi pull request: Nifi 1324

2016-01-12 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/170#discussion_r49536701
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/PGPUtil.java ---
@@ -46,9 +46,8 @@
     public static final int BUFFER_SIZE = 65536;
     public static final int BLOCK_SIZE = 4096;
 
-    public static void encrypt(InputStream in, OutputStream out, String algorithm, String provider, int cipher, String filename,
-            PGPKeyEncryptionMethodGenerator encryptionMethodGenerator) throws IOException, PGPException {
-
+    public static void encrypt(InputStream in, OutputStream out, String algorithm, String provider, int cipher, String filename, PGPKeyEncryptionMethodGenerator encryptionMethodGenerator) throws
+            IOException, PGPException {
         final boolean isArmored = EncryptContent.isPGPArmoredAlgorithm(algorithm);
--- End diff --

Andy, same comment about the null check, given that this is a public API.


---
---


[GitHub] nifi pull request: NIFI-1317 removed duplicate 'name' instance var...

2016-01-12 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/169#discussion_r49540950
  
--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core-api/src/main/java/org/apache/nifi/controller/AbstractConfiguredComponent.java ---
@@ -57,6 +57,7 @@ public AbstractConfiguredComponent(final ConfigurableComponent component, final
         this.component = component;
         this.validationContextFactory = validationContextFactory;
         this.serviceProvider = serviceProvider;
+        this.name = new AtomicReference<>(component.getClass().getSimpleName());
--- End diff --

That's kind of how it was, and yes, this is me trying to "extract an answer 
by force" ;). I don't see a reason why it would have to be initialized to null 
while defaulting to the class name in a sub-class, but I do also see the "safety" 
point in your comment. I am good either way. Do you think we need @markap14 or 
anyone else to chime in with an opinion?


---
---


[GitHub] nifi pull request: NIFI-1317 removed duplicate 'name' instance var...

2016-01-13 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/169#issuecomment-171283707
  
@trkurc went ahead with your suggestion. It's definitely safer.


---
---


[GitHub] nifi pull request: NIFI-1333 fixed FlowController shutdown deadloc...

2016-01-14 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/148#issuecomment-171660747
  
@markap14 @joewitt Guys, please review. As agreed, I've put the read lock back in.


---
---


[GitHub] nifi pull request: NIFI-1406 Collecting the number of bytes sent r...

2016-01-18 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/176#issuecomment-172556905
  
There is also a compile warning in the catch clause preceding the place 
where you made your changes:
```
} catch (final IOException e) {
    flowFile = session.penalize(flowFile);
    session.transfer(flowFile, REL_FAILURE);
    logger.error("Unable to communicate with destination {} to determine whether or not it can accept "
            + "flowfiles/gzip; routing {} to failure due to {}", new Object[]{url, flowFile, e});
    context.yield();
    return;
}
```
The warning states _*Resource leak: 'throttler' is not closed at this 
location*_. Perhaps while you're at it you can fix that as well.
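For context, the usual fix for that class of warning is to scope the resource in try-with-resources (or a finally block), so early `return` paths in catch clauses still close it. A hedged sketch, not the actual processor code; `Throttler` here is a hypothetical stand-in for any `Closeable`:

```java
import java.io.Closeable;
import java.io.IOException;

public class ThrottlerScopeDemo {

    // Hypothetical stand-in for the resource the compiler flagged as leaked.
    static class Throttler implements Closeable {
        boolean closed;
        @Override
        public void close() { closed = true; }
    }

    // Exposed only so the demo can verify the resource was closed.
    static Throttler lastThrottler;

    // With try-with-resources, early returns (like the one in the
    // IOException catch clause quoted above) no longer leak: the
    // throttler is closed on every exit path.
    static void sendWithThrottle(boolean failEarly) throws IOException {
        try (Throttler throttler = new Throttler()) {
            lastThrottler = throttler;
            if (failEarly) {
                return; // analogous to the early return in the catch clause
            }
            // ... use throttler ...
        }
    }

    public static void main(String[] args) throws IOException {
        sendWithThrottle(true);
        if (!lastThrottler.closed) throw new AssertionError("leaked on early return");
        sendWithThrottle(false);
        if (!lastThrottler.closed) throw new AssertionError("leaked on normal exit");
        System.out.println("ok");
    }
}
```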


---
---


[GitHub] nifi pull request: NIFI-1384 Changed DocumentWriter to accept Clas...

2016-01-18 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/171#issuecomment-172564889
  
At the moment I agree that this should not be merged, as it was primarily done 
to start the conversation - 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61336327


---
---


[GitHub] nifi pull request: NIFI-1384 Changed DocumentWriter to accept Clas...

2016-01-18 Thread olegz
Github user olegz closed the pull request at:

https://github.com/apache/nifi/pull/171


---
---


[GitHub] nifi pull request: NIFI-1333 fixed FlowController shutdown deadloc...

2016-01-18 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/148#issuecomment-172583777
  
I am not surprised; that's why I was against having a lock in shutdown in 
the first place. It doesn't bring any value, since if you follow the logic of 
shutting down the executors, we shut them down cold anyway if they have not 
shut down in a timely fashion. 
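The "shut them down cold anyway" pattern referred to above is the standard graceful-then-forced executor shutdown. A minimal sketch under that assumption (illustrative code, not the FlowController implementation):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ColdShutdownDemo {

    // Ask the executor to stop, wait a bounded time, then force it.
    // Returns true if the executor terminated gracefully.
    static boolean shutdown(ExecutorService executor, long timeoutMillis) throws InterruptedException {
        executor.shutdown(); // stop accepting new tasks
        if (!executor.awaitTermination(timeoutMillis, TimeUnit.MILLISECONDS)) {
            executor.shutdownNow(); // "cold" shutdown: interrupt running tasks
            return false;
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> {
            try {
                Thread.sleep(10_000); // long-running task that will be cut short
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        boolean graceful = shutdown(executor, 100);
        if (graceful) throw new AssertionError("expected a forced shutdown");
        if (!executor.isShutdown()) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Since the forced path runs regardless of what the tasks are doing, holding an extra lock during shutdown adds deadlock risk without changing the outcome.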


---
---


[GitHub] nifi pull request: NIFI-1378 fixed JMS URI validation

2016-01-21 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/167#issuecomment-173604806
  
@trkurc I do see your point, so let me throw something else at you. The 
_URI_VALIDATOR_ simply validates based on the success of ```new URI(input)```, 
and we can certainly add the test back, but consider that that test was added 
by Joe to validate the previous fix, which we all seem to agree was inefficient 
given what we know now. So instead I've added 3 more tests for specific 
invalid conditions. 
That leaves us with one thing, and that is testing that 'any validator' 
works in the processor; as I said before, I think those tests already exist 
at the higher level.
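As a sketch of that validation style, assuming the validator really does just delegate to the `java.net.URI` constructor as described (`UriValidationDemo` and `isValidUri` are illustrative names, not the NiFi validator):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class UriValidationDemo {

    // Mirrors a validator that succeeds iff `new URI(input)` does.
    static boolean isValidUri(String input) {
        try {
            new URI(input);
            return true;
        } catch (URISyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        if (!isValidUri("tcp://localhost:61616")) throw new AssertionError();
        // Note how permissive this is: a bare word parses as a relative URI,
        // so constructor success alone cannot catch every bad JMS URI.
        if (!isValidUri("not-obviously-a-jms-uri")) throw new AssertionError();
        // But structurally broken input (space in the authority) is rejected.
        if (isValidUri("tcp://local host:61616")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The permissiveness shown above is why targeted tests for specific invalid conditions add value beyond a single happy-path test.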


---
---


[GitHub] nifi pull request: NIFI-1378 fixed JMS URI validation

2016-01-22 Thread olegz
Github user olegz commented on the pull request:

https://github.com/apache/nifi/pull/167#issuecomment-173939440
  
@trkurc I understand, and thank you for merging it. That said, there are 
improvement efforts under way which will hopefully address all of our 
concerns. 


---
---

