[jira] [Created] (NIFI-4212) Create RethinkDB Delete Processor

2017-07-20 Thread Mans Singh (JIRA)
Mans Singh created NIFI-4212:


 Summary: Create RethinkDB Delete Processor
 Key: NIFI-4212
 URL: https://issues.apache.org/jira/browse/NIFI-4212
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Affects Versions: 1.3.0
Reporter: Mans Singh
Assignee: Mans Singh
Priority: Minor
 Fix For: 1.4.0


Create processor to delete RethinkDB documents by id.
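
For illustration, a minimal sketch of the delete-by-id call such a processor
would wrap, using the RethinkDB Java driver. The connection settings, database,
table, and document id below are placeholders, not anything specified by this
ticket:

    import com.rethinkdb.RethinkDB;
    import com.rethinkdb.net.Connection;

    public class RethinkDbDeleteSketch {
        private static final RethinkDB r = RethinkDB.r;

        public static void main(String[] args) {
            // Placeholder connection settings; a processor would expose these as properties.
            Connection conn = r.connection().hostname("localhost").port(28015).connect();
            try {
                // Delete a single document by its primary key ("id" by default in RethinkDB).
                Object result = r.db("test").table("documents")
                        .get("some-document-id")
                        .delete()
                        .run(conn);
                System.out.println(result); // e.g. {deleted=1, skipped=0, ...}
            } finally {
                conn.close();
            }
        }
    }

Presumably the processor would take the id from the incoming FlowFile and route
on the driver's result, but those details belong to the eventual implementation.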



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp pull request #117: MINIFI-338: Convert processor threads to ...

2017-07-20 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/117#discussion_r128656760
  
--- Diff: libminifi/include/utils/ThreadPool.h ---
@@ -246,15 +349,67 @@ void ThreadPool<T>::startWorkers() {
 template<typename T>
 void ThreadPool<T>::run_tasks() {
   auto waitperiod = std::chrono::milliseconds(1) * 100;
+  uint64_t wait_decay_ = 0;
   while (running_.load()) {
 
+    // if we are spinning, perform a wait. If something changes in the worker such that the timeslice has changed, we will pick that information up. Note that it's possible
+    // we could starve for processing time if all workers are waiting. In the event that the number of workers far exceeds the number of threads, threads will spin and potentially
+    // wait until they arrive at a task that can be run. In this case we reset the wait_decay and attempt to pick up a new task. This means that threads that recently ran should
+    // be more likely to run. This is intentional.
+    if (wait_decay_ > 1000) {
+      std::this_thread::sleep_for(std::chrono::nanoseconds(wait_decay_));
+    }
     Worker<T> task;
     if (!worker_queue_.try_dequeue(task)) {
+
       std::unique_lock<std::mutex> lock(worker_queue_mutex_);
       tasks_available_.wait_for(lock, waitperiod);
       continue;
     }
-    task.run();
+    else {
+
+      std::unique_lock<std::mutex> lock(worker_queue_mutex_);
+      if (!task_status_[task.getIdentifier()]) {
+        continue;
+      }
+    }
+
+    bool wait_to_run = false;
+    if (task.getTimeSlice() > 1) {
+      auto now = std::chrono::system_clock::now().time_since_epoch();
+      auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(now);
+      if (task.getTimeSlice() > ms.count()) {
+        wait_to_run = true;
+      }
+    }
+    // if we have to wait we re-queue the worker.
+    if (wait_to_run) {
+      {
+        std::unique_lock<std::mutex> lock(worker_queue_mutex_);
+        if (!task_status_[task.getIdentifier()]) {
+          continue;
+        }
+      }
+      worker_queue_.enqueue(std::move(task));
--- End diff --

Need? Not really: we could keep running the same task, but the premise is to 
re-enqueue it so that, if another runnable task exists, it can be pulled off 
instead. If this one is dequeued again, we run it unless the timeslice has 
again said "come back later." Admittedly it's a waste of a queue operation, 
but we won't know if another task is available after the wait period. 
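
To make the trade-off concrete, here is a simplified Java analog of the
re-queue-when-not-yet-runnable idea (the real code is the C++ ThreadPool in the
diff above; the class and field names below are invented for the sketch):

    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.TimeUnit;

    class RequeueSketch {
        /** A task with an earliest-allowed run time, mirroring the worker's "time slice". */
        static final class TimedTask {
            final long notBeforeMillis;
            final Runnable body;
            TimedTask(long notBeforeMillis, Runnable body) {
                this.notBeforeMillis = notBeforeMillis;
                this.body = body;
            }
        }

        static void runTasks(ConcurrentLinkedQueue<TimedTask> queue) throws InterruptedException {
            while (!Thread.currentThread().isInterrupted()) {
                TimedTask task = queue.poll();
                if (task == null) {
                    TimeUnit.MILLISECONDS.sleep(100); // nothing to do; back off briefly
                    continue;
                }
                if (task.notBeforeMillis > System.currentTimeMillis()) {
                    // Not runnable yet: put it back so another, possibly runnable, task
                    // can be picked up instead of this thread parking on this one.
                    queue.offer(task);
                    TimeUnit.MILLISECONDS.sleep(1); // avoid a tight spin on an idle queue
                    continue;
                }
                task.body.run();
            }
        }
    }

The cost is an extra dequeue/enqueue cycle per not-yet-runnable task, but the
thread stays free to run whichever task becomes runnable first.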


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4087) IdentifyMimeType: Optionally exclude filename from criteria

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095562#comment-16095562
 ] 

ASF GitHub Bot commented on NIFI-4087:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2026
  
Looks like a good idea and a good change. I'll let travis-ci do its thing 
and then take a look a bit later. Thanks for contributing.


> IdentifyMimeType: Optionally exclude filename from criteria
> ---
>
> Key: NIFI-4087
> URL: https://issues.apache.org/jira/browse/NIFI-4087
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.3.0, 0.7.4
>Reporter: Brandon DeVries
>Priority: Minor
> Attachments: NIFI-4087-Add-option-to-exclude-filename-from-tika.patch
>
>
> In IdentifyMimeType\[1], the filename is always (when non-null) passed to tika 
> as a criterion in determining the mime type.  However, there are cases when 
> the filename may be known to be misleading (e.g. after decompression via 
> CompressContent with "Update Filename" set to false).  We should add a 
> boolean processor property (default true) indicating whether or not to pass 
> the filename to tika.
> \[1] 
> https://github.com/apache/nifi/blob/a9a9b67430b33944b5eefa17cb85b5dd42c8d1fc/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/IdentifyMimeType.java#L126-L129
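
As a rough sketch, the proposed property could look like the following with
NiFi's PropertyDescriptor builder; the property name, display name, and wording
are illustrative assumptions, not taken from the attached patch:

    import org.apache.nifi.components.PropertyDescriptor;

    // Illustrative only; the actual patch may name and describe the property differently.
    public static final PropertyDescriptor USE_FILENAME_IN_DETECTION = new PropertyDescriptor.Builder()
            .name("use-filename-in-detection")
            .displayName("Use Filename In Detection")
            .description("If true, the filename attribute is passed to Tika as an additional hint "
                    + "when identifying the MIME type. Set to false when the filename is known to "
                    + "be misleading, e.g. after CompressContent with \"Update Filename\" set to false.")
            .allowableValues("true", "false")
            .defaultValue("true")
            .required(true)
            .build();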



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2026: NIFI-4087 Fix to allow exclusion of filename from tika cri...

2017-07-20 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2026
  
Looks like a good idea and a good change. I'll let travis-ci do its thing 
and then take a look a bit later. Thanks for contributing.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4087) IdentifyMimeType: Optionally exclude filename from criteria

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095561#comment-16095561
 ] 

ASF GitHub Bot commented on NIFI-4087:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2026#discussion_r128655377
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/IdentifyMimeType.java
 ---
@@ -124,7 +147,7 @@ public void process(final InputStream stream) throws IOException {
 TikaInputStream tikaStream = TikaInputStream.get(in);
 Metadata metadata = new Metadata();
 // Add filename if it exists
--- End diff --

Probably makes sense to remove this comment now, given that adding the 
filename is optional.
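
For context, a hedged sketch of what the gated version of that spot could look
like; the helper and flag names are hypothetical, and Metadata.RESOURCE_NAME_KEY
is the Tika 1.x constant used for the filename hint:

    import org.apache.tika.metadata.Metadata;

    // Hypothetical helper mirroring the diff context; names are illustrative.
    static Metadata buildDetectionMetadata(final String filename, final boolean useFilenameInDetection) {
        final Metadata metadata = new Metadata();
        // Add the filename hint only when the new property allows it and a filename exists,
        // which is why the old "Add filename if it exists" comment no longer tells the whole story.
        if (useFilenameInDetection && filename != null) {
            metadata.set(Metadata.RESOURCE_NAME_KEY, filename);
        }
        return metadata;
    }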


> IdentifyMimeType: Optionally exclude filename from criteria
> ---
>
> Key: NIFI-4087
> URL: https://issues.apache.org/jira/browse/NIFI-4087
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.3.0, 0.7.4
>Reporter: Brandon DeVries
>Priority: Minor
> Attachments: NIFI-4087-Add-option-to-exclude-filename-from-tika.patch
>
>
> In IdentifyMimeType\[1], the filename is always (when non-null) passed to tika 
> as a criterion in determining the mime type.  However, there are cases when 
> the filename may be known to be misleading (e.g. after decompression via 
> CompressContent with "Update Filename" set to false).  We should add a 
> boolean processor property (default true) indicating whether or not to pass 
> the filename to tika.
> \[1] 
> https://github.com/apache/nifi/blob/a9a9b67430b33944b5eefa17cb85b5dd42c8d1fc/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/IdentifyMimeType.java#L126-L129



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2026: NIFI-4087 Fix to allow exclusion of filename from t...

2017-07-20 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2026#discussion_r128655377
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/IdentifyMimeType.java
 ---
@@ -124,7 +147,7 @@ public void process(final InputStream stream) throws IOException {
 TikaInputStream tikaStream = TikaInputStream.get(in);
 Metadata metadata = new Metadata();
 // Add filename if it exists
--- End diff --

Probably makes sense to remove this comment now, given that adding the 
filename is optional.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4087) IdentifyMimeType: Optionally exclude filename from criteria

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095549#comment-16095549
 ] 

ASF GitHub Bot commented on NIFI-4087:
--

GitHub user Leah-Anderson opened a pull request:

https://github.com/apache/nifi/pull/2026

NIFI-4087 Fix to allow exclusion of filename from tika criteria.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [X] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [X] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Leah-Anderson/nifi NIFI-4087

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2026.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2026


commit 8666e0cecd0fb81b835e19e32818318a88116cd2
Author: Leah 
Date:   2017-07-20T23:20:54Z

NIFI-4087 Fix to allow exclusion of filename from tika criteria.




> IdentifyMimeType: Optionally exclude filename from criteria
> ---
>
> Key: NIFI-4087
> URL: https://issues.apache.org/jira/browse/NIFI-4087
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.3.0, 0.7.4
>Reporter: Brandon DeVries
>Priority: Minor
> Attachments: NIFI-4087-Add-option-to-exclude-filename-from-tika.patch
>
>
> In IdentifyMimeType\[1], the filename is always (when non-null) passed to tika 
> as a criterion in determining the mime type.  However, there are cases when 
> the filename may be known to be misleading (e.g. after decompression via 
> CompressContent with "Update Filename" set to false).  We should add a 
> boolean processor property (default true) indicating whether or not to pass 
> the filename to tika.
> \[1] 
> https://github.com/apache/nifi/blob/a9a9b67430b33944b5eefa17cb85b5dd42c8d1fc/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/IdentifyMimeType.java#L126-L129



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2026: NIFI-4087 Fix to allow exclusion of filename from t...

2017-07-20 Thread Leah-Anderson
GitHub user Leah-Anderson opened a pull request:

https://github.com/apache/nifi/pull/2026

NIFI-4087 Fix to allow exclusion of filename from tika criteria.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [X] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [X] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Leah-Anderson/nifi NIFI-4087

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2026.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2026


commit 8666e0cecd0fb81b835e19e32818318a88116cd2
Author: Leah 
Date:   2017-07-20T23:20:54Z

NIFI-4087 Fix to allow exclusion of filename from tika criteria.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3736) NiFi not honoring the "nifi.content.claim.max.appendable.size" and "nifi.content.claim.max.flow.files" properties

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095465#comment-16095465
 ] 

ASF GitHub Bot commented on NIFI-3736:
--

Github user m-hogue commented on the issue:

https://github.com/apache/nifi/pull/2010
  
@markap14 @mosermw : I've added a static 100MB cap for the max appendable 
claim size. Please let me know if you'd like any more changes.
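
A minimal sketch of how the configured value can be combined with such a static
cap using NiFi's DataUnit parsing; the constant and method names below are
assumptions, not the code actually merged:

    import org.apache.nifi.processor.DataUnit;

    // Hypothetical cap constant; the PR may express the 100 MB limit differently.
    private static final long APPENDABLE_CLAIM_LENGTH_CAP_BYTES = 100L * 1024 * 1024;

    private static int resolveMaxAppendableClaimLength(final String configuredValue) {
        // Parse nifi.content.claim.max.appendable.size (e.g. "10 MB") into bytes...
        final long requested = DataUnit.parseDataSize(configuredValue, DataUnit.B).longValue();
        // ...then honor it only up to the static 100 MB cap discussed here.
        return (int) Math.min(requested, APPENDABLE_CLAIM_LENGTH_CAP_BYTES);
    }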


> NiFi not honoring the "nifi.content.claim.max.appendable.size" and 
> "nifi.content.claim.max.flow.files" properties
> -
>
> Key: NIFI-3736
> URL: https://issues.apache.org/jira/browse/NIFI-3736
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Michael Hogue
>
> The nifi.properties file has two properties for controlling how many 
> FlowFiles to jam into one Content Claim. Unfortunately, it looks like this is 
> no longer honored in FileSystemRepository.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2010: NIFI-3736: change to honor nifi.content.claim.max.appendab...

2017-07-20 Thread m-hogue
Github user m-hogue commented on the issue:

https://github.com/apache/nifi/pull/2010
  
@markap14 @mosermw : I've added a static 100MB cap for the max appendable 
claim size. Please let me know if you'd like any more changes.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3736) NiFi not honoring the "nifi.content.claim.max.appendable.size" and "nifi.content.claim.max.flow.files" properties

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095462#comment-16095462
 ] 

ASF GitHub Bot commented on NIFI-3736:
--

Github user m-hogue commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2010#discussion_r128642452
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
 ---
@@ -149,6 +154,10 @@ public FileSystemRepository(final NiFiProperties nifiProperties) throws IOExcept
         for (final Path path : fileRespositoryPaths.values()) {
             Files.createDirectories(path);
         }
+        this.maxFlowFilesPerClaim = nifiProperties.getMaxFlowFilesPerClaim();
+        this.writableClaimQueue  = new LinkedBlockingQueue<>(maxFlowFilesPerClaim);
+        final String maxAppendableClaimSize = nifiProperties.getMaxAppendableClaimSize();
+        this.maxAppendableClaimLength = DataUnit.parseDataSize(maxAppendableClaimSize, DataUnit.B).intValue();
--- End diff --

Cool - I'll add a static 100MB cap.


> NiFi not honoring the "nifi.content.claim.max.appendable.size" and 
> "nifi.content.claim.max.flow.files" properties
> -
>
> Key: NIFI-3736
> URL: https://issues.apache.org/jira/browse/NIFI-3736
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Michael Hogue
>
> The nifi.properties file has two properties for controlling how many 
> FlowFiles to jam into one Content Claim. Unfortunately, it looks like this is 
> no longer honored in FileSystemRepository.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2010: NIFI-3736: change to honor nifi.content.claim.max.a...

2017-07-20 Thread m-hogue
Github user m-hogue commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2010#discussion_r128642452
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
 ---
@@ -149,6 +154,10 @@ public FileSystemRepository(final NiFiProperties nifiProperties) throws IOExcept
         for (final Path path : fileRespositoryPaths.values()) {
             Files.createDirectories(path);
         }
+        this.maxFlowFilesPerClaim = nifiProperties.getMaxFlowFilesPerClaim();
+        this.writableClaimQueue  = new LinkedBlockingQueue<>(maxFlowFilesPerClaim);
+        final String maxAppendableClaimSize = nifiProperties.getMaxAppendableClaimSize();
+        this.maxAppendableClaimLength = DataUnit.parseDataSize(maxAppendableClaimSize, DataUnit.B).intValue();
--- End diff --

Cool - I'll add a static 100MB cap.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp pull request #117: MINIFI-338: Convert processor threads to ...

2017-07-20 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/117#discussion_r128638003
  
--- Diff: libminifi/include/utils/ThreadPool.h ---
@@ -246,15 +349,67 @@ void ThreadPool<T>::startWorkers() {
 template<typename T>
 void ThreadPool<T>::run_tasks() {
   auto waitperiod = std::chrono::milliseconds(1) * 100;
+  uint64_t wait_decay_ = 0;
   while (running_.load()) {
 
+    // if we are spinning, perform a wait. If something changes in the worker such that the timeslice has changed, we will pick that information up. Note that it's possible
+    // we could starve for processing time if all workers are waiting. In the event that the number of workers far exceeds the number of threads, threads will spin and potentially
+    // wait until they arrive at a task that can be run. In this case we reset the wait_decay and attempt to pick up a new task. This means that threads that recently ran should
+    // be more likely to run. This is intentional.
+    if (wait_decay_ > 1000) {
+      std::this_thread::sleep_for(std::chrono::nanoseconds(wait_decay_));
--- End diff --

@benqiu2016 thanks. I thought I took care of that.Thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #2010: NIFI-3736: change to honor nifi.content.claim.max.a...

2017-07-20 Thread mosermw
Github user mosermw commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2010#discussion_r128633148
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
 ---
@@ -149,6 +154,10 @@ public FileSystemRepository(final NiFiProperties nifiProperties) throws IOExcept
         for (final Path path : fileRespositoryPaths.values()) {
             Files.createDirectories(path);
         }
+        this.maxFlowFilesPerClaim = nifiProperties.getMaxFlowFilesPerClaim();
+        this.writableClaimQueue  = new LinkedBlockingQueue<>(maxFlowFilesPerClaim);
+        final String maxAppendableClaimSize = nifiProperties.getMaxAppendableClaimSize();
+        this.maxAppendableClaimLength = DataUnit.parseDataSize(maxAppendableClaimSize, DataUnit.B).intValue();
--- End diff --

In my opinion, there's nothing wrong with a static cap, as long as it's 
reasonably large, and the 100 MB choice qualifies.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (NIFI-55) ListenHTTP should log who the sender of a FlowFile bundle is if the HOLD expires

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-55?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-55.

   Resolution: Fixed
Fix Version/s: 1.4.0

> ListenHTTP should log who the sender of a FlowFile bundle is if the HOLD 
> expires
> 
>
> Key: NIFI-55
> URL: https://issues.apache.org/jira/browse/NIFI-55
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Andre F de Miranda
>Priority: Minor
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3736) NiFi not honoring the "nifi.content.claim.max.appendable.size" and "nifi.content.claim.max.flow.files" properties

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095388#comment-16095388
 ] 

ASF GitHub Bot commented on NIFI-3736:
--

Github user mosermw commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2010#discussion_r128633148
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
 ---
@@ -149,6 +154,10 @@ public FileSystemRepository(final NiFiProperties nifiProperties) throws IOExcept
         for (final Path path : fileRespositoryPaths.values()) {
             Files.createDirectories(path);
         }
+        this.maxFlowFilesPerClaim = nifiProperties.getMaxFlowFilesPerClaim();
+        this.writableClaimQueue  = new LinkedBlockingQueue<>(maxFlowFilesPerClaim);
+        final String maxAppendableClaimSize = nifiProperties.getMaxAppendableClaimSize();
+        this.maxAppendableClaimLength = DataUnit.parseDataSize(maxAppendableClaimSize, DataUnit.B).intValue();
--- End diff --

In my opinion, there's nothing wrong with a static cap, as long as it's 
reasonably large, and the 100 MB choice qualifies.


> NiFi not honoring the "nifi.content.claim.max.appendable.size" and 
> "nifi.content.claim.max.flow.files" properties
> -
>
> Key: NIFI-3736
> URL: https://issues.apache.org/jira/browse/NIFI-3736
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Michael Hogue
>
> The nifi.properties file has two properties for controlling how many 
> FlowFiles to jam into one Content Claim. Unfortunately, it looks like this is 
> no longer honored in FileSystemRepository.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-55) ListenHTTP should log who the sender of a FlowFile bundle is if the HOLD expires

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-55?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095385#comment-16095385
 ] 

ASF GitHub Bot commented on NIFI-55:


Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1623


> ListenHTTP should log who the sender of a FlowFile bundle is if the HOLD 
> expires
> 
>
> Key: NIFI-55
> URL: https://issues.apache.org/jira/browse/NIFI-55
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Andre F de Miranda
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-55) ListenHTTP should log who the sender of a FlowFile bundle is if the HOLD expires

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-55?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095384#comment-16095384
 ] 

ASF GitHub Bot commented on NIFI-55:


Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1623
  
+1, merging to master, thanks @trixpan 


> ListenHTTP should log who the sender of a FlowFile bundle is if the HOLD 
> expires
> 
>
> Key: NIFI-55
> URL: https://issues.apache.org/jira/browse/NIFI-55
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Andre F de Miranda
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-55) ListenHTTP should log who the sender of a FlowFile bundle is if the HOLD expires

2017-07-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-55?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095383#comment-16095383
 ] 

ASF subversion and git services commented on NIFI-55:
-

Commit b0be99036dd261166fd5330710dd0bac193a40ae in nifi's branch 
refs/heads/master from [~trixpan]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=b0be990 ]

NIFI-55 - Ensures ListenHTTP logs the source of an expired hold

Signed-off-by: Pierre Villard 

This closes #1623.


> ListenHTTP should log who the sender of a FlowFile bundle is if the HOLD 
> expires
> 
>
> Key: NIFI-55
> URL: https://issues.apache.org/jira/browse/NIFI-55
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Andre F de Miranda
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1623: NIFI-55 - Log IP of clients generating expired hold...

2017-07-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1623


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #1623: NIFI-55 - Log IP of clients generating expired holds and o...

2017-07-20 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1623
  
+1, merging to master, thanks @trixpan 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (NIFI-708) Generated docs for MonitorActivity look bad

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-708.
-
Resolution: Duplicate

Closing as duplicate of NIFI-917.

> Generated docs for MonitorActivity look bad
> ---
>
> Key: NIFI-708
> URL: https://issues.apache.org/jira/browse/NIFI-708
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Affects Versions: 0.1.0
>Reporter: Dan Bress
>Assignee: Andre F de Miranda
>Priority: Minor
>
> If you look at the [documentation for 
> MonitorActivity|https://nifi.incubator.apache.org/docs.html] it just looks 
> bad.
> In the Properties section, the following things don't look good
> # The Default Value column is too wide
> # The Description column is too small
> # There is a [horizontal 
> scrollbar|http://www.howtonotmakemoneyonline.com/2009/01/why-horizontal-scrolling-is-bad.html]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-917) Long default value "messes up" documentation

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-917:

Component/s: Documentation & Website

> Long default value "messes up" documentation
> 
>
> Key: NIFI-917
> URL: https://issues.apache.org/jira/browse/NIFI-917
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Brandon DeVries
>Assignee: Pierre Villard
>Priority: Minor
>
> Processor default values are included in the automatically generated 
> documentation.  However, "long" values aren't wrapped, so they force the 
> table to become very wide (and thus less readable).  Maybe wrap the value, or 
> make the cell side-scrollable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (NIFI-917) Long default value "messes up" documentation

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reassigned NIFI-917:
---

Assignee: Pierre Villard  (was: Andre F de Miranda)

> Long default value "messes up" documentation
> 
>
> Key: NIFI-917
> URL: https://issues.apache.org/jira/browse/NIFI-917
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Brandon DeVries
>Assignee: Pierre Villard
>Priority: Minor
>
> Processor default values are included in the automatically generated 
> documentation.  However, "long" values aren't wrapped, so they force the 
> table to become very wide (and thus less readable).  Maybe wrap the value, or 
> make the cell side-scrollable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-917) Long default value "messes up" documentation

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-917:

Status: Patch Available  (was: Open)

> Long default value "messes up" documentation
> 
>
> Key: NIFI-917
> URL: https://issues.apache.org/jira/browse/NIFI-917
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Brandon DeVries
>Assignee: Pierre Villard
>Priority: Minor
>
> Processor default values are included in the automatically generated 
> documentation.  However, "long" values aren't wrapped, so they force the 
> table to become very wide (and thus less readable).  Maybe wrap the value, or 
> make the cell side-scrollable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-917) Long default value "messes up" documentation

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095363#comment-16095363
 ] 

ASF GitHub Bot commented on NIFI-917:
-

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/2025

NIFI-917 - improve rendering of component documentation

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-917

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2025.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2025


commit 29a98d04b4d5dc44cece23327bb99d12d0c810cc
Author: Pierre Villard 
Date:   2017-07-20T20:50:13Z

NIFI-917 - improve rendering of component documentation




> Long default value "messes up" documentation
> 
>
> Key: NIFI-917
> URL: https://issues.apache.org/jira/browse/NIFI-917
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Brandon DeVries
>Assignee: Andre F de Miranda
>Priority: Minor
>
> Processor default values are included in the automatically generated 
> documentation.  However, "long" values aren't wrapped, so they force the 
> table to become very wide (and thus less readable).  Maybe wrap the value, or 
> make the cell side-scrollable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2025: NIFI-917 - improve rendering of component documenta...

2017-07-20 Thread pvillard31
GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/2025

NIFI-917 - improve rendering of component documentation

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-917

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2025.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2025


commit 29a98d04b4d5dc44cece23327bb99d12d0c810cc
Author: Pierre Villard 
Date:   2017-07-20T20:50:13Z

NIFI-917 - improve rendering of component documentation




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-106) Processor Counters should be included in the Status Reports

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095340#comment-16095340
 ] 

ASF GitHub Bot commented on NIFI-106:
-

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1872
  
@mcgilman OK, I pushed a new commit. I also ran into another bug that 
occurred when a counter is present in some of the 'aggregate snapshot' fields 
but not all. Can you give it a review when you have a chance? Thanks!


> Processor Counters should be included in the Status Reports
> ---
>
> Key: NIFI-106
> URL: https://issues.apache.org/jira/browse/NIFI-106
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Mark Payne
>Priority: Minor
>
> This would allow a Processor's Status History to show counters that were 
> maintained over time periods instead of having only a single count since 
> system start.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1872: NIFI-106: Expose processors' counters in Stats History

2017-07-20 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1872
  
@mcgilman OK, I pushed a new commit. I also ran into another bug that 
occurred when a counter is present in some of the 'aggregate snapshot' fields 
but not all. Can you give it a review when you have a chance? Thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3736) NiFi not honoring the "nifi.content.claim.max.appendable.size" and "nifi.content.claim.max.flow.files" properties

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095312#comment-16095312
 ] 

ASF GitHub Bot commented on NIFI-3736:
--

Github user m-hogue commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2010#discussion_r128622791
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
 ---
@@ -149,6 +154,10 @@ public FileSystemRepository(final NiFiProperties nifiProperties) throws IOExcept
         for (final Path path : fileRespositoryPaths.values()) {
             Files.createDirectories(path);
         }
+        this.maxFlowFilesPerClaim = nifiProperties.getMaxFlowFilesPerClaim();
+        this.writableClaimQueue  = new LinkedBlockingQueue<>(maxFlowFilesPerClaim);
+        final String maxAppendableClaimSize = nifiProperties.getMaxAppendableClaimSize();
+        this.maxAppendableClaimLength = DataUnit.parseDataSize(maxAppendableClaimSize, DataUnit.B).intValue();
--- End diff --

Yep - that makes total sense. Should the max be configurable or static? I 
can do it either way.


> NiFi not honoring the "nifi.content.claim.max.appendable.size" and 
> "nifi.content.claim.max.flow.files" properties
> -
>
> Key: NIFI-3736
> URL: https://issues.apache.org/jira/browse/NIFI-3736
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Michael Hogue
>
> The nifi.properties file has two properties for controlling how many 
> FlowFiles to jam into one Content Claim. Unfortunately, it looks like this is 
> no longer honored in FileSystemRepository.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2010: NIFI-3736: change to honor nifi.content.claim.max.a...

2017-07-20 Thread m-hogue
Github user m-hogue commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2010#discussion_r128622791
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
 ---
@@ -149,6 +154,10 @@ public FileSystemRepository(final NiFiProperties nifiProperties) throws IOExcept
         for (final Path path : fileRespositoryPaths.values()) {
             Files.createDirectories(path);
         }
+        this.maxFlowFilesPerClaim = nifiProperties.getMaxFlowFilesPerClaim();
+        this.writableClaimQueue  = new LinkedBlockingQueue<>(maxFlowFilesPerClaim);
+        final String maxAppendableClaimSize = nifiProperties.getMaxAppendableClaimSize();
+        this.maxAppendableClaimLength = DataUnit.parseDataSize(maxAppendableClaimSize, DataUnit.B).intValue();
--- End diff --

Yep - that makes total sense. Should the max be configurable or static? I 
can do it either way.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp pull request #117: MINIFI-338: Convert processor threads to ...

2017-07-20 Thread benqiu2016
Github user benqiu2016 commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/117#discussion_r128621371
  
--- Diff: libminifi/include/utils/ThreadPool.h ---
@@ -246,15 +349,67 @@ void ThreadPool<T>::startWorkers() {
 template<typename T>
 void ThreadPool<T>::run_tasks() {
   auto waitperiod = std::chrono::milliseconds(1) * 100;
+  uint64_t wait_decay_ = 0;
   while (running_.load()) {
 
+    // if we are spinning, perform a wait. If something changes in the worker such that the timeslice has changed, we will pick that information up. Note that it's possible
+    // we could starve for processing time if all workers are waiting. In the event that the number of workers far exceeds the number of threads, threads will spin and potentially
+    // wait until they arrive at a task that can be run. In this case we reset the wait_decay and attempt to pick up a new task. This means that threads that recently ran should
+    // be more likely to run. This is intentional.
+    if (wait_decay_ > 1000) {
+      std::this_thread::sleep_for(std::chrono::nanoseconds(wait_decay_));
--- End diff --

We increase wait_decay if there is no task to run, so wait_decay may 
become a very large number if we have no task to run for a long time.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp pull request #117: MINIFI-338: Convert processor threads to ...

2017-07-20 Thread benqiu2016
Github user benqiu2016 commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/117#discussion_r128620813
  
--- Diff: libminifi/include/utils/ThreadPool.h ---
@@ -246,15 +349,67 @@ void ThreadPool<T>::startWorkers() {
 template<typename T>
 void ThreadPool<T>::run_tasks() {
   auto waitperiod = std::chrono::milliseconds(1) * 100;
+  uint64_t wait_decay_ = 0;
   while (running_.load()) {
 
+    // if we are spinning, perform a wait. If something changes in the worker such that the timeslice has changed, we will pick that information up. Note that it's possible
+    // we could starve for processing time if all workers are waiting. In the event that the number of workers far exceeds the number of threads, threads will spin and potentially
+    // wait until they arrive at a task that can be run. In this case we reset the wait_decay and attempt to pick up a new task. This means that threads that recently ran should
+    // be more likely to run. This is intentional.
+    if (wait_decay_ > 1000) {
+      std::this_thread::sleep_for(std::chrono::nanoseconds(wait_decay_));
+    }
     Worker<T> task;
     if (!worker_queue_.try_dequeue(task)) {
+
       std::unique_lock<std::mutex> lock(worker_queue_mutex_);
       tasks_available_.wait_for(lock, waitperiod);
       continue;
     }
-    task.run();
+    else {
+
+      std::unique_lock<std::mutex> lock(worker_queue_mutex_);
+      if (!task_status_[task.getIdentifier()]) {
+        continue;
+      }
+    }
+
+    bool wait_to_run = false;
+    if (task.getTimeSlice() > 1) {
+      auto now = std::chrono::system_clock::now().time_since_epoch();
+      auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(now);
+      if (task.getTimeSlice() > ms.count()) {
+        wait_to_run = true;
+      }
+    }
+    // if we have to wait we re-queue the worker.
+    if (wait_to_run) {
+      {
+        std::unique_lock<std::mutex> lock(worker_queue_mutex_);
+        if (!task_status_[task.getIdentifier()]) {
+          continue;
+        }
+      }
+      worker_queue_.enqueue(std::move(task));
--- End diff --

do we need to enqueue to head?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-106) Processor Counters should be included in the Status Reports

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095240#comment-16095240
 ] 

ASF GitHub Bot commented on NIFI-106:
-

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r128610547
  
--- Diff: 
nifi-framework-api/src/main/java/org/apache/nifi/controller/status/history/StatusHistory.java
 ---
@@ -41,4 +41,9 @@
  * @return List of snapshots for a given component
  */
 List<StatusSnapshot> getStatusSnapshots();
+
+/**
+ * @return true if counter values are included in the Status History
+ */
+boolean isIncludeCounters();
--- End diff --

Noted.


> Processor Counters should be included in the Status Reports
> ---
>
> Key: NIFI-106
> URL: https://issues.apache.org/jira/browse/NIFI-106
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Mark Payne
>Priority: Minor
>
> This would allow a Processor's Status History to show counters that were 
> maintained over time periods instead of having only a single count since 
> system start.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-106) Processor Counters should be included in the Status Reports

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095241#comment-16095241
 ] 

ASF GitHub Bot commented on NIFI-106:
-

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r128610594
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/controller/status/ProcessorStatus.java 
---
@@ -234,6 +245,7 @@ public ProcessorStatus clone() {
 clonedObj.flowFilesRemoved = flowFilesRemoved;
 clonedObj.runStatus = runStatus;
 clonedObj.type = type;
+clonedObj.counters = new HashMap<>(counters);
--- End diff --

Good catch. Will add a new commit shortly.
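
The line under review is a defensive copy; a minimal illustration of why it
matters, using a simplified stand-in for ProcessorStatus:

    import java.util.HashMap;
    import java.util.Map;

    // Simplified stand-in for ProcessorStatus, just to show the defensive-copy point.
    class StatusSketch {
        Map<String, Long> counters = new HashMap<>();

        StatusSketch copy() {
            final StatusSketch clone = new StatusSketch();
            // Without this copy both objects would share one mutable map, so later
            // counter updates on the original would silently show up in the clone too.
            clone.counters = new HashMap<>(counters);
            return clone;
        }
    }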


> Processor Counters should be included in the Status Reports
> ---
>
> Key: NIFI-106
> URL: https://issues.apache.org/jira/browse/NIFI-106
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Mark Payne
>Priority: Minor
>
> This would allow a Processor's Status History to show counters that were 
> maintained over time periods instead of having only a single count since 
> system start.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1872: NIFI-106: Expose processors' counters in Stats Hist...

2017-07-20 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r128610594
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/controller/status/ProcessorStatus.java 
---
@@ -234,6 +245,7 @@ public ProcessorStatus clone() {
 clonedObj.flowFilesRemoved = flowFilesRemoved;
 clonedObj.runStatus = runStatus;
 clonedObj.type = type;
+clonedObj.counters = new HashMap<>(counters);
--- End diff --

Good catch. Will add a new commit shortly.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1872: NIFI-106: Expose processors' counters in Stats Hist...

2017-07-20 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r128610528
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/FlowController.java
 ---
@@ -2867,6 +2867,7 @@ private ProcessorStatus getProcessorStatus(final RepositoryStatusReport report,
 status.setFlowFilesSent(entry.getFlowFilesSent());
 status.setBytesSent(entry.getBytesSent());
 status.setFlowFilesRemoved(entry.getFlowFilesRemoved());
+status.setCounters(entry.getCounters());
--- End diff --

Agreed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1872: NIFI-106: Expose processors' counters in Stats Hist...

2017-07-20 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r128610547
  
--- Diff: 
nifi-framework-api/src/main/java/org/apache/nifi/controller/status/history/StatusHistory.java
 ---
@@ -41,4 +41,9 @@
  * @return List of snapshots for a given component
  */
 List<StatusSnapshot> getStatusSnapshots();
+
+/**
+ * @return true if counter values are included in the Status History
+ */
+boolean isIncludeCounters();
--- End diff --

Noted.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-106) Processor Counters should be included in the Status Reports

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095239#comment-16095239
 ] 

ASF GitHub Bot commented on NIFI-106:
-

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r128610528
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/FlowController.java
 ---
@@ -2867,6 +2867,7 @@ private ProcessorStatus getProcessorStatus(final RepositoryStatusReport report,
 status.setFlowFilesSent(entry.getFlowFilesSent());
 status.setBytesSent(entry.getBytesSent());
 status.setFlowFilesRemoved(entry.getFlowFilesRemoved());
+status.setCounters(entry.getCounters());
--- End diff --

Agreed.


> Processor Counters should be included in the Status Reports
> ---
>
> Key: NIFI-106
> URL: https://issues.apache.org/jira/browse/NIFI-106
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Mark Payne
>Priority: Minor
>
> This would allow a Processor's Status History to show counters that were 
> maintained over time periods instead of having only a single count since 
> system start.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-106) Processor Counters should be included in the Status Reports

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095238#comment-16095238
 ] 

ASF GitHub Bot commented on NIFI-106:
-

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r128610464
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/StatusHistoryEndpointMerger.java
 ---
@@ -109,13 +119,49 @@ public NodeResponse merge(URI uri, String method, Set successfulRe
 noReadPermissionsComponentDetails = nodeStatus.getComponentDetails();
 }
 
+if (!nodeStatus.isIncludeCounters()) {
--- End diff --

Good call. I didn't realize that was part of the entity.


> Processor Counters should be included in the Status Reports
> ---
>
> Key: NIFI-106
> URL: https://issues.apache.org/jira/browse/NIFI-106
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Mark Payne
>Priority: Minor
>
> This would allow a Processor's Status History to show counters that were 
> maintained over time periods instead of having only a single count since 
> system start.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1872: NIFI-106: Expose processors' counters in Stats Hist...

2017-07-20 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r128610464
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/StatusHistoryEndpointMerger.java
 ---
@@ -109,13 +119,49 @@ public NodeResponse merge(URI uri, String method, 
Set successfulRe
 noReadPermissionsComponentDetails = 
nodeStatus.getComponentDetails();
 }
 
+if (!nodeStatus.isIncludeCounters()) {
--- End diff --

Good call. I didn't realize that was part of the entity.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3736) NiFi not honoring the "nifi.content.claim.max.appendable.size" and "nifi.content.claim.max.flow.files" properties

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095235#comment-16095235
 ] 

ASF GitHub Bot commented on NIFI-3736:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2010
  
@m-hogue thanks for submitting the PR! This has been on my to-do list for a 
long time, and I'm very happy to have you knocking it out. The only concern 
that I have is the one that I mentioned inline, regarding what will happen if a 
user sets a very large value for the max.appendable.claim.size. Otherwise, I 
think all looks good.


> NiFi not honoring the "nifi.content.claim.max.appendable.size" and 
> "nifi.content.claim.max.flow.files" properties
> -
>
> Key: NIFI-3736
> URL: https://issues.apache.org/jira/browse/NIFI-3736
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Michael Hogue
>
> The nifi.properties file has two properties for controlling how many 
> FlowFiles to jam into one Content Claim. Unfortunately, it looks like this is 
> no longer honored in FileSystemRepository.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2010: NIFI-3736: change to honor nifi.content.claim.max.appendab...

2017-07-20 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2010
  
@m-hogue thanks for submitting the PR! This has been on my to-do list for a 
long time, and I'm very happy to have you knocking it out. The only concern 
that I have is the one that I mentioned inline, regarding what will happen if a 
user sets a very large value for the max.appendable.claim.size. Otherwise, I 
think all looks good.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3736) NiFi not honoring the "nifi.content.claim.max.appendable.size" and "nifi.content.claim.max.flow.files" properties

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095233#comment-16095233
 ] 

ASF GitHub Bot commented on NIFI-3736:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2010#discussion_r128609849
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
 ---
@@ -149,6 +154,10 @@ public FileSystemRepository(final NiFiProperties 
nifiProperties) throws IOExcept
 for (final Path path : fileRespositoryPaths.values()) {
 Files.createDirectories(path);
 }
+this.maxFlowFilesPerClaim = 
nifiProperties.getMaxFlowFilesPerClaim();
+this.writableClaimQueue  = new 
LinkedBlockingQueue<>(maxFlowFilesPerClaim);
+final String maxAppendableClaimSize = 
nifiProperties.getMaxAppendableClaimSize();
+this.maxAppendableClaimLength = 
DataUnit.parseDataSize(maxAppendableClaimSize, DataUnit.B).intValue();
--- End diff --

If this value gets set to something like "10 GB", this could cause some 
really problematic (and difficult to track down) behavior because the value 
would overflow to a negative number. It is probably best to use longValue() and 
then perhaps even cap it at something like 100 MB or 10 MB; if the value is 
larger than that, just emit a WARN log event and use the max value.
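
A minimal sketch of the capping approach suggested above, not the actual 
FileSystemRepository patch: the helper class, the 100 MB ceiling, and the SLF4J 
logger are assumptions for illustration, while DataUnit.parseDataSize is the same 
call used in the diff under review.

    import org.apache.nifi.processor.DataUnit;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Hypothetical helper, for illustration only.
    class AppendableClaimSizeHelper {
        private static final Logger LOG = LoggerFactory.getLogger(AppendableClaimSizeHelper.class);
        // Illustrative ceiling of 100 MB, per the suggestion above.
        private static final long MAX_APPENDABLE_CLAIM_CAP = 100L * 1024 * 1024;

        static long resolveMaxAppendableClaimLength(final String configuredSize) {
            // parse as a long so a value such as "10 GB" cannot overflow an int into a negative number
            final long parsed = DataUnit.parseDataSize(configuredSize, DataUnit.B).longValue();
            if (parsed > MAX_APPENDABLE_CLAIM_CAP) {
                LOG.warn("Configured max appendable claim size {} exceeds the supported maximum of {} bytes; using the maximum instead",
                        configuredSize, MAX_APPENDABLE_CLAIM_CAP);
                return MAX_APPENDABLE_CLAIM_CAP;
            }
            return parsed;
        }
    }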


> NiFi not honoring the "nifi.content.claim.max.appendable.size" and 
> "nifi.content.claim.max.flow.files" properties
> -
>
> Key: NIFI-3736
> URL: https://issues.apache.org/jira/browse/NIFI-3736
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Michael Hogue
>
> The nifi.properties file has two properties for controlling how many 
> FlowFiles to jam into one Content Claim. Unfortunately, it looks like this is 
> no longer honored in FileSystemRepository.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2010: NIFI-3736: change to honor nifi.content.claim.max.a...

2017-07-20 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2010#discussion_r128609849
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
 ---
@@ -149,6 +154,10 @@ public FileSystemRepository(final NiFiProperties 
nifiProperties) throws IOExcept
 for (final Path path : fileRespositoryPaths.values()) {
 Files.createDirectories(path);
 }
+this.maxFlowFilesPerClaim = 
nifiProperties.getMaxFlowFilesPerClaim();
+this.writableClaimQueue  = new 
LinkedBlockingQueue<>(maxFlowFilesPerClaim);
+final String maxAppendableClaimSize = 
nifiProperties.getMaxAppendableClaimSize();
+this.maxAppendableClaimLength = 
DataUnit.parseDataSize(maxAppendableClaimSize, DataUnit.B).intValue();
--- End diff --

If this value gets set to something like "10 GB", this could cause some 
really problematic (and difficult to track down) behavior because the value 
would overflow to a negative number. It is probably best to use longValue() and 
then perhaps even cap it at something like 100 MB or 10 MB; if the value is 
larger than that, just emit a WARN log event and use the max value.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #2024: NIFI-4201: Initial implementation of processors for...

2017-07-20 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2024

NIFI-4201: Initial implementation of processors for interacting with …

…Kafka 0.11

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4201

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2024.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2024


commit 5756b53f83fb459d55cbc203edb9d3b1f7aad0b9
Author: Mark Payne 
Date:   2017-07-20T18:01:26Z

NIFI-4201: Initial implementation of processors for interacting with Kafka 
0.11




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4201) Add Processors for interacting with Kafka 0.11.x

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095213#comment-16095213
 ] 

ASF GitHub Bot commented on NIFI-4201:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2024

NIFI-4201: Initial implementation of processors for interacting with …

…Kafka 0.11

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4201

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2024.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2024


commit 5756b53f83fb459d55cbc203edb9d3b1f7aad0b9
Author: Mark Payne 
Date:   2017-07-20T18:01:26Z

NIFI-4201: Initial implementation of processors for interacting with Kafka 
0.11




> Add Processors for interacting with Kafka 0.11.x
> 
>
> Key: NIFI-4201
> URL: https://issues.apache.org/jira/browse/NIFI-4201
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mark Payne
>Assignee: Mark Payne
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095061#comment-16095061
 ] 

ASF GitHub Bot commented on NIFI-1580:
--

Github user yuri1969 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2009#discussion_r128583974
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-graph.js
 ---
@@ -198,13 +201,13 @@
 var nfGraph = {
 init: function () {
 // initialize the object responsible for each type of component
-nfLabel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
+nfLabel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu, nfQuickSelect);
 nfFunnel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
-nfPort.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
+nfPort.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu, nfQuickSelect);
 nfRemoteProcessGroup.init(nfConnectable, nfDraggable, 
nfSelectable, nfContextMenu);
--- End diff --

@scottyaslan OK, I'll enable configuring via double-click for RPGs.


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing nifi flows. 
> Each time the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - 
> or maybe the title area - to display the config dialog.
> This could also be designed as a configuration of the UI that the user can 
> define (whether double-clicking opens the config dialog, does something else, or 
> simply does nothing)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2009: NIFI-1580 - Allow double-click to display config

2017-07-20 Thread yuri1969
Github user yuri1969 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2009#discussion_r128583974
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-graph.js
 ---
@@ -198,13 +201,13 @@
 var nfGraph = {
 init: function () {
 // initialize the object responsible for each type of component
-nfLabel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
+nfLabel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu, nfQuickSelect);
 nfFunnel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
-nfPort.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
+nfPort.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu, nfQuickSelect);
 nfRemoteProcessGroup.init(nfConnectable, nfDraggable, 
nfSelectable, nfContextMenu);
--- End diff --

@scottyaslan OK, I'll enable configuring via double-click for RPGs.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (NIFI-3376) Implement content repository ResourceClaim compaction

2017-07-20 Thread Michael Hogue (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Hogue reassigned NIFI-3376:
---

Assignee: Michael Hogue

> Implement content repository ResourceClaim compaction
> -
>
> Key: NIFI-3376
> URL: https://issues.apache.org/jira/browse/NIFI-3376
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.1, 1.1.1
>Reporter: Michael Moser
>Assignee: Michael Hogue
>
> On NiFi systems that deal with many files whose size is less than 1 MB, we 
> often see that the actual disk usage of the content_repository is much 
> greater than the size of flowfiles that NiFi reports are in its queues.  As 
> an example, NiFi may report "50,000 / 12.5 GB" but the content_repository 
> takes up 240 GB of its file system.  This leads to scenarios where a 500 GB 
> content_repository file system gets 100% full, but "I only had 40 GB of data 
> in my NiFi!"
> When several content claims exist in a single resource claim, and most but 
> not all content claims are terminated, the entire resource claim is still not 
> eligible for deletion or archive.  This could mean that only one 10 KB 
> content claim out of a 1 MB resource claim is counted by NiFi as existing in 
> its queues.
> If a particular flow has a slow egress point where flowfiles could back up 
> and remain on the system longer than expected, this problem is exacerbated.
> A potential solution is to compact resource claim files on disk. A background 
> thread could examine all resource claims, and for those that get "old" and 
> whose active content claim usage drops below a threshold, then rewrite the 
> resource claim file.
> A potential work-around is to allow modification of the FileSystemRepository 
> MAX_APPENDABLE_CLAIM_LENGTH to make it a smaller number.  This would increase 
> the probability that the content claims reference count in a resource claim 
> would reach 0 and the resource claim becomes eligible for deletion/archive.  
> Let users trade-off performance for more accurate accounting of NiFi queue 
> size to content repository size.
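
A rough sketch of the background compaction loop described in the issue above. 
ClaimUsage and its methods (isOld, activeFraction, rewrite) are hypothetical 
stand-ins rather than real content-repository APIs, and the threshold and 
schedule are illustrative only.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical model of the compaction idea; not NiFi's actual repository API.
    class ResourceClaimCompactor {
        // Stand-in for "a resource claim plus its usage accounting".
        interface ClaimUsage {
            boolean isOld();          // e.g. not written to for some configured period
            double activeFraction();  // referenced bytes divided by total bytes in the claim file
            void rewrite();           // copy only the still-referenced content claims into a new file
        }

        private static final double COMPACTION_THRESHOLD = 0.25; // illustrative

        void start(final Iterable<ClaimUsage> claims) {
            final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
            executor.scheduleWithFixedDelay(() -> {
                for (final ClaimUsage claim : claims) {
                    // only compact claims that are old and mostly dead space
                    if (claim.isOld() && claim.activeFraction() < COMPACTION_THRESHOLD) {
                        claim.rewrite();
                    }
                }
            }, 1, 1, TimeUnit.MINUTES);
        }
    }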



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (NIFI-91) Allow the changing of sensitive property properties via the UI

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-91?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reopened NIFI-91:


Thanks [~aldrin], completely missed that!

> Allow the changing of sensitive property properties via the UI
> --
>
> Key: NIFI-91
> URL: https://issues.apache.org/jira/browse/NIFI-91
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Reporter: Matt Gilman
>Priority: Minor
>
> Provide UI functionality to allow changing of the sensitive property 
> properties on the fly, during which any of the necessary mechanics behind the 
> scenes are also provided.
> Currently, a default and hardcoded sensitive properties key is used to 
> facilitate new users getting the application up and running.  Should a user 
> make use of extensions that have sensitive properties, they are currently 
> bound to that value should they wish to adjust the key being used via the 
> nifi.properties file.  This additionally requires a restart. 
> Previous description below:
> Sensitive properties are created with a default key if none is specified via 
> nifi.properties.  This was done to provide a way for users to get the 
> application running out of the box.  However, if users configure sensitive 
> properties there is not a good way for them to recover/convert these items to 
> a new key should one be desired.
> This utility would aid in the above transformation process.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-91) Allow the changing of sensitive property properties via the UI

2017-07-20 Thread Aldrin Piri (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-91?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095034#comment-16095034
 ] 

Aldrin Piri commented on NIFI-91:
-

[~pvillard] NIFI-536 was closed as a duplicate of this one but was never 
implemented.  We should keep one of them open.

> Allow the changing of sensitive property properties via the UI
> --
>
> Key: NIFI-91
> URL: https://issues.apache.org/jira/browse/NIFI-91
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Reporter: Matt Gilman
>Priority: Minor
>
> Provide UI functionality to allow changing of the sensitive property 
> properties on the fly, during which any of the necessary mechanics behind the 
> scenes are also provided.
> Currently, a default and hardcoded sensitive properties key is used to 
> facilitate new users getting the application up and running.  Should a user 
> make use of extensions that have sensitive properties, they are currently 
> bound to that value should they wish to adjust the key being used via the 
> nifi.properties file.  This additionally requires a restart. 
> Previous description below:
> Sensitive properties are created with a default key if none is specified via 
> nifi.properties.  This was done to provide a way for users to get the 
> application running out of the box.  However, if users configure sensitive 
> properties there is not a good way for them to recover/convert these items to 
> a new key should one be desired.
> This utility would aid in the above transformation process.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4211) Support shared provider configuration information for authentication and authorization

2017-07-20 Thread Yolanda M. Davis (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yolanda M. Davis updated NIFI-4211:
---
Description: The introduction of the new UserGroup and Policy provider 
interfaces highlight an opportunity to allow configuration information for 
connecting to source systems, such as LDAP/Kerberos, to be shared between 
authentication and authorization providers.  For example, when using a single 
LDAP source for authentication and for user/group lookup to correlate with 
policy, currently users need to setup the source's config information (which 
includes connection, authentication strategy, Manager DN and password, etc) in 
both the login-identity-providers.xml and the authorizers.xml files.  Having a 
way to share this information between these two features I think would help 
simplify setup.  (was: With the introduction of the new UserGroup and Policy 
provider interfaces there appears to be an opportunity to allow users to share 
UserGroup provider config information for login identity (authentication).  For 
example, when using a single LDAP source for authentication and to pull 
user/group information to correlate with policy, currently users would need to 
setup configuration in both the login-identity-providers.xml and the 
authorizers.xml files with essentially the same information. Having a way to 
share this provider information between these two features I think would help 
simplify setup.)

> Support shared provider configuration information for authentication and 
> authorization
> --
>
> Key: NIFI-4211
> URL: https://issues.apache.org/jira/browse/NIFI-4211
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Yolanda M. Davis
>
> The introduction of the new UserGroup and Policy provider interfaces 
> highlight an opportunity to allow configuration information for connecting to 
> source systems, such as LDAP/Kerberos, to be shared between authentication 
> and authorization providers.  For example, when using a single LDAP source 
> for authentication and for user/group lookup to correlate with policy, 
> currently users need to setup the source's config information (which includes 
> connection, authentication strategy, Manager DN and password, etc) in both 
> the login-identity-providers.xml and the authorizers.xml files.  Having a way 
> to share this information between these two features I think would help 
> simplify setup.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-666) Need UI mechanism to acquire version number of custom components

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-666.
-
Resolution: Duplicate

> Need UI mechanism to acquire version number of custom components
> 
>
> Key: NIFI-666
> URL: https://issues.apache.org/jira/browse/NIFI-666
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Core UI, Documentation & Website, 
> Extensions
>Reporter: Robert J. Mills
>Priority: Minor
>
> When attempting to troubleshoot a custom component in a NiFi installation, at 
> least two pieces of information may be needed:
> 1. Which version of NiFi are you using?
> 2. Which version of the component are you using?
> Currently, the only available mechanism(s) (for determining the version of 
> the component), i.e., 
>  - examining the lib directory to get the version of the nar, or 
>  - opening the log to the point where NiFi first "opened"/loaded the nar
>  requires log in access to the machine where NiFi is running.  
> Many environments (particularly corporate) are likely to be wary of granting 
> log in access to all NiFi users (not to speak of the difficulty in explaining 
> to a generic user how to find the version number of the component in 
> question).
> Recommend making custom component/nar version numbers accessible in the NiFi 
> User Interface (UI).
> Here are a few options for consideration:
>  - Include a list of nars (and their version numbers) alongside the Apache 
> NiFi version number (currently provided in the about window/dialog). 
>  - For each components documentation, include the name and version of the 
> deployable unit (nar) from which it was loaded.
>  - Add the information to the face of processor(s), e.g., enable a portion of 
> the face of a processor (perhaps an i (for information) or a question mark 
> icon) that upon mouse over, displays information about the processor, 
> including the deployable unit (nar name and version) from which it came.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-806) Starting an invalid processor has different behavior if processor is selected or not

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-806.
-
Resolution: Cannot Reproduce

> Starting an invalid processor has different behavior if processor is selected 
> or not
> 
>
> Key: NIFI-806
> URL: https://issues.apache.org/jira/browse/NIFI-806
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.2.1
>Reporter: Dan Bress
>Priority: Minor
>  Labels: logging
>
> The behavior of how the application reports problems with a processor in the 
> log is slightly different depending on whether or not you have a processor 
> selected when you hit the "start" button.
> Steps to reproduce
> 1) drag an update attribute
> 2) set "success" to auto terminate
> 3) drag a new update attribute
> 4) click on the graph such that no processors are selected
> 5) press "start" in the toolbar
> 6) observe in nifi-app.log an exception stack trace is printed
> 7) press stop
> 8) select both processors
> 9) press "start" in the toolbar
> 10) observe in nifi-app.log there is no stack trace
> Not sure which behavior is correct.  Generally the app leans towards not 
> printing stack traces, so I might vote to follow that precedent.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3376) Implement content repository ResourceClaim compaction

2017-07-20 Thread Michael Hogue (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094986#comment-16094986
 ] 

Michael Hogue commented on NIFI-3376:
-

compaction work here: https://github.com/m-hogue/nifi/tree/NIFI-3376

> Implement content repository ResourceClaim compaction
> -
>
> Key: NIFI-3376
> URL: https://issues.apache.org/jira/browse/NIFI-3376
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.1, 1.1.1
>Reporter: Michael Moser
>
> On NiFi systems that deal with many files whose size is less than 1 MB, we 
> often see that the actual disk usage of the content_repository is much 
> greater than the size of flowfiles that NiFi reports are in its queues.  As 
> an example, NiFi may report "50,000 / 12.5 GB" but the content_repository 
> takes up 240 GB of its file system.  This leads to scenarios where a 500 GB 
> content_repository file system gets 100% full, but "I only had 40 GB of data 
> in my NiFi!"
> When several content claims exist in a single resource claim, and most but 
> not all content claims are terminated, the entire resource claim is still not 
> eligible for deletion or archive.  This could mean that only one 10 KB 
> content claim out of a 1 MB resource claim is counted by NiFi as existing in 
> its queues.
> If a particular flow has a slow egress point where flowfiles could back up 
> and remain on the system longer than expected, this problem is exacerbated.
> A potential solution is to compact resource claim files on disk. A background 
> thread could examine all resource claims, and for those that get "old" and 
> whose active content claim usage drops below a threshold, then rewrite the 
> resource claim file.
> A potential work-around is to allow modification of the FileSystemRepository 
> MAX_APPENDABLE_CLAIM_LENGTH to make it a smaller number.  This would increase 
> the probability that the content claims reference count in a resource claim 
> would reach 0 and the resource claim becomes eligible for deletion/archive.  
> Let users trade-off performance for more accurate accounting of NiFi queue 
> size to content repository size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-991) Add "upsert" verb support for ConvertJSONToSQL processor

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-991.
-
Resolution: Duplicate

> Add "upsert" verb support for ConvertJSONToSQL processor
> 
>
> Key: NIFI-991
> URL: https://issues.apache.org/jira/browse/NIFI-991
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: Randy Gelhausen
>
> Apache Phoenix supports only the "upsert" SQL verb. To support Nifi->Phoenix 
> flows, UPSERT_TYPE should be added to the ConvertJSONToSQL standard processor.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-822) Reflect @RequiresInput in generated documentation

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-822.
-
Resolution: Duplicate

> Reflect @RequiresInput in generated documentation
> -
>
> Key: NIFI-822
> URL: https://issues.apache.org/jira/browse/NIFI-822
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core Framework, Documentation & Website
>Reporter: Dan Bress
>
> Update documentation generator to detect usage of @RequiresInput on a 
> Processor and reflect this in the HTML output.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-3799) Document and / or visually show if a processor accepts "input" connection

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-3799:
-
   Resolution: Fixed
Fix Version/s: 1.3.0
   Status: Resolved  (was: Patch Available)

> Document and / or visually show if a processor accepts "input" connection
> 
>
> Key: NIFI-3799
> URL: https://issues.apache.org/jira/browse/NIFI-3799
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Documentation & Website
>Reporter: Juan C. Sequeiros
>Assignee: Pierre Villard
>Priority: Minor
> Fix For: 1.3.0
>
>
> Today the only way of knowing if a processor accepts input connection is 
> either by trying it on the UI or looking at the processor code.
> @InputRequirement(Requirement.INPUT_FORBIDDEN)
> or
> @InputRequirement(Requirement.INPUT_REQUIRED) 
> It would help a DFM if there is at least some documentation on this either 
> under usage link or a tag  to have alternative ways of knowing 
> proactively.
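
For reference, a minimal sketch of where the annotation that would drive such 
documentation sits on a processor class; GenerateSomething is a hypothetical 
processor, not part of NiFi.

    import org.apache.nifi.annotation.behavior.InputRequirement;
    import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
    import org.apache.nifi.processor.AbstractProcessor;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.exception.ProcessException;

    // Hypothetical source-style processor: the class-level annotation below is what a
    // documentation generator could surface so a DFM knows input is not accepted.
    @InputRequirement(Requirement.INPUT_FORBIDDEN)
    public class GenerateSomething extends AbstractProcessor {
        @Override
        public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
            // no incoming connections are allowed, so nothing is pulled from an upstream queue here
        }
    }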



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-3799) Document and / or visually show if a processor accepts "input" connection

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-3799:
-
Component/s: Documentation & Website

> Document and / or visually show if a processor accepts "input" connection
> 
>
> Key: NIFI-3799
> URL: https://issues.apache.org/jira/browse/NIFI-3799
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Documentation & Website
>Reporter: Juan C. Sequeiros
>Assignee: Pierre Villard
>Priority: Minor
> Fix For: 1.3.0
>
>
> Today the only way of knowing if a processor accepts input connection is 
> either by trying it on the UI or looking at the processor code.
> @InputRequirement(Requirement.INPUT_FORBIDDEN)
> or
> @InputRequirement(Requirement.INPUT_REQUIRED) 
> It would help a DFM if there is at least some documentation on this either 
> under usage link or a tag  to have alternative ways of knowing 
> proactively.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4211) Support shared provider configuration information for authentication and authorization

2017-07-20 Thread Yolanda M. Davis (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yolanda M. Davis updated NIFI-4211:
---
Description: With the introduction of the new UserGroup and Policy provider 
interfaces there appears to be an opportunity to allow users to share UserGroup 
provider config information for login identity (authentication).  For example, 
when using a single LDAP source for authentication and to pull user/group 
information to correlate with policy, currently users would need to setup 
configuration in both the login-identity-providers.xml and the authorizers.xml 
files with essentially the same information. Having a way to share this 
provider information between these two features I think would help simplify 
setup.  (was: With the introduction of the new UserGroup and Policy provider 
interfaces there appears to be an opportunity to allow users to share UserGroup 
provider information for login identity (authentication).  For example, when 
using a single LDAP source for authentication and to pull user/group 
information to correlate with policy, currently users would need to setup 
configurations in both the login-identity-providers.xml and the authorizers.xml 
files with essentially the same information. Having a way to share this 
provider information between these two features I think would help simplify 
setup.)
Summary: Support shared provider configuration information for 
authentication and authorization  (was: Support shared providers for 
authentication and authorization)

> Support shared provider configuration information for authentication and 
> authorization
> --
>
> Key: NIFI-4211
> URL: https://issues.apache.org/jira/browse/NIFI-4211
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Yolanda M. Davis
>
> With the introduction of the new UserGroup and Policy provider interfaces 
> there appears to be an opportunity to allow users to share UserGroup provider 
> config information for login identity (authentication).  For example, when 
> using a single LDAP source for authentication and to pull user/group 
> information to correlate with policy, currently users would need to setup 
> configuration in both the login-identity-providers.xml and the 
> authorizers.xml files with essentially the same information. Having a way to 
> share this provider information between these two features I think would help 
> simplify setup.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-709) Add a PutSolr processor that supports Avro

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-709.
-
Resolution: Duplicate

Closing in favour of NIFI-4035 and the new record paradigm.

> Add a PutSolr processor that supports Avro
> --
>
> Key: NIFI-709
> URL: https://issues.apache.org/jira/browse/NIFI-709
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> The existing PutSolrContentStream is great for streaming json, xml, and csv 
> content directly to Solr, but it would be nice to also directly support Avro 
> since this will be a very common data format.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-610) PutJMS should allow the Destination Name to support the Expression Language

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-610.
-
Resolution: Won't Do

Closing per discussion in this JIRA.

> PutJMS should allow the Destination Name to support the Expression Language
> ---
>
> Key: NIFI-610
> URL: https://issues.apache.org/jira/browse/NIFI-610
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>
> PutJMS does not allow the 'Destination Name' property to contain Expression 
> Language. This was done so that we can create a single JmsProducer object and 
> send all messages using this producer. However, this is very limiting, 
> especially in the case that the data was pulled from GetJMS and has a 
> 'jms.JMSReplyTo' attribute.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4211) Support shared providers for authentication and authorization

2017-07-20 Thread Yolanda M. Davis (JIRA)
Yolanda M. Davis created NIFI-4211:
--

 Summary: Support shared providers for authentication and 
authorization
 Key: NIFI-4211
 URL: https://issues.apache.org/jira/browse/NIFI-4211
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.4.0
Reporter: Yolanda M. Davis


With the introduction of the new UserGroup and Policy provider interfaces there 
appears to be an opportunity to allow users to share UserGroup provider 
information for login identity (authentication).  For example, when using a 
single LDAP source for authentication and to pull user/group information to 
correlate with policy, currently users would need to setup configurations in 
both the login-identity-providers.xml and the authorizers.xml files with 
essentially the same information. Having a way to share this provider 
information between these two features I think would help simplify setup.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-445) Integrate documentation generated by build into website

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-445.
-
Resolution: Duplicate

Closing as duplicate of NIFI-706

> Integrate documentation generated by build into website
> ---
>
> Key: NIFI-445
> URL: https://issues.apache.org/jira/browse/NIFI-445
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Dan Bress
>Assignee: Dan Bress
>  Labels: assembly, documentation, javadoc
>
> Currently documentation artifacts generated by the build are manually copied 
> from the build output to the website.
> Currently this includes:
> - [NiFi 
> Overview|https://nifi.incubator.apache.org/docs/nifi-docs/overview.html]
> - [NiFi User 
> Guide|https://nifi.incubator.apache.org/docs/nifi-docs/user-guide.html]
> - [NiFi Developer 
> Guide|https://nifi.incubator.apache.org/docs/nifi-docs/developer-guide.html]
> - [NiFi Admin 
> Guide|https://nifi.incubator.apache.org/docs/nifi-docs/administration-guide.html]
> It should also include
> - Javadocs
> - Processor/ControllerService/Reporting task marketplace
> There should be a simple way to do a build such that all these artifacts are 
> generated, put in a single tar.gz so that they can be uploaded to the website.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-91) Allow the changing of sensitive property properties via the UI

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-91?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-91.

Resolution: Duplicate

Closing as duplicate of NIFI-536.

> Allow the changing of sensitive property properties via the UI
> --
>
> Key: NIFI-91
> URL: https://issues.apache.org/jira/browse/NIFI-91
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Reporter: Matt Gilman
>Priority: Minor
>
> Provide UI functionality to allow changing of the sensitive property 
> properties on the fly, during which any of the necessary mechanics behind the 
> scenes are also provided.
> Currently, a default and hardcoded sensitive properties key is used to 
> facilitate new users getting the application up and running.  Should a user 
> make use of extensions that have sensitive properties, they are currently 
> bound to that value should they wish to adjust the key being used via the 
> nifi.properties file.  This additionally requires a restart. 
> Previous description below:
> Sensitive properties are created with a default key if none is specified via 
> nifi.properties.  This was done to provide a way for users to get the 
> application running out of the box.  However, if users configure sensitive 
> properties there is not a good way for them to recover/convert these items to 
> a new key should one be desired.
> This utility would aid in the above transformation process.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-76) If Exception is thrown from a Connectable, framework should generate a bulletin

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-76?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-76.

Resolution: Cannot Reproduce

Closing as 'Cannot reproduce'. I believe this is fixed in the 1.x line, and 
probably in 0.x as well.

> If Exception is thrown from a Connectable, framework should generate a 
> bulletin
> ---
>
> Key: NIFI-76
> URL: https://issues.apache.org/jira/browse/NIFI-76
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Priority: Minor
>
> We see an issue if we have UnresolvedAddressException when using 
> site-to-site, for instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-48) Startup failures should be more graceful/user friendly

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-48?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-48.

Resolution: Fixed

Closing per discussion in this JIRA.

> Startup failures should be more graceful/user friendly
> --
>
> Key: NIFI-48
> URL: https://issues.apache.org/jira/browse/NIFI-48
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Joseph Witt
>
> Many startup issues result in massive stack traces and things which will 
> make users have to work much harder than necessary to see the real problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-48) Startup failures should be more graceful/user friendly

2017-07-20 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-48?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094947#comment-16094947
 ] 

Joseph Witt commented on NIFI-48:
-

i think this can be closed.

> Startup failures should be more graceful/user friendly
> --
>
> Key: NIFI-48
> URL: https://issues.apache.org/jira/browse/NIFI-48
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Joseph Witt
>
> Many startup issues result in massive stack traces and things which will 
> make users have to work much harder than necessary to see the real problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-48) Startup failures should be more graceful/user friendly

2017-07-20 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-48?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094943#comment-16094943
 ] 

Pierre Villard commented on NIFI-48:


[~joewitt] / [~markap14] - is it still relevant with the work performed in 
NIFI-532? I believe a bit of work has been done on the subject even though 
stack trace are still being displayed in the logs.

> Startup failures should be more graceful/user friendly
> --
>
> Key: NIFI-48
> URL: https://issues.apache.org/jira/browse/NIFI-48
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Joseph Witt
>
> Many startup issues result in massive stack traces and things which will 
> make users have to work much harder than necessary to see the real problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (NIFI-48) Startup failures should be more graceful/user friendly

2017-07-20 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-48?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094943#comment-16094943
 ] 

Pierre Villard edited comment on NIFI-48 at 7/20/17 4:42 PM:
-

[~joewitt] / [~markap14] - is it still relevant with the work performed in 
NIFI-532? I believe a bit of work has been done on the subject even though 
stack traces are still being displayed in the logs.


was (Author: pvillard):
[~joewitt] / [~markap14] - is it still relevant with the work performed in 
NIFI-532? I believe a bit of work has been done on the subject even though 
stack trace are still being displayed in the logs.

> Startup failures should be more graceful/user friendly
> --
>
> Key: NIFI-48
> URL: https://issues.apache.org/jira/browse/NIFI-48
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Joseph Witt
>
> Many startup issues result in massive stack traces and things which will 
> make users have to work much harder than necessary to see the real problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-47) Cluster Heartbeat Generation request read lock on every FlowFile Queue. Make it not so.

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-47?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-47.

Resolution: Won't Do

No longer relevant - closing per discussion in the JIRA.

> Cluster Heartbeat Generation request read lock on every FlowFile Queue.  Make 
> it not so.
> 
>
> Key: NIFI-47
> URL: https://issues.apache.org/jira/browse/NIFI-47
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Joseph Witt
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-40) Implement Provenance Query Language -- requires refactoring of Prov Repo API / Implementations

2017-07-20 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094931#comment-16094931
 ] 

Pierre Villard commented on NIFI-40:


[~markap14] - I see that commits have been merged into the following branch:
https://github.com/apache/nifi/tree/prov-query-language

Is it still relevant? Should this JIRA (and branch) should be closed?

> Implement Provenance Query Language -- requires refactoring of Prov Repo API 
> / Implementations
> --
>
> Key: NIFI-40
> URL: https://issues.apache.org/jira/browse/NIFI-40
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Mark Payne
>
> Prov Repo should return a StoredProvenanceRecord object that provides a 
> StorageLocation object and the Prov Record itself. This StorageLocation is a 
> marker interface; specific Prov Repo can create their own implementations to 
> lookup records. Attempt to lookup by Location will throw 
> IllegalArgumentException if Location is not valid for that repo.
> Compared to existing Persistent Prov Repo, also will need to make compression 
> better/seekable, or reading the records will take far too long to be very 
> usable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (NIFI-40) Implement Provenance Query Language -- requires refactoring of Prov Repo API / Implementations

2017-07-20 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094931#comment-16094931
 ] 

Pierre Villard edited comment on NIFI-40 at 7/20/17 4:34 PM:
-

[~markap14] - I see that commits have been merged into the following branch:
https://github.com/apache/nifi/tree/prov-query-language

Is it still relevant? Should this JIRA (and branch) be closed?


was (Author: pvillard):
[~markap14] - I see that commits have been merged into the following branch:
https://github.com/apache/nifi/tree/prov-query-language

Is it still relevant? Should this JIRA (and branch) should be closed?

> Implement Provenance Query Language -- requires refactoring of Prov Repo API 
> / Implementations
> --
>
> Key: NIFI-40
> URL: https://issues.apache.org/jira/browse/NIFI-40
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Mark Payne
>
> Prov Repo should return a StoredProvenanceRecord object that provides a 
> StorageLocation object and the Prov Record itself. This StorageLocation is a 
> marker interface; specific Prov Repo can create their own implementations to 
> lookup records. Attempt to lookup by Location will throw 
> IllegalArgumentException if Location is not valid for that repo.
> Compared to existing Persistent Prov Repo, also will need to make compression 
> better/seekable, or reading the records will take far too long to be very 
> usable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-5) Add details to reason messages of standard validators

2017-07-20 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094928#comment-16094928
 ] 

Pierre Villard commented on NIFI-5:
---

Closing per [~trixpan]'s comment.
Seems to have been fixed for a while.

> Add details to reason messages of standard validators
> -
>
> Key: NIFI-5
> URL: https://issues.apache.org/jira/browse/NIFI-5
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Joseph Witt
>Assignee: Andre F de Miranda
>Priority: Minor
>
> The reason messages for the standard validators are vague and make it 
> difficult for admins/DFMs to diagnose certain problems.  For instance, when 
> one of our processors has properties for two or three directories and only 
> one does not exist, it just says 'directory does not exist' but doesn't 
> indicate which property that was for.
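
A minimal sketch of a validator whose explanation names the offending property, 
using NiFi's Validator interface and ValidationResult.Builder; the validator class 
itself is illustrative and not the actual StandardValidators change.

    import java.io.File;

    import org.apache.nifi.components.ValidationContext;
    import org.apache.nifi.components.ValidationResult;
    import org.apache.nifi.components.Validator;

    // Illustrative only; not the real StandardValidators implementation.
    public class DirectoryExistsValidator implements Validator {
        @Override
        public ValidationResult validate(final String subject, final String input, final ValidationContext context) {
            final boolean exists = input != null && new File(input).isDirectory();
            return new ValidationResult.Builder()
                    .subject(subject)   // the property name, e.g. "Input Directory"
                    .input(input)
                    .valid(exists)
                    // naming the property tells the admin/DFM which directory is missing
                    .explanation(exists ? "" : "Directory specified for '" + subject + "' does not exist")
                    .build();
        }
    }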



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-5) Add details to reason messages of standard validators

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-5.
---
Resolution: Fixed

> Add details to reason messages of standard validators
> -
>
> Key: NIFI-5
> URL: https://issues.apache.org/jira/browse/NIFI-5
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Joseph Witt
>Assignee: Andre F de Miranda
>Priority: Minor
>
> The reason messages for the standard validators are vague and make it 
> difficult for admins/DFMs to diagnose certain problems.  For instance, when 
> one of our processors has properties for two or three directories and only 
> one does not exist, it just says 'directory does not exist' but doesn't 
> indicate which property that was for.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp pull request #119: MINIFI-70: enhance site2site port negotia...

2017-07-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/119


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094786#comment-16094786
 ] 

ASF GitHub Bot commented on NIFI-1580:
--

Github user scottyaslan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2009#discussion_r128537257
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-graph.js
 ---
@@ -198,13 +201,13 @@
 var nfGraph = {
 init: function () {
 // initialize the object responsible for each type of component
-nfLabel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
+nfLabel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu, nfQuickSelect);
 nfFunnel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
-nfPort.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
+nfPort.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu, nfQuickSelect);
 nfRemoteProcessGroup.init(nfConnectable, nfDraggable, 
nfSelectable, nfContextMenu);
--- End diff --

Yea I think so... I mean... what other options are there for a double click 
on an RPG?


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing NiFi flows. 
> Each time, the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - 
> or maybe the title area - to display the config dialog.
> This could also be designed as a configuration of the UI that the user can 
> define (whether double-clicking opens the config dialog, does something else, or 
> simply does nothing).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2009: NIFI-1580 - Allow double-click to display config

2017-07-20 Thread scottyaslan
Github user scottyaslan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2009#discussion_r128537257
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-graph.js
 ---
@@ -198,13 +201,13 @@
 var nfGraph = {
 init: function () {
 // initialize the object responsible for each type of component
-nfLabel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
+nfLabel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu, nfQuickSelect);
 nfFunnel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
-nfPort.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
+nfPort.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu, nfQuickSelect);
 nfRemoteProcessGroup.init(nfConnectable, nfDraggable, 
nfSelectable, nfContextMenu);
--- End diff --

Yeah, I think so... I mean, what other options are there for a double-click 
on an RPG?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3931) putSftp process port property should support for expression language

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094771#comment-16094771
 ] 

ASF GitHub Bot commented on NIFI-3931:
--

Github user trixpan commented on the issue:

https://github.com/apache/nifi/pull/1968
  
LGTM


> putSftp process port property should support for expression language
> 
>
> Key: NIFI-3931
> URL: https://issues.apache.org/jira/browse/NIFI-3931
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Cheng Chin Tat
>Assignee: Pierre Villard
>Priority: Minor
>  Labels: easyfix
> Attachments: TestFTP.xml
>
>
> The PutSFTP processor's port property should support expression language so 
> that a dynamic port number can be passed to the processor at run time, 
> rather than presetting the port at design time.
> This change involves changing the PropertyDescriptor SFTP_PORT validator from 
> StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR to 
> StandardValidators.NON_EMPTY_VALIDATOR, along with related code.
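
A minimal sketch of what such a change could look like, assuming the usual
PropertyDescriptor builder and per-flow-file expression evaluation at run time;
the property name, description and default shown here are illustrative rather
than the actual patch:

{code}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.util.StandardValidators;

public class SftpPortSketch {

    // Port property with expression language enabled; validation is relaxed to
    // NON_EMPTY because the expression cannot be evaluated at design time.
    public static final PropertyDescriptor PORT = new PropertyDescriptor.Builder()
            .name("Port")
            .description("The port on the remote system to connect to")
            .required(true)
            .expressionLanguageSupported(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .defaultValue("22")
            .build();

    // At run time the expression is resolved against the incoming flow file's attributes.
    int resolvePort(final ProcessContext context, final FlowFile flowFile) {
        return context.getProperty(PORT).evaluateAttributeExpressions(flowFile).asInteger();
    }
}
{code}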



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1968: NIFI-3931 - Added EL to properties in SFTP transfer

2017-07-20 Thread trixpan
Github user trixpan commented on the issue:

https://github.com/apache/nifi/pull/1968
  
LGTM


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4200) Consider a ControlNiFi processor

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094766#comment-16094766
 ] 

ASF GitHub Bot commented on NIFI-4200:
--

Github user trixpan commented on the issue:

https://github.com/apache/nifi/pull/2022
  
@pvillard31 - I assume the processor should be able to control NiFi 
components only. Should we rename it to make this explicit? 


> Consider a ControlNiFi processor
> 
>
> Key: NIFI-4200
> URL: https://issues.apache.org/jira/browse/NIFI-4200
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> We frequently see on the mailing list the need to start/stop a processor 
> based on incoming flow files. At the moment, that's something that can be 
> scripted or that can be done using multiple InvokeHttp processors but it 
> requires a bit of work.
> Even though it is not really in the "NiFi way of thinking", it would be 
> interesting to have a processor with the following parameters:
> - NiFi REST API URL
> - Username
> - Password
> - Processor UUID (with expression language)
> - Action to perform (START, STOP, START/STOP, STOP/START)
> - Sleep duration (between the START and STOP calls when action is START/STOP, 
> or STOP/START)
> That would be helpful in use cases like:
> - start a workflow based on another workflow
> - start a processor that does not accept an incoming relationship, based on a flow file 
> - restart a processor to "refresh" its configuration when the processor 
> relies on configuration files that could be changed
> - have a "start once" behavior



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2022: NIFI-4200 - Initial commit for a ControlNiFi processor

2017-07-20 Thread trixpan
Github user trixpan commented on the issue:

https://github.com/apache/nifi/pull/2022
  
@pvillard31 - I assume the processor should be able to control NiFi 
components only. Should we rename it to make this explicit? 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (NIFI-4210) Add OpenId Connect support for authenticating users

2017-07-20 Thread Matt Gilman (JIRA)
Matt Gilman created NIFI-4210:
-

 Summary: Add OpenId Connect support for authenticating users
 Key: NIFI-4210
 URL: https://issues.apache.org/jira/browse/NIFI-4210
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework, Core UI
Reporter: Matt Gilman
Assignee: Matt Gilman


Add support for authenticating users with the OpenID Connect specification. 
Evaluate whether a new extension point is necessary to allow a given 
provider to supply custom code, for instance to implement custom token 
validation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4209) Configuring path properties on TailFile in Windows with slashes will fail to match when configured for multifiles

2017-07-20 Thread Aldrin Piri (JIRA)
Aldrin Piri created NIFI-4209:
-

 Summary: Configuring path properties on TailFile in Windows with 
slashes will fail to match when configured for multifiles
 Key: NIFI-4209
 URL: https://issues.apache.org/jira/browse/NIFI-4209
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.3.0, 1.2.0
 Environment: Windows
Reporter: Aldrin Piri


Windows will default to backslashes for paths, but will also accept slashes.  
When configured with a "Tailing Mode" of "Multiple Files" on Windows, if the user 
opts to use slashes for the path (using non-escaped backslashes will cause 
a validation error), this will validate appropriately and allow the processor 
to be started.  

When running, the TailFile listing comes back in the 
default Windows format using backslashes.  This causes the regex to match none 
of the paths, so no files are included for consideration.
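
One possible mitigation, shown only as a sketch and not necessarily the fix that
will be applied, is to normalize the separators of the listed paths before
applying the user-supplied regular expression:

{code}
import java.util.regex.Pattern;

public class PathMatchSketch {

    // Normalize Windows backslashes before matching the user-supplied pattern,
    // which is written with forward slashes (e.g. "C:/logs/.*\\.log").
    static boolean matches(final String listedPath, final Pattern configuredPattern) {
        final String normalized = listedPath.replace('\\', '/');
        return configuredPattern.matcher(normalized).matches();
    }

    public static void main(final String[] args) {
        final Pattern pattern = Pattern.compile("C:/logs/.*\\.log");
        // A Windows-style listing result still matches after normalization.
        System.out.println(matches("C:\\logs\\test.log", pattern)); // prints true
    }
}
{code}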



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4208) Node failed to join cluster due to NullPointerException

2017-07-20 Thread Mark Payne (JIRA)
Mark Payne created NIFI-4208:


 Summary: Node failed to join cluster due to NullPointerException
 Key: NIFI-4208
 URL: https://issues.apache.org/jira/browse/NIFI-4208
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Mark Payne
Assignee: Mark Payne
Priority: Critical
 Fix For: 1.4.0


A clustered node ran out of disk space. Upon restart, I came across the 
following error:

2017-07-20 09:03:00,988 ERROR [main] o.a.nifi.controller.StandardFlowService 
Failed to load flow from cluster due to: 
org.apache.nifi.cluster.ConnectionException: Failed to connect node to cluster 
due to: java.lang.NullPointerException
org.apache.nifi.cluster.ConnectionException: Failed to connect node to cluster 
due to: java.lang.NullPointerException
at 
org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:945)
at 
org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:515)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:800)
at org.apache.nifi.NiFi.<init>(NiFi.java:160)
at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: java.lang.NullPointerException: null
at 
org.apache.nifi.controller.repository.RepositoryRecordSerde.getRecordIdentifier(RepositoryRecordSerde.java:43)
at 
org.apache.nifi.controller.repository.RepositoryRecordSerde.getRecordIdentifier(RepositoryRecordSerde.java:26)
at 
org.wali.MinimalLockingWriteAheadLog$Partition.recoverNextTransaction(MinimalLockingWriteAheadLog.java:1132)
at 
org.wali.MinimalLockingWriteAheadLog.recoverFromEdits(MinimalLockingWriteAheadLog.java:459)
at 
org.wali.MinimalLockingWriteAheadLog.recoverRecords(MinimalLockingWriteAheadLog.java:301)
at 
org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.loadFlowFiles(WriteAheadFlowFileRepository.java:381)
at 
org.apache.nifi.controller.FlowController.initializeFlow(FlowController.java:713)
at 
org.apache.nifi.controller.StandardFlowService.initializeController(StandardFlowService.java:955)
at 
org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:927)
... 4 common frames omitted



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4206) Include proxy instructions in admin guide

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094715#comment-16094715
 ] 

ASF GitHub Bot commented on NIFI-4206:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2023
  
@bond- Another commit has been pushed which includes a brief example for 
NiFi specific configuration. I am not super familiar with proxy configurations 
so if you have any additional suggestions that should be added just let me 
know. If you wanted to supply a patch with specific details based on your 
experience thus far, I'd be happy to include it in this PR. Thanks!


> Include proxy instructions in admin guide
> -
>
> Key: NIFI-4206
> URL: https://issues.apache.org/jira/browse/NIFI-4206
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Documentation & Website
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Minor
> Fix For: 1.4.0
>
>
> Update the admin guide with instructions when running behind a proxy.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2023: NIFI-4206: Proxy instructions in Admin Guide

2017-07-20 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2023
  
@bond- Another commit has been pushed which includes a brief example for 
NiFi specific configuration. I am not super familiar with proxy configurations 
so if you have any additional suggestions that should be added just let me 
know. If you wanted to supply a patch with specific details based on your 
experience thus far, I'd be happy to include it in this PR. Thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (NIFI-4207) PutSlack processor does not expose proxy settings

2017-07-20 Thread Andre F de Miranda (JIRA)
Andre F de Miranda created NIFI-4207:


 Summary: PutSlack processor does not expose proxy settings
 Key: NIFI-4207
 URL: https://issues.apache.org/jira/browse/NIFI-4207
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Andre F de Miranda






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp issue #119: MINIFI-70: enhance site2site port negotiation

2017-07-20 Thread phrocker
Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/119
  
@benqiu2016 I'll get this merged shortly, thanks. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4184) I needed to put some attributes on REMOTE_GROUP and REMOTE_OWNER in the PutHDFS Processor

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094518#comment-16094518
 ] 

ASF GitHub Bot commented on NIFI-4184:
--

Github user panelladavide commented on the issue:

https://github.com/apache/nifi/pull/2007
  
Thank you so much!


>  I needed to put some attributes on REMOTE_GROUP and REMOTE_OWNER in the 
> PutHDFS Processor
> --
>
> Key: NIFI-4184
> URL: https://issues.apache.org/jira/browse/NIFI-4184
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: dav
>
>  I needed to set REMOTE_GROUP and REMOTE_OWNER from flow file attributes; in 
> order to achieve it I put expressionLanguageSupported(true) on the 
> PropertyDescriptors of REMOTE_GROUP and REMOTE_OWNER
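
A sketch of how the evaluated values might be applied is shown below. The
property names mirror the ones mentioned above and FileSystem.setOwner is the
standard Hadoop call, but the surrounding wiring is illustrative only:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.util.StandardValidators;

public class RemoteOwnerGroupSketch {

    static final PropertyDescriptor REMOTE_OWNER = new PropertyDescriptor.Builder()
            .name("Remote Owner")
            .description("Owner to set on the written file, e.g. ${hdfs.owner}")
            .expressionLanguageSupported(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();

    static final PropertyDescriptor REMOTE_GROUP = new PropertyDescriptor.Builder()
            .name("Remote Group")
            .description("Group to set on the written file, e.g. ${hdfs.group}")
            .expressionLanguageSupported(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();

    // Evaluate the expressions against the flow file and apply them to the written path.
    void applyOwnership(final ProcessContext context, final FlowFile flowFile,
                        final FileSystem hdfs, final Path written) throws IOException {
        final String owner = context.getProperty(REMOTE_OWNER).evaluateAttributeExpressions(flowFile).getValue();
        final String group = context.getProperty(REMOTE_GROUP).evaluateAttributeExpressions(flowFile).getValue();
        if (owner != null || group != null) {
            hdfs.setOwner(written, owner, group);
        }
    }
}
{code}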



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2007: NIFI-4184: PutHDFS Processor Expression language TRUE on R...

2017-07-20 Thread panelladavide
Github user panelladavide commented on the issue:

https://github.com/apache/nifi/pull/2007
  
Thank you so much!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4152) Create ListenTCPRecord Processor

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094493#comment-16094493
 ] 

ASF GitHub Bot commented on NIFI-4152:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1987#discussion_r128478338
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenTCPRecord.java
 ---
@@ -0,0 +1,432 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processor.util.listen.ListenerProperties;
+import org.apache.nifi.record.listen.SocketChannelRecordReader;
+import org.apache.nifi.record.listen.SocketChannelRecordReaderDispatcher;
+import org.apache.nifi.security.util.SslContextFactory;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.ssl.SSLContextService;
+
+import javax.net.ssl.SSLContext;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.NetworkInterface;
+import java.nio.channels.ServerSocketChannel;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+import static 
org.apache.nifi.processor.util.listen.ListenerProperties.NETWORK_INTF_NAME;
+
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@Tags({"listen", "tcp", "record", "tls", "ssl"})
+@CapabilityDescription("Listens for incoming TCP connections and reads 
data from each connection using a configured record " +
+"reader, and writes the records to a flow file using a configured 
record writer. The type of record reader selected will " +
+"determine how clients are expected to send data. For example, 
when using a Grok reader to read logs, a client can keep an " +
+"open connection and continuously stream data, but when using an 
JSON 

[jira] [Commented] (NIFI-4152) Create ListenTCPRecord Processor

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094492#comment-16094492
 ] 

ASF GitHub Bot commented on NIFI-4152:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1987#discussion_r128477951
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenTCPRecord.java
 ---
@@ -0,0 +1,432 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processor.util.listen.ListenerProperties;
+import org.apache.nifi.record.listen.SocketChannelRecordReader;
+import org.apache.nifi.record.listen.SocketChannelRecordReaderDispatcher;
+import org.apache.nifi.security.util.SslContextFactory;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.ssl.SSLContextService;
+
+import javax.net.ssl.SSLContext;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.NetworkInterface;
+import java.nio.channels.ServerSocketChannel;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+import static 
org.apache.nifi.processor.util.listen.ListenerProperties.NETWORK_INTF_NAME;
+
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@Tags({"listen", "tcp", "record", "tls", "ssl"})
+@CapabilityDescription("Listens for incoming TCP connections and reads 
data from each connection using a configured record " +
+"reader, and writes the records to a flow file using a configured 
record writer. The type of record reader selected will " +
+"determine how clients are expected to send data. For example, 
when using a Grok reader to read logs, a client can keep an " +
+"open connection and continuously stream data, but when using an 
JSON 

[GitHub] nifi pull request #1987: NIFI-4152 Initial commit of ListenTCPRecord

2017-07-20 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1987#discussion_r128478338
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenTCPRecord.java
 ---
@@ -0,0 +1,432 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processor.util.listen.ListenerProperties;
+import org.apache.nifi.record.listen.SocketChannelRecordReader;
+import org.apache.nifi.record.listen.SocketChannelRecordReaderDispatcher;
+import org.apache.nifi.security.util.SslContextFactory;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.ssl.SSLContextService;
+
+import javax.net.ssl.SSLContext;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.NetworkInterface;
+import java.nio.channels.ServerSocketChannel;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+import static 
org.apache.nifi.processor.util.listen.ListenerProperties.NETWORK_INTF_NAME;
+
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@Tags({"listen", "tcp", "record", "tls", "ssl"})
+@CapabilityDescription("Listens for incoming TCP connections and reads 
data from each connection using a configured record " +
+"reader, and writes the records to a flow file using a configured 
record writer. The type of record reader selected will " +
+"determine how clients are expected to send data. For example, 
when using a Grok reader to read logs, a client can keep an " +
+"open connection and continuously stream data, but when using an 
JSON reader, the client cannot send an array of JSON " +
+"documents and then send another array on the same connection, as 
the reader would be in a bad state at that point. Records " +
+"will be read from the 

[GitHub] nifi pull request #1987: NIFI-4152 Initial commit of ListenTCPRecord

2017-07-20 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1987#discussion_r128477951
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenTCPRecord.java
 ---
@@ -0,0 +1,432 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.DataUnit;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processor.util.listen.ListenerProperties;
+import org.apache.nifi.record.listen.SocketChannelRecordReader;
+import org.apache.nifi.record.listen.SocketChannelRecordReaderDispatcher;
+import org.apache.nifi.security.util.SslContextFactory;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.ssl.SSLContextService;
+
+import javax.net.ssl.SSLContext;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.NetworkInterface;
+import java.nio.channels.ServerSocketChannel;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+import static 
org.apache.nifi.processor.util.listen.ListenerProperties.NETWORK_INTF_NAME;
+
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@Tags({"listen", "tcp", "record", "tls", "ssl"})
+@CapabilityDescription("Listens for incoming TCP connections and reads 
data from each connection using a configured record " +
+"reader, and writes the records to a flow file using a configured 
record writer. The type of record reader selected will " +
+"determine how clients are expected to send data. For example, 
when using a Grok reader to read logs, a client can keep an " +
+"open connection and continuously stream data, but when using an 
JSON reader, the client cannot send an array of JSON " +
+"documents and then send another array on the same connection, as 
the reader would be in a bad state at that point. Records " +
+"will be read from the 

[jira] [Commented] (NIFI-4205) TailFile can produce duplicated data when it wrongly assumes a file is rotated

2017-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094474#comment-16094474
 ] 

ASF GitHub Bot commented on NIFI-4205:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2021


> TailFile can produce duplicated data when it wrongly assumes a file is rotated
> --
>
> Key: NIFI-4205
> URL: https://issues.apache.org/jira/browse/NIFI-4205
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
> Fix For: 1.4.0
>
>
> TailFile checks whether a file being tailed has rotated using the following lines 
> of code, and if it determines that it has, it resets the reader and state for the file:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java#L693
> {code}
> // Check if file has rotated
> if (rolloverOccurred
> || (timestamp <= file.lastModified() && length > 
> file.length())
> || (timestamp < file.lastModified() && length >= 
> file.length())) {
> // Since file has rotated, we close the reader, create a new one, 
> and then reset our state.
> try {
> reader.close();
> getLogger().debug("Closed FileChannel {}", new 
> Object[]{reader, reader});
> } catch (final IOException ioe) {
> getLogger().warn("Failed to close reader for {} due to {}", 
> new Object[]{file, ioe});
> }
> reader = createReader(file, 0L);
> position = 0L;
> checksum.reset();
> }
> {code}
> The third condition, a newer timestamp but the same file size, can work 
> negatively in some situations. For example:
> # If an already fully tailed file is 'touched' and last modified timestamp is 
> updated. This is the easiest way to produce duplicated content.
> # On Windows, if a file is being tailed and updated by an app that writes 
> logs or some data to it consistently and frequently, then the update to the last 
> modified timestamp can lag behind the file size. I couldn't find 
> canonical docs for this behavior, but testing on Windows consistently 
> produces duplicated data. And the 3rd condition becomes true when such 
> duplication occurs.
> TailFile updates the file timestamp and length when it reads some data from 
> the file, specifically at these lines:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java#L765
> {code}
> timestamp = Math.max(state.getTimestamp(), file.lastModified());
> length = file.length();
> {code}
> As mentioned in the 2nd case above, file.lastModified can return a stale 
> timestamp (or one that simply has not been flushed yet) while length is replaced by 
> the latest value. After this happens, at the next onTrigger cycle, the 3rd 
> condition becomes true because it detects a newer timestamp.
> These conditions were added by NIFI-1170 and NIFI-1959.
> A simple flow, TailFile -> SplitText -> (FlowFiles are queued) -> 
> UpdateAttribute(Stopped) can reproduce this, with a command-line to simulate 
> frequently updated log file:
> {code}
> $ for i in `seq 1 1`; do echo $i >> test.log; done
> {code}
> The expected result is having 1 generated FlowFiles queued at the 
> relationship between SplitText and UpdateAttribute. But on Windows, more 
> FlowFiles are generated.
> By enabling debug level log for TailFile, following log messages can be 
> confirmed:
> {code}
> Add this to conf/logback.xml
> 
> 2017-07-19 10:22:07,134 DEBUG [Timer-Driven Process Thread-3] 
> o.a.nifi.processors.standard.TailFile TailFile[id=59ef6ea7-0
> 15d-1000-d6c2-c57a61e58a80] Recovering Rolled Off Files; total number of 
> files rolled off = 0
> 2017-07-19 10:22:07,134 DEBUG [Timer-Driven Process Thread-3] 
> o.a.nifi.processors.standard.TailFile 
> TailFile[id=59ef6ea7-015d-1000-d6c2-c57a61e58a80] Closed FileChannel 
> sun.nio.ch.FileChannelImpl@6d2a1eaf
> 2017-07-19 10:22:07,134 DEBUG [Timer-Driven Process Thread-3] 
> o.a.nifi.processors.standard.TailFile 
> TailFile[id=59ef6ea7-015d-1000-d6c2-c57a61e58a80] Created FileChannel 
> sun.nio.ch.FileChannelImpl@4aefddb3 for C:\logs\test.log
> 2017-07-19 10:22:07,150 DEBUG [Timer-Driven Process Thread-3] 
> o.a.nifi.processors.standard.TailFile 
> TailFile[id=59ef6ea7-015d-1000-d6c2-c57a61e58a80] Reading lines starting at 
> position 0
> {code}
> The 3rd condition should be removed to avoid having this duplicated data 
> ingested. Or if there's any specific 
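
For reference, a sketch of the rotation check with the problematic third
condition dropped is shown below. It mirrors the code block quoted above,
trimmed to the relevant comparison; it is not a verbatim copy of the committed
fix:

{code}
import java.io.File;

public class RotationCheckSketch {

    // Rotation is assumed only when a rollover was detected or the tailed file is
    // now shorter than the data already read; a newer timestamp with the same
    // length no longer triggers a reset, so previously tailed data is not re-read.
    static boolean fileHasRotated(final boolean rolloverOccurred, final long timestamp,
                                  final long length, final File file) {
        return rolloverOccurred
                || (timestamp <= file.lastModified() && length > file.length());
    }
}
{code}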

[jira] [Updated] (NIFI-4205) TailFile can produce duplicated data when it wrongly assumes a file is rotated

2017-07-20 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4205:
-
   Resolution: Fixed
Fix Version/s: 1.4.0
   Status: Resolved  (was: Patch Available)

> TailFile can produce duplicated data when it wrongly assumes a file is rotated
> --
>
> Key: NIFI-4205
> URL: https://issues.apache.org/jira/browse/NIFI-4205
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
> Fix For: 1.4.0
>
>
> TailFile checks whether a file being tailed has rotated using the following lines 
> of code, and if it determines that it has, it resets the reader and state for the file:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java#L693
> {code}
> // Check if file has rotated
> if (rolloverOccurred
> || (timestamp <= file.lastModified() && length > 
> file.length())
> || (timestamp < file.lastModified() && length >= 
> file.length())) {
> // Since file has rotated, we close the reader, create a new one, 
> and then reset our state.
> try {
> reader.close();
> getLogger().debug("Closed FileChannel {}", new 
> Object[]{reader, reader});
> } catch (final IOException ioe) {
> getLogger().warn("Failed to close reader for {} due to {}", 
> new Object[]{file, ioe});
> }
> reader = createReader(file, 0L);
> position = 0L;
> checksum.reset();
> }
> {code}
> The third condition, a newer timestamp but the same file size, can work 
> negatively in some situations. For example:
> # If an already fully tailed file is 'touched' and last modified timestamp is 
> updated. This is the easiest way to produce duplicated content.
> # On Windows, if a file is being tailed and updated by an app that writes 
> logs or some data to it consistently and frequently, then the update to the last 
> modified timestamp can lag behind the file size. I couldn't find 
> canonical docs for this behavior, but testing on Windows consistently 
> produces duplicated data. And the 3rd condition becomes true when such 
> duplication occurs.
> TailFile updates the file timestamp and length when it reads some data from 
> the file, specifically at these lines:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java#L765
> {code}
> timestamp = Math.max(state.getTimestamp(), file.lastModified());
> length = file.length();
> {code}
> As mentioned in the 2nd case above, file.lastModified can return a stale 
> timestamp (or one that simply has not been flushed yet) while length is replaced by 
> the latest value. After this happens, at the next onTrigger cycle, the 3rd 
> condition becomes true because it detects a newer timestamp.
> These conditions were added by NIFI-1170 and NIFI-1959.
> A simple flow, TailFile -> SplitText -> (FlowFiles are queued) -> 
> UpdateAttribute(Stopped) can reproduce this, with a command-line to simulate 
> frequently updated log file:
> {code}
> $ for i in `seq 1 1`; do echo $i >> test.log; done
> {code}
> The expected result is having 1 generated FlowFiles queued at the 
> relationship between SplitText and UpdateAttribute. But on Windows, more 
> FlowFiles are generated.
> By enabling debug level log for TailFile, following log messages can be 
> confirmed:
> {code}
> Add this to conf/logback.xml
> 
> 2017-07-19 10:22:07,134 DEBUG [Timer-Driven Process Thread-3] 
> o.a.nifi.processors.standard.TailFile TailFile[id=59ef6ea7-0
> 15d-1000-d6c2-c57a61e58a80] Recovering Rolled Off Files; total number of 
> files rolled off = 0
> 2017-07-19 10:22:07,134 DEBUG [Timer-Driven Process Thread-3] 
> o.a.nifi.processors.standard.TailFile 
> TailFile[id=59ef6ea7-015d-1000-d6c2-c57a61e58a80] Closed FileChannel 
> sun.nio.ch.FileChannelImpl@6d2a1eaf
> 2017-07-19 10:22:07,134 DEBUG [Timer-Driven Process Thread-3] 
> o.a.nifi.processors.standard.TailFile 
> TailFile[id=59ef6ea7-015d-1000-d6c2-c57a61e58a80] Created FileChannel 
> sun.nio.ch.FileChannelImpl@4aefddb3 for C:\logs\test.log
> 2017-07-19 10:22:07,150 DEBUG [Timer-Driven Process Thread-3] 
> o.a.nifi.processors.standard.TailFile 
> TailFile[id=59ef6ea7-015d-1000-d6c2-c57a61e58a80] Reading lines starting at 
> position 0
> {code}
> The 3rd condition should be removed to avoid having these duplicated data 
> ingested. Or if there's any specific need, we should discuss about it 

[jira] [Commented] (NIFI-4205) TailFile can produce duplicated data when it wrongly assumes a file is rotated

2017-07-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094473#comment-16094473
 ] 

ASF subversion and git services commented on NIFI-4205:
---

Commit b4e0a6e20683253abe021acb9048602abc250668 in nifi's branch 
refs/heads/master from [~ijokarumawak]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=b4e0a6e ]

NIFI-4205: Avoid duplicated data from TailFile

Before this fix, it was possible for TailFile to produce duplicated data
if an already tailed file has a newer timestamp and less than or the same
amount of data.

Signed-off-by: Pierre Villard 

This closes #2021.


> TailFile can produce duplicated data when it wrongly assumes a file is rotated
> --
>
> Key: NIFI-4205
> URL: https://issues.apache.org/jira/browse/NIFI-4205
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
> Fix For: 1.4.0
>
>
> TailFile checks whether a file being tailed has rotated using the following lines 
> of code, and if it determines that it has, it resets the reader and state for the file:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java#L693
> {code}
> // Check if file has rotated
> if (rolloverOccurred
> || (timestamp <= file.lastModified() && length > 
> file.length())
> || (timestamp < file.lastModified() && length >= 
> file.length())) {
> // Since file has rotated, we close the reader, create a new one, 
> and then reset our state.
> try {
> reader.close();
> getLogger().debug("Closed FileChannel {}", new 
> Object[]{reader, reader});
> } catch (final IOException ioe) {
> getLogger().warn("Failed to close reader for {} due to {}", 
> new Object[]{file, ioe});
> }
> reader = createReader(file, 0L);
> position = 0L;
> checksum.reset();
> }
> {code}
> The third condition, a newer timestamp but the same file size, can work 
> negatively in some situations. For example:
> # If an already fully tailed file is 'touched' and last modified timestamp is 
> updated. This is the easiest way to produce duplicated content.
> # On Windows, if a file is being tailed and updated by an app that writes 
> logs or some data to it consistently and frequently, then the update to the last 
> modified timestamp can lag behind the file size. I couldn't find 
> canonical docs for this behavior, but testing on Windows consistently 
> produces duplicated data. And the 3rd condition becomes true when such 
> duplication occurs.
> TailFile updates the file timestamp and length when it reads some data from 
> the file, specifically at these lines:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java#L765
> {code}
> timestamp = Math.max(state.getTimestamp(), file.lastModified());
> length = file.length();
> {code}
> As mentioned in the 2nd case above, file.lastModified can return a stale 
> timestamp (or one that simply has not been flushed yet) while length is replaced by 
> the latest value. After this happens, at the next onTrigger cycle, the 3rd 
> condition becomes true because it detects a newer timestamp.
> These conditions were added by NIFI-1170 and NIFI-1959.
> A simple flow, TailFile -> SplitText -> (FlowFiles are queued) -> 
> UpdateAttribute(Stopped) can reproduce this, with a command-line to simulate 
> frequently updated log file:
> {code}
> $ for i in `seq 1 1`; do echo $i >> test.log; done
> {code}
> The expected result is having 1 generated FlowFiles queued at the 
> relationship between SplitText and UpdateAttribute. But on Windows, more 
> FlowFiles are generated.
> By enabling debug level log for TailFile, following log messages can be 
> confirmed:
> {code}
> Add this to conf/logback.xml
> 
> 2017-07-19 10:22:07,134 DEBUG [Timer-Driven Process Thread-3] 
> o.a.nifi.processors.standard.TailFile TailFile[id=59ef6ea7-0
> 15d-1000-d6c2-c57a61e58a80] Recovering Rolled Off Files; total number of 
> files rolled off = 0
> 2017-07-19 10:22:07,134 DEBUG [Timer-Driven Process Thread-3] 
> o.a.nifi.processors.standard.TailFile 
> TailFile[id=59ef6ea7-015d-1000-d6c2-c57a61e58a80] Closed FileChannel 
> sun.nio.ch.FileChannelImpl@6d2a1eaf
> 2017-07-19 10:22:07,134 DEBUG [Timer-Driven Process Thread-3] 
> o.a.nifi.processors.standard.TailFile 
> TailFile[id=59ef6ea7-015d-1000-d6c2-c57a61e58a80] 

[GitHub] nifi pull request #2021: NIFI-4205: Avoid duplicated data from TailFile

2017-07-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2021


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

