[jira] [Commented] (NIFI-1088) PutKafka does not penalize when routing to failure

2015-11-01 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-1088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984504#comment-14984504 ]

ASF subversion and git services commented on NIFI-1088:
---

Commit 9515b7460713ba985a6d7c8ad033fe2c1ac98e3d in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=9515b74 ]

NIFI-1088: Ensure that FlowFile is penalized before routing to failure


> PutKafka does not penalize when routing to failure
> --
>
> Key: NIFI-1088
> URL: https://issues.apache.org/jira/browse/NIFI-1088
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 0.4.0
>
> Attachments: 
> 0001-NIFI-1088-Ensure-that-FlowFile-is-penalized-before-r.patch
>
>
> We need to penalize FlowFiles when routing to 'failure' so that we do not 
> constantly hit the Kafka server and exhaust resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] nifi git commit: NIFI-1088: Ensure that FlowFile is penalized before routing to failure

2015-11-01 Thread markap14
Repository: nifi
Updated Branches:
  refs/heads/master cef7b6c73 -> b729bf4c1


NIFI-1088: Ensure that FlowFile is penalized before routing to failure


Project: http://git-wip-us.apache.org/repos/asf/nifi/repo
Commit: http://git-wip-us.apache.org/repos/asf/nifi/commit/9515b746
Tree: http://git-wip-us.apache.org/repos/asf/nifi/tree/9515b746
Diff: http://git-wip-us.apache.org/repos/asf/nifi/diff/9515b746

Branch: refs/heads/master
Commit: 9515b7460713ba985a6d7c8ad033fe2c1ac98e3d
Parents: dc4004d
Author: Mark Payne 
Authored: Fri Oct 30 14:25:27 2015 -0400
Committer: Mark Payne 
Committed: Fri Oct 30 14:25:27 2015 -0400

--
 .../main/java/org/apache/nifi/processors/kafka/PutKafka.java   | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/nifi/blob/9515b746/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/PutKafka.java
--
diff --git 
a/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/PutKafka.java
 
b/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/PutKafka.java
index cff285c..09025a4 100644
--- 
a/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/PutKafka.java
+++ 
b/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/PutKafka.java
@@ -401,7 +401,7 @@ public class PutKafka extends AbstractProcessor {
 getLogger().info("Successfully sent {} to Kafka in {} millis", 
new Object[] { flowFile, TimeUnit.NANOSECONDS.toMillis(nanos) });
 } catch (final Exception e) {
 getLogger().error("Failed to send {} to Kafka due to {}; 
routing to failure", new Object[] { flowFile, e });
-session.transfer(flowFile, REL_FAILURE);
+session.transfer(session.penalize(flowFile), REL_FAILURE);
 error = true;
 } finally {
 if (error) {
@@ -534,7 +534,7 @@ public class PutKafka extends AbstractProcessor {
 if (offset == 0L) {
 // all of the messages failed to send. Route FlowFile to 
failure
 getLogger().error("Failed to send {} to Kafka due to {}; 
routing to fialure", new Object[] { flowFile, pe.getCause() });
-session.transfer(flowFile, REL_FAILURE);
+session.transfer(session.penalize(flowFile), REL_FAILURE);
 } else {
 // Some of the messages were sent successfully. We want to 
split off the successful messages from the failed messages.
 final FlowFile successfulMessages = 
session.clone(flowFile, 0L, offset);
@@ -545,7 +545,7 @@ public class PutKafka extends AbstractProcessor {
 messagesSent.get(), flowFile, successfulMessages, 
failedMessages, pe.getCause() });
 
 session.transfer(successfulMessages, REL_SUCCESS);
-session.transfer(failedMessages, REL_FAILURE);
+session.transfer(session.penalize(failedMessages), 
REL_FAILURE);
 session.remove(flowFile);
 session.getProvenanceReporter().send(successfulMessages, 
"kafka://" + topic);
 }



[3/3] nifi git commit: Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/nifi

2015-11-01 Thread markap14
Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/nifi


Project: http://git-wip-us.apache.org/repos/asf/nifi/repo
Commit: http://git-wip-us.apache.org/repos/asf/nifi/commit/b729bf4c
Tree: http://git-wip-us.apache.org/repos/asf/nifi/tree/b729bf4c
Diff: http://git-wip-us.apache.org/repos/asf/nifi/diff/b729bf4c

Branch: refs/heads/master
Commit: b729bf4c196e0fbd33692f76c11931aef61c650b
Parents: 6e193df cef7b6c
Author: Mark Payne 
Authored: Sun Nov 1 14:16:54 2015 -0500
Committer: Mark Payne 
Committed: Sun Nov 1 14:16:54 2015 -0500

--
 .../nifi/processors/standard/InvokeHTTP.java|  21 +-
 .../processors/standard/TestInvokeHTTP.java | 617 +-
 .../processors/standard/TestInvokeHttpSSL.java  |  90 ++
 .../standard/util/TestInvokeHttpCommon.java | 830 +++
 4 files changed, 972 insertions(+), 586 deletions(-)
--




[jira] [Commented] (NIFI-1090) Constant log messages about Cleanup Archive are logged at INFO level instead of DEBUG

2015-11-01 Thread Joseph Witt (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-1090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984447#comment-14984447 ]

Joseph Witt commented on NIFI-1090:
---

Sounds good.  Then I'll add a ticket to have the 'al' added to 'addition'  ;-)

> Constant log messages about Cleanup Archive are logged at INFO level instead 
> of DEBUG
> -
>
> Key: NIFI-1090
> URL: https://issues.apache.org/jira/browse/NIFI-1090
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 0.4.0
>
>
> Recently we added some debug logging but one log message was logged at an 
> INFO level:
> 2015-10-31 06:38:58,118 INFO [Cleanup Archive for default]
> o.a.n.c.r.F.archive.expiration Currently 250596823040 bytes free for
> Container default; requirement is 137324361973 byte free, so need to free
> -113272461067 bytes
> This is logged continually; it is confusing for users and is just noise most 
> of the time. This should be DEBUG-level logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1077) Allow ConvertCharacterSet to accept expression language

2015-11-01 Thread Aldrin Piri (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984558#comment-14984558 ]

Aldrin Piri commented on NIFI-1077:
---

NIFI-1092 was created

> Allow ConvertCharacterSet to accept expression language
> ---
>
> Key: NIFI-1077
> URL: https://issues.apache.org/jira/browse/NIFI-1077
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Percivall
>Priority: Minor
> Fix For: 0.4.0
>
> Attachments: NIFI-1077.patch
>
>
> This issue arose from a user on the mailing list. It demonstrates the need to 
> be able to use expression language to set the incoming (and potentially 
> outgoing) character sets:
> I'm looking to process many files into common formats.  The source files are 
> coming in various character sets, mime types, and new line terminators.
> My thinking for a data flow was along these lines:
> GetFile (from many sub directories) -> 
> ExecuteStreamCommand (file -i) ->
> ConvertCharacterSet (from previous command to utf8) ->
> ReplaceText (to change any \r\n into \n) ->
> PutFile (into a directory structure based on values found in the original 
> file path and filename)
> Additional steps would be added for archiving a copy of the original, 
> converting xml files, etc.
> Attempting to process these with NiFi leaves me confused as to how to process 
> within the tool.  If I want to ConvertCharacterSet, I have to know the input 
> type.  I set up an ExecuteStreamCommand to run file -i 
> ${absolute.path:append(${filename})}, which returned the expected values.  I 
> don't see a way to turn these results into input for the processor, which 
> doesn't accept expression language for that field.
> I also considered ConvertCSVToAvro as an interim step but notice the same 
> issue.  Any suggestions what this dataflow should look like?
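
For context, making a field expression-language-aware is typically a one-flag 
change on its property descriptor. A minimal sketch (the property name, 
description, and validator below are assumptions for illustration, not the 
committed NIFI-1092 change):

{code}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

// Hedged sketch: with expression language enabled, the property can be set
// to something like ${charset}, where an upstream processor has written that
// attribute (e.g. parsed from the output of `file -i`).
public static final PropertyDescriptor INPUT_CHARSET = new PropertyDescriptor.Builder()
        .name("Input Character Set")
        .description("The character set in which the input is encoded")
        .required(true)
        .expressionLanguageSupported(true)
        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
        .build();

// At runtime the value is resolved per FlowFile:
// final String charset = context.getProperty(INPUT_CHARSET)
//         .evaluateAttributeExpressions(flowFile).getValue();
{code}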



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1094) Searching Provenance Repo by component name results in js error

2015-11-01 Thread Randy Gelhausen (JIRA)
Randy Gelhausen created NIFI-1094:
-

 Summary: Searching Provenance Repo by component name results in js 
error
 Key: NIFI-1094
 URL: https://issues.apache.org/jira/browse/NIFI-1094
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Reporter: Randy Gelhausen


Reproduction Steps:
1. Open the Provenance Repository UI
2. Type some component name
3. Open the JS debug console (Command+Shift+J on Macs)
4. Hit enter

See the following error in the console:
"Uncaught TypeError: Cannot read property 'search' of undefined"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1095) Add Round Robin custom partitioner class to PutKafka

2015-11-01 Thread Andre (JIRA)
Andre created NIFI-1095:
---

 Summary: Add Round Robin custom partitioner class to PutKafka
 Key: NIFI-1095
 URL: https://issues.apache.org/jira/browse/NIFI-1095
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 0.3.0
Reporter: Andre


While Kafka offers some flexibility around partitioning and data rebalancing, 
the current PutKafka processor has just two strategies for rebalancing:

- Random (default?)
- Key (aka Hashed)

It would be great if it also had RoundRobin, as some of the other Kafka 
producers out there do [1][2].

Implementation can be done with relative ease:

http://ankitasblogger.blogspot.com.au/2014/12/kafka-round-robin-partition.html


[1] see partitioner -> 
https://hekad.readthedocs.org/en/v0.9.2/config/outputs/kafka.html
[2] see roundrobin.py -> 
https://github.com/mumrah/kafka-python/tree/master/kafka/partitioner
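
For illustration, a minimal round-robin partitioner sketch against the 0.8.x 
producer API (kafka.producer.Partitioner), which is the producer PutKafka 
wrapped at the time; the class name is made up, and this is a sketch rather 
than a proposed patch:

{code}
import java.util.concurrent.atomic.AtomicInteger;

import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

// Cycles through partitions regardless of the message key.
public class RoundRobinPartitioner implements Partitioner {

    private final AtomicInteger counter = new AtomicInteger(0);

    // The 0.8.x producer instantiates partitioners reflectively and
    // passes the producer configuration through this constructor.
    public RoundRobinPartitioner(final VerifiableProperties props) {
    }

    @Override
    public int partition(final Object key, final int numPartitions) {
        // Mask the sign bit so the index stays non-negative after the
        // counter overflows.
        return (counter.getAndIncrement() & Integer.MAX_VALUE) % numPartitions;
    }
}
{code}

The producer would then be pointed at the class via its partitioner.class 
property.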




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1068) Site-to-Site Client creates memory leak when errors occur

2015-11-01 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984533#comment-14984533 ]

ASF subversion and git services commented on NIFI-1068:
---

Commit 37e2f178f8f0e4a0fed022e2541a64e97e4897d4 in nifi's branch 
refs/heads/master from [~JPercivall]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=37e2f17 ]

NIFI-1068 Fix EndpointConnectionPool to properly remove connections from 
activeConnections when terminating connections

Signed-off-by: Mark Payne 


> Site-to-Site Client creates memory leak when errors occur
> -
>
> Key: NIFI-1068
> URL: https://issues.apache.org/jira/browse/NIFI-1068
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Joseph Percivall
>  Labels: beginner, newbie
> Fix For: 0.4.0
>
> Attachments: NIFI-1068.patch
>
>
> The EndpointConnectionPool class does not properly clean up EndpointConnection 
> objects when EndpointConnectionPool.terminate is called. As a result, if 
> unable to send to the remote NiFi instance, the client will continually 
> obtain a new EndpointConnection object, fail, and call terminate; this 
> results in many objects being added to the internal 'activeConnections' set 
> without ever being cleaned up.
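
The shape of the fix, as a hedged sketch (field and method signatures are 
assumptions based on the issue text, not the actual EndpointConnectionPool 
code; closeQuietly stands in for whatever teardown the real pool performs):

{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

private final Set<EndpointConnection> activeConnections =
        Collections.synchronizedSet(new HashSet<EndpointConnection>());

public void terminate(final EndpointConnection connection) {
    // The missing step: drop the connection from the tracking set so a
    // failed send does not leak one entry per terminate() call.
    activeConnections.remove(connection);
    closeQuietly(connection);
}
{code}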



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-631) Create ListFile and FetchFile processors

2015-11-01 Thread Joe Skora (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984540#comment-14984540 ]

Joe Skora commented on NIFI-631:


[~mpetronic]

The changes are almost done.  I modeled the properties on GetFile, so no 
functionality would be lost.  Details are below.

I'm working out issues with file attribute differences on Linux vs Windows and 
making sure the tests work on both platforms.  I hope to have that done 
tonight, but if I don't, do you want a crack at the Linux source?

Joe

The properties now consist of
* Input Directory - root directory for file searches,
* Recurse Subdirectories - directory recursion,
* File Filter - regex file name pattern,
* Path Filter - regex path pattern, not including file name, if recurse is true,
* Minimum File Age - Time Unit delta for oldest file to process (i.e. 2 days, 1 
hour, etc.),
* Maximum File Age - Time Unit delta for newest file to process,
* Minimum File Size - minimum file size to process,
* Maximum File Size - maximum file size to process, and
* Ignore Hidden Files - allows suppression of hidden files.
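
For reference, properties like these are typically declared with NiFi's 
PropertyDescriptor builder; a minimal sketch of two of them (descriptions, 
defaults, and validator choices are assumptions, not the actual patch):

{code}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

public static final PropertyDescriptor RECURSE = new PropertyDescriptor.Builder()
        .name("Recurse Subdirectories")
        .description("Indicates whether to list files from subdirectories of the Input Directory")
        .required(true)
        .allowableValues("true", "false")
        .defaultValue("true")
        .build();

public static final PropertyDescriptor MIN_AGE = new PropertyDescriptor.Builder()
        .name("Minimum File Age")
        .description("The minimum age a file must reach before it is listed, e.g. 2 days or 1 hour")
        .required(true)
        .defaultValue("0 sec")
        .addValidator(StandardValidators.TIME_PERIOD_VALIDATOR)
        .build();
{code}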

> Create ListFile and FetchFile processors
> 
>
> Key: NIFI-631
> URL: https://issues.apache.org/jira/browse/NIFI-631
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mark Payne
>Assignee: Joe Skora
> Attachments: 
> 0001-NIFI-631-Initial-implementation-of-FetchFile-process.patch
>
>
> This pair of Processors will provide several benefits over the existing 
> GetFile processor:
> 1. Currently, GetFile will continually pull the same files if the "Keep 
> Source File" property is set to true. There is no way to pull the file and 
> leave it in the directory without continually pulling the same file. We could 
> implement state here, but it would either be a huge amount of state to 
> remember everything pulled or it would have to always pull the oldest file 
> first so that we can maintain just the Last Modified Date of the last file 
> pulled plus all files with the same Last Modified Date that have already been 
> pulled.
> 2. If pulling from a network attached storage such as NFS, this would allow a 
> single processor to run ListFiles and then distribute those FlowFiles to the 
> cluster so that the cluster can share the work of pulling the data.
> 3. There are use cases when we may want to pull a specific file (for example, 
> in conjunction with ProcessHttpRequest/ProcessHttpResponse) rather than just 
> pull all files in a directory. GetFile does not support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1077) Allow ConvertCharacterSet to accept expression language

2015-11-01 Thread Aldrin Piri (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984556#comment-14984556 ]

Aldrin Piri commented on NIFI-1077:
---

Agreed.  Will make a ticket to make this consistent and apply validators where 
possible.

> Allow ConvertCharacterSet to accept expression language
> ---
>
> Key: NIFI-1077
> URL: https://issues.apache.org/jira/browse/NIFI-1077
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Percivall
>Priority: Minor
> Fix For: 0.4.0
>
> Attachments: NIFI-1077.patch
>
>
> This issue arose from a user on the mailing list. It demonstrates the need to 
> be able to use expression language to set the incoming (and potentially 
> outgoing) character sets:
> I'm looking to process many files into common formats.  The source files are 
> coming in various character sets, mime types, and new line terminators.
> My thinking for a data flow was along these lines:
> GetFile (from many sub directories) -> 
> ExecuteStreamCommand (file -i) ->
> ConvertCharacterSet (from previous command to utf8) ->
> ReplaceText (to change any \r\n into \n) ->
> PutFile (into a directory structure based on values found in the original 
> file path and filename)
> Additional steps would be added for archiving a copy of the original, 
> converting xml files, etc.
> Attempting to process these with NiFi leaves me confused as to how to process 
> within the tool.  If I want to ConvertCharacterSet, I have to know the input 
> type.  I set up an ExecuteStreamCommand to run file -i 
> ${absolute.path:append(${filename})}, which returned the expected values.  I 
> don't see a way to turn these results into input for the processor, which 
> doesn't accept expression language for that field.
> I also considered ConvertCSVToAvro as an interim step but notice the same 
> issue.  Any suggestions what this dataflow should look like?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-1093) ConvertJSONToSQL incorrectly detects required columns

2015-11-01 Thread Randy Gelhausen (JIRA)
Randy Gelhausen created NIFI-1093:
-

 Summary: ConvertJSONToSQL incorrectly detects required columns
 Key: NIFI-1093
 URL: https://issues.apache.org/jira/browse/NIFI-1093
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Randy Gelhausen


create table device_pings (
  id varchar not null,
  ts varchar not null,
  bssid varchar,
  ssid varchar,
  noise integer,
  signal integer,
  constraint pk primary key (id, ts)
)

With the example DDL above, neither SSID nor BSSID is a required column, yet 
ConvertJSONToSQL throws an exception if the input JSON lacks those fields:

2015-11-01 17:47:10,373 ERROR [Timer-Driven Process Thread-6] 
o.a.n.p.standard.ConvertJSONToSQL 
ConvertJSONToSQL[id=a336eb2b-fc63-4118-b098-c0ded1dd5520] Failed to convert 
StandardFlowFileRecord[uuid=5d2c05f0-982e-4feb-94b1-d9946be730d4,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1446416358991-2, container=default, 
section=2], offset=203306, 
length=614],offset=0,name=1446418019645796000,size=614] to a SQL INSERT 
statement due to org.apache.nifi.processor.exception.ProcessException: JSON 
does not have a value for the Required column 'BSSID'; routing to failure: 
org.apache.nifi.processor.exception.ProcessException: JSON does not have a 
value for the Required column 'BSSID'
2015-11-01 17:47:10,381 ERROR [Timer-Driven Process Thread-6] 
o.a.n.p.standard.ConvertJSONToSQL 
ConvertJSONToSQL[id=a336eb2b-fc63-4118-b098-c0ded1dd5520] Failed to convert 
StandardFlowFileRecord[uuid=727b8c3c-66c5-4d6a-8cdc-602da8b80132,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1446416358991-2, container=default, 
section=2], offset=203920, 
length=674],offset=0,name=1446418019645796000,size=674] to a SQL INSERT 
statement due to org.apache.nifi.processor.exception.ProcessException: JSON 
does not have a value for the Required column 'SSID'; routing to failure: 
org.apache.nifi.processor.exception.ProcessException: JSON does not have a 
value for the Required column 'SSID'

The processor has an "Unmatched Field Behavior" property. Should there be an 
additional "Unmatched Column Behavior" property that allows SQL statements to 
be generated from the set of columns actually present in the JSON?
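
For concreteness, a hypothetical input record of the kind described (made up 
for illustration): it satisfies the DDL above, since bssid and ssid are 
nullable, yet the processor rejects it for lacking those keys:

{code}
{"id": "device-1", "ts": "2015-11-01T17:47:10Z", "noise": -92, "signal": -61}
{code}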



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1090) Constant log messages about Cleanup Archive are logged at INFO level instead of DEBUG

2015-11-01 Thread Mark Payne (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-1090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984436#comment-14984436 ]

Mark Payne commented on NIFI-1090:
--

[~joewitt] - we can certainly change the wording so that if the number is 
negative, as in the above example, we could indicate "no need to free space 
until an addition 113272461067 bytes are used".

> Constant log messages about Cleanup Archive are logged at INFO level instead 
> of DEBUG
> -
>
> Key: NIFI-1090
> URL: https://issues.apache.org/jira/browse/NIFI-1090
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 0.4.0
>
>
> Recently we added some debug logging but one log message was logged at an 
> INFO level:
> 2015-10-31 06:38:58,118 INFO [Cleanup Archive for default]
> o.a.n.c.r.F.archive.expiration Currently 250596823040 bytes free for
> Container default; requirement is 137324361973 byte free, so need to free
> -113272461067 bytes
> This is logged continually; it is confusing for users and is just noise most 
> of the time. This should be DEBUG-level logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1057) ConvertJSONToSQL fails with OOM

2015-11-01 Thread Randy Gelhausen (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984586#comment-14984586 ]

Randy Gelhausen commented on NIFI-1057:
---

This seems to have fixed it! Thanks!

> ConvertJSONToSQL fails with OOM
> ---
>
> Key: NIFI-1057
> URL: https://issues.apache.org/jira/browse/NIFI-1057
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Randy Gelhausen
> Attachments: nifi-app.log.gz
>
>
> Running NiFi built from master this afternoon, using Java 1.7.0_79 on OSX.
> The flow pulls JSON from a web endpoint, splits into about 200 JSON objects 
> which are passed to ConvertJSONToSQL and eventually to PutSQL. Based on 
> nifi-app.log, ConvertJSONToSQL is causing an OOM.
> This same setup works fine on Centos 6.6 with Java 1.8.0_40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1090) Constant log messages about Cleanup Archive are logged at INFO level instead of DEBUG

2015-11-01 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-1090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984520#comment-14984520 ]

ASF subversion and git services commented on NIFI-1090:
---

Commit ad849c77dff7b379116f4d57510c7b9136c7f4c0 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=ad849c7 ]

NIFI-1090: Fixed log message that was at info level but should have been debug 
level


> Constant log messages about Cleanup Archive are logged at INFO level instead 
> of DEBUG
> -
>
> Key: NIFI-1090
> URL: https://issues.apache.org/jira/browse/NIFI-1090
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 0.4.0
>
>
> Recently we added some debug logging but one log message was logged at an 
> INFO level:
> 2015-10-31 06:38:58,118 INFO [Cleanup Archive for default]
> o.a.n.c.r.F.archive.expiration Currently 250596823040 bytes free for
> Container default; requirement is 137324361973 byte free, so need to free
> -113272461067 bytes
> This is logged continually; it is confusing for users and is just noise most 
> of the time. This should be DEBUG-level logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


nifi git commit: NIFI-1090: Fixed log message that was at info level but should have been debug level

2015-11-01 Thread markap14
Repository: nifi
Updated Branches:
  refs/heads/master b729bf4c1 -> ad849c77d


NIFI-1090: Fixed log message that was at info level but should have been debug 
level


Project: http://git-wip-us.apache.org/repos/asf/nifi/repo
Commit: http://git-wip-us.apache.org/repos/asf/nifi/commit/ad849c77
Tree: http://git-wip-us.apache.org/repos/asf/nifi/tree/ad849c77
Diff: http://git-wip-us.apache.org/repos/asf/nifi/diff/ad849c77

Branch: refs/heads/master
Commit: ad849c77dff7b379116f4d57510c7b9136c7f4c0
Parents: b729bf4
Author: Mark Payne 
Authored: Sun Nov 1 14:37:01 2015 -0500
Committer: Mark Payne 
Committed: Sun Nov 1 14:37:01 2015 -0500

--
 .../nifi/controller/repository/FileSystemRepository.java  | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/nifi/blob/ad849c77/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
--
diff --git 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
index 72a50ec..5baddbb 100644
--- 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
+++ 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
@@ -1208,7 +1208,15 @@ public class FileSystemRepository implements ContentRepository {
         final long startNanos = System.nanoTime();
         final long toFree = minRequiredSpace - usableSpace;
         final BlockingQueue<ArchiveInfo> fileQueue = archivedFiles.get(containerName);
-        archiveExpirationLog.info("Currently {} bytes free for Container {}; requirement is {} byte free, so need to free {} bytes", usableSpace, containerName, minRequiredSpace, toFree);
+        if (archiveExpirationLog.isDebugEnabled()) {
+            if (toFree < 0) {
+                archiveExpirationLog.debug("Currently {} bytes free for Container {}; requirement is {} byte free, so no need to free space until an additional {} bytes are used",
+                    usableSpace, containerName, minRequiredSpace, Math.abs(toFree));
+            } else {
+                archiveExpirationLog.debug("Currently {} bytes free for Container {}; requirement is {} byte free, so need to free {} bytes",
+                    usableSpace, containerName, minRequiredSpace, toFree);
+            }
+        }

         ArchiveInfo toDelete;
         int deleteCount = 0;



[jira] [Resolved] (NIFI-1090) Constant log messages about Cleanup Archive are logged at INFO level instead of DEBUG

2015-11-01 Thread Mark Payne (JIRA)

 [ https://issues.apache.org/jira/browse/NIFI-1090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Payne resolved NIFI-1090.
--
Resolution: Fixed

> Constant log messages about Cleanup Archive are logged at INFO level instead 
> of DEBUG
> -
>
> Key: NIFI-1090
> URL: https://issues.apache.org/jira/browse/NIFI-1090
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 0.4.0
>
>
> Recently we added some debug logging but one log message was logged at an 
> INFO level:
> 2015-10-31 06:38:58,118 INFO [Cleanup Archive for default]
> o.a.n.c.r.F.archive.expiration Currently 250596823040 bytes free for
> Container default; requirement is 137324361973 byte free, so need to free
> -113272461067 bytes
> This is logged continually; it is confusing for users and is just noise most 
> of the time. This should be DEBUG-level logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-883) HandleHttpRequest starts a web server in the OnScheduled method but should start it in onTrigger

2015-11-01 Thread Mark Payne (JIRA)

[ https://issues.apache.org/jira/browse/NIFI-883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984541#comment-14984541 ]

Mark Payne commented on NIFI-883:
-

[~JPercivall] - I tested this out, and all looks good on a sunny day. 
Unfortunately, though, if I configure the processor to run on port 80 (which I 
don't have permission to do unless I run as root), then the processor throws 
an Exception, catches it, and returns. So this then happens again. And again. 
And within a few milliseconds, I start seeing thousands of these in the logs:

{code}
2015-11-01 15:47:41,463 WARN [Timer-Driven Process Thread-6] o.e.j.util.component.AbstractLifeCycle FAILED org.eclipse.jetty.server.Server@6f48b023: java.lang.OutOfMemoryError: unable to create new native thread
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method) [na:1.8.0_60]
    at java.lang.Thread.start(Thread.java:714) [na:1.8.0_60]
    at org.eclipse.jetty.server.ShutdownMonitor.start(ShutdownMonitor.java:511) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.Server.doStart(Server.java:325) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) ~[jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.apache.nifi.processors.standard.HandleHttpRequest.initializeServer(HandleHttpRequest.java:412) [nifi-standard-processors-0.3.1-SNAPSHOT.jar:0.3.1-SNAPSHOT]
    at org.apache.nifi.processors.standard.HandleHttpRequest.onTrigger(HandleHttpRequest.java:469) [nifi-standard-processors-0.3.1-SNAPSHOT.jar:0.3.1-SNAPSHOT]
    at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-0.3.1-SNAPSHOT.jar:0.3.1-SNAPSHOT]
    at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1134) [nifi-framework-core-0.3.1-SNAPSHOT.jar:0.3.1-SNAPSHOT]
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:127) [nifi-framework-core-0.3.1-SNAPSHOT.jar:0.3.1-SNAPSHOT]
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:49) [nifi-framework-core-0.3.1-SNAPSHOT.jar:0.3.1-SNAPSHOT]
    at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:119) [nifi-framework-core-0.3.1-SNAPSHOT.jar:0.3.1-SNAPSHOT]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_60]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_60]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_60]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_60]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
{code}

I think this is easily solved, though, by calling context.yield() whenever you 
catch an Exception during server initialization.
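
As a hedged sketch of that suggestion (the server field is an assumption; 
initializeServer and onTrigger appear in the trace above), the guard would 
look roughly like:

{code}
// If the embedded Jetty server is not yet running, try to start it; on
// failure, yield so the framework backs off instead of re-invoking
// onTrigger (and re-failing) in a tight loop that exhausts native threads.
if (server == null) {
    try {
        initializeServer(context);
    } catch (final Exception e) {
        getLogger().error("Failed to start embedded server due to {}", new Object[] { e });
        context.yield();
        return;
    }
}
{code}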


> HandleHttpRequest starts a web server in the OnScheduled method but should 
> start it in onTrigger
> 
>
> Key: NIFI-883
> URL: https://issues.apache.org/jira/browse/NIFI-883
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Joseph Percivall
>  Labels: beginner, newbie
> Fix For: 0.4.0
>
> Attachments: HttpRequestAndResponseTester.xml, NIFI-883.patch, 
> NIFI-883_removed_lock.patch
>
>
> When HandleHttpRequest is scheduled, it creates an embedded Jetty web server 
> and starts it. Unfortunately, if this is run in a clustered environment and 
> configured to run on Primary Node Only, all nodes still start the web server. 
> This is very confusing if setting the Hostname property, as other nodes will 
> complain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)