[jira] [Commented] (NIFI-766) UI should indicate when backpressure is configured for a Connection

2015-07-16 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629973#comment-14629973
 ] 

Mark Payne commented on NIFI-766:
-

Dan,

Hmmm. Interesting idea. I can totally picture what you're talking about. And I 
very much like it. There are two things that I think may be a concern with that 
approach, though (I don't know if these are legitimate concerns or not, 
personally - would need someone else to evaluate): would the 
computation/rendering of that be expensive? We already render quite a lot, and 
for large flows can push browsers to the brink. Also, would it end up making 
the UI more difficult to see/read, or would it be distracting?

I do really like the concept, though, of showing how full they are. Perhaps 
[~mcgilman] or someone who knows more about UIs can weigh in. And it may 
require trying it out to know for sure whether the performance would suffer. 
But if neither of those things is a concern, then yes, I love it :)

 UI should indicate when backpressure is configured for a Connection
 ---

 Key: NIFI-766
 URL: https://issues.apache.org/jira/browse/NIFI-766
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework, Core UI
Reporter: Mark Payne
 Fix For: 0.3.0


 It is sometimes unclear why a Processor is not running when the cause is 
 backpressure. Recommend we add an icon to the Connection label to indicate 
 that backpressure is configured. If backpressure is applied (i.e., the 
 backpressure threshold has been reached), that icon should be highlighted 
 somehow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-767) If all FlowFiles in a queue are penalized, UI should indicate this fact to user

2015-07-15 Thread Mark Payne (JIRA)
Mark Payne created NIFI-767:
---

 Summary: If all FlowFiles in a queue are penalized, UI should 
indicate this fact to user
 Key: NIFI-767
 URL: https://issues.apache.org/jira/browse/NIFI-767
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework, Core UI
Reporter: Mark Payne
 Fix For: 0.3.0


When all FlowFiles in a Connection's queue are penalized, the Processor that is 
the destination of the Connection will continually run but will receive no 
FlowFiles. This results in some odd metrics shown in the UI and the user not 
knowing why nothing is happening. Recommend we indicate in the UI (likely with 
an icon on the connection label) that the Connection contains only Penalized 
FlowFiles.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-740) StandardFlowServiceTest need to be updated.

2015-07-13 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14624583#comment-14624583
 ] 

Mark Payne commented on NIFI-740:
-

Toivo,

Looking over this, it looks like the code that causes this was added after 
those tests were ignored, which explains why you would see it now and we didn't 
see it before.

I think the best thing to do is to add a new method to 
org.apache.nifi.logging.LogRepositoryFactory:

public static void purge() {
    // clear internal state
}

And then in the FlowController's shutdown just call that purge method. This 
would ensure that we always clear out the log observers on shutdown.
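
For what it's worth, a minimal sketch of the idea, assuming the factory keeps 
its repositories in a static map (the class body and field name here are 
hypothetical, not the actual LogRepositoryFactory internals):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a log repository factory with a purge() method.
// The field name `repositoryMap` is an assumption; the real
// LogRepositoryFactory internals may differ.
public class LogRepositoryFactorySketch {
    private static final Map<String, Object> repositoryMap = new ConcurrentHashMap<>();

    // Returns the repository for a component, creating it on first access.
    public static Object getRepository(final String componentId) {
        return repositoryMap.computeIfAbsent(componentId, id -> new Object());
    }

    // Called from FlowController shutdown so that registered log observers
    // do not leak across restarts within the same JVM (e.g., in unit tests).
    public static void purge() {
        repositoryMap.clear();
    }

    public static int size() {
        return repositoryMap.size();
    }
}
```

Calling purge() from the FlowController's shutdown would then reset the 
factory's state between restarts in the same JVM, which is what these tests 
need.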

Thanks
-Mark

 StandardFlowServiceTest need to be updated.
 ---

 Key: NIFI-740
 URL: https://issues.apache.org/jira/browse/NIFI-740
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Toivo Adams
Assignee: Toivo Adams
Priority: Minor
 Fix For: 0.3.0

 Attachments: NIFI-740_11jul2015.patch


 Currently 
 /nifi-framework-core/src/test/java/org/apache/nifi/controller/StandardFlowServiceTest.java
  :
 [Error] :22:16: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comment}' is expected. 
 [Error] :28:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{styles}' is expected. 
 [Error] :46:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{styles}' is expected. 
 [Error] :69:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :75:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :80:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{size}' is expected. 
 [Error] :87:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comment}' is expected. 
 [Error] :93:24: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{styles}' is expected. 
 [Error] :112:24: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :118:24: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :126:25: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 [Error] :142:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comment}' is expected. 
 [Error] :152:21: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 [Error] :169:21: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 [Error] :186:21: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 And finally public void testLoadExistingFlow() test fails: 
 org.apache.nifi.controller.FlowSynchronizationException: 
 java.lang.NullPointerException: Name is null 
 at 
 org.apache.nifi.controller.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:317)
  
 at 
 org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1154)
  
 at 
 org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.load(StandardXMLFlowConfigurationDAO.java:72)
  
 at 
 org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:608)
  
 at 
 org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:458)
  
 at 
 org.apache.nifi.controller.StandardFlowServiceTest.testLoadExistingFlow(StandardFlowServiceTest.java:98)
  
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  
 at java.lang.reflect.Method.invoke(Method.java:606) 
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
 at 

[jira] [Created] (NIFI-762) Site-to-site client config implements Serializable but has non-Serializable member variable

2015-07-13 Thread Mark Payne (JIRA)
Mark Payne created NIFI-762:
---

 Summary: Site-to-site client config implements Serializable but 
has non-Serializable member variable
 Key: NIFI-762
 URL: https://issues.apache.org/jira/browse/NIFI-762
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Tools and Build
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.3.0


Right now, if we want to use the site-to-site client securely, we set the 
SSLContext on the configuration object. However, SSLContext is not 
serializable. It is important, however, to be able to set an SSLContext, rather 
than providing keystore and truststore properties directly.

As a result, I suggest we implement both a Serializable form of the 
configuration and a non-serializable form. The non-serializable form can be 
configured with SSLContext while the serializable form would be configured with 
keystore and truststore filename, type, and password.
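
A rough sketch of what that split could look like, with hypothetical class 
names (illustrative only, not the actual NiFi site-to-site API):

```java
import java.io.Serializable;
import javax.net.ssl.SSLContext;

// Serializable form: carries only keystore/truststore properties, which can
// be persisted or sent across the wire. Class and method names here are
// assumptions for illustration.
class SerializableS2SConfig implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String keystoreFilename;
    private final String keystoreType;
    private final String keystorePassword;
    private final String truststoreFilename;
    private final String truststoreType;
    private final String truststorePassword;

    SerializableS2SConfig(final String keystoreFilename, final String keystoreType,
            final String keystorePassword, final String truststoreFilename,
            final String truststoreType, final String truststorePassword) {
        this.keystoreFilename = keystoreFilename;
        this.keystoreType = keystoreType;
        this.keystorePassword = keystorePassword;
        this.truststoreFilename = truststoreFilename;
        this.truststoreType = truststoreType;
        this.truststorePassword = truststorePassword;
    }

    String getKeystoreFilename() { return keystoreFilename; }
}

// Non-serializable form: holds a live SSLContext directly, which cannot
// be serialized.
class SslContextS2SConfig {
    private final SSLContext sslContext;

    SslContextS2SConfig(final SSLContext sslContext) {
        this.sslContext = sslContext;
    }

    SSLContext getSslContext() { return sslContext; }
}
```

The design point is that only the form holding plain strings implements 
Serializable; the form holding the live SSLContext simply doesn't claim to be 
serializable at all.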



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-762) Site-to-site client config implements Serializable but has non-Serializable member variable

2015-07-13 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-762:

Attachment: 0001-NIFI-762-Allow-user-to-set-keystore-and-truststore-p.patch

 Site-to-site client config implements Serializable but has non-Serializable 
 member variable
 ---

 Key: NIFI-762
 URL: https://issues.apache.org/jira/browse/NIFI-762
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Tools and Build
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.3.0

 Attachments: 
 0001-NIFI-762-Allow-user-to-set-keystore-and-truststore-p.patch


 Right now, if we want to use the site-to-site client securely, we set the 
 SSLContext on the configuration object. However, SSLContext is not 
 serializable. It is important, however, to be able to set an SSLContext, 
 rather than providing keystore and truststore properties directly.
 As a result, I suggest we implement both a Serializable form of the 
 configuration and a non-serializable form. The non-serializable form can be 
 configured with SSLContext while the serializable form would be configured 
 with keystore and truststore filename, type, and password.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-727) TestJdbcHugeStream takes too long

2015-07-13 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14625286#comment-14625286
 ] 

Mark Payne commented on NIFI-727:
-

I think this is probably more of an integration test. We have a ticket already, 
NIFI-569. This could probably just be made into a sub-ticket of that one?

 TestJdbcHugeStream takes too long
 -

 Key: NIFI-727
 URL: https://issues.apache.org/jira/browse/NIFI-727
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Joseph Witt
Assignee: Toivo Adams
Priority: Minor
 Fix For: 0.3.0


 Running org.apache.nifi.processors.standard.util.TestJdbcHugeStream
 Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 45.891 sec - 
 in org.apache.nifi.processors.standard.util.TestJdbcHugeStream
 This has caused the build to be more than 30% longer than previously seen, 
 even on high-end machines with many cores. The length of the test doesn't 
 clearly add enough value to warrant it, so it's best to make it lean and 
 mean. Short build times are important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-705) TestHandleHttpRequest#testRequestAddedToService fails in Ubuntu 14.04

2015-07-08 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618958#comment-14618958
 ] 

Mark Payne commented on NIFI-705:
-

Aldrin,

Yup, that would cause it. Continually creating a new server within a loop. This 
patch should address that. +1

 TestHandleHttpRequest#testRequestAddedToService fails in Ubuntu 14.04
 -

 Key: NIFI-705
 URL: https://issues.apache.org/jira/browse/NIFI-705
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 0.1.0, 0.2.0
Reporter: Aldrin Piri
Assignee: Aldrin Piri
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-705-Preventing-the-processor-from-initializing-.patch


 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.675 sec  
 FAILURE! - in org.apache.nifi.processors.standard.TestHandleHttpRequest 
 testRequestAddedToService(org.apache.nifi.processors.standard.TestHandleHttpRequest)
   Time elapsed: 6.675 sec   FAILURE! java.lang.AssertionError: Could not 
 invoke methods annotated with @OnScheduled annotation due to: 
 java.lang.reflect.InvocationTargetException at 
 org.junit.Assert.fail(Assert.java:88) at 
 org.apache.nifi.util.StandardProcessorTestRunner.run(StandardProcessorTestRunner.java:199)
  at 
 org.apache.nifi.util.StandardProcessorTestRunner.run(StandardProcessorTestRunner.java:182)
  at 
 org.apache.nifi.processors.standard.TestHandleHttpRequest.testRequestAddedToService(TestHandleHttpRequest.java:100)
  ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-744) Allow FileSystemRepository to write to the same file for multiple (non-parallel) sessions

2015-07-07 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-744:

Fix Version/s: 0.3.0

 Allow FileSystemRepository to write to the same file for multiple 
 (non-parallel) sessions
 -

 Key: NIFI-744
 URL: https://issues.apache.org/jira/browse/NIFI-744
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
 Fix For: 0.3.0


 Currently, when a ProcessSession is committed, the Content Claim that was 
 being written to is finished and will never be written to again.
 When a flow has processors that generate many, many FlowFiles, each in its 
 own session, this means that we also have many, many files on disk in the 
 Content Repository. Generally, writing to these files hasn't been a problem. 
 However, when the files are to be archived or destroyed, this is very taxing 
 and can cause erratic behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-752) Swapping in and out of FlowFiles should be driven by FlowFileQueue, rather than by a background thread

2015-07-07 Thread Mark Payne (JIRA)
Mark Payne created NIFI-752:
---

 Summary: Swapping in and out of FlowFiles should be driven by 
FlowFileQueue, rather than by a background thread
 Key: NIFI-752
 URL: https://issues.apache.org/jira/browse/NIFI-752
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
 Fix For: 0.3.0


As-is, if a huge number of FlowFiles are generated quickly enough, the JVM heap 
can fill before FlowFiles are swapped out. Conversely, if FlowFiles are swapped 
out, and NiFi is able to work those FlowFiles off, it ends up pausing, waiting 
for more FlowFiles to be swapped in. If the queue itself has control of this, 
both of these issues will be alleviated. 
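
To illustrate the idea, a toy queue that drives its own swapping inline with 
enqueue/dequeue might look like this (the thresholds, names, and the in-memory 
stand-in for on-disk swap files are all assumptions, not NiFi's actual 
FlowFileQueue implementation):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only: a queue that makes its own swap-out/swap-in
// decisions at enqueue/dequeue time, rather than relying on a background
// thread to react. Thresholds are arbitrary example values.
public class SelfSwappingQueue<T> {
    private static final int SWAP_OUT_THRESHOLD = 10_000;
    private static final int SWAP_IN_THRESHOLD = 1_000;

    private final Deque<T> inMemory = new ArrayDeque<>();
    private final Deque<T> swapped = new ArrayDeque<>(); // stands in for swap files on disk

    public synchronized void enqueue(final T flowFile) {
        inMemory.addLast(flowFile);
        // Swap out immediately when the in-memory portion grows too large,
        // so the heap cannot fill faster than a background thread could react.
        if (inMemory.size() > SWAP_OUT_THRESHOLD) {
            swapped.addLast(inMemory.removeLast());
        }
    }

    public synchronized T dequeue() {
        // Swap back in as soon as the in-memory portion runs low, so the
        // queue never pauses waiting for a background swap-in cycle.
        if (inMemory.size() < SWAP_IN_THRESHOLD && !swapped.isEmpty()) {
            inMemory.addLast(swapped.removeFirst());
        }
        return inMemory.pollFirst();
    }
}
```

Because both decisions happen under the queue's own lock at the moment of 
enqueue/dequeue, there is no window where the heap can fill or the queue can 
sit idle waiting on another thread.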



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-753) NPE when reading in provenance data

2015-07-07 Thread Mark Payne (JIRA)
Mark Payne created NIFI-753:
---

 Summary: NPE when reading in provenance data
 Key: NIFI-753
 URL: https://issues.apache.org/jira/browse/NIFI-753
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.2.0
Reporter: Mark Payne
Assignee: Mark Payne
Priority: Blocker
 Fix For: 0.2.0


If an attribute is removed, NiFi will fail to restart:

Caused by: java.lang.RuntimeException: Unable to create Provenance Repository
at 
org.apache.nifi.controller.FlowController.init(FlowController.java:411) 
~[na:na]
at 
org.apache.nifi.controller.FlowController.createStandaloneInstance(FlowController.java:350)
 ~[na:na]
at 
org.apache.nifi.spring.FlowControllerFactoryBean.getObject(FlowControllerFactoryBean.java:63)
 ~[na:na]
at 
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
 ~[na:na]
... 130 common frames omitted
Caused by: java.lang.NullPointerException: null
at 
org.apache.nifi.provenance.StandardRecordReader.readAttributes(StandardRecordReader.java:372)
 ~[na:na]
at 
org.apache.nifi.provenance.StandardRecordReader.nextRecord(StandardRecordReader.java:313)
 ~[na:na]
at 
org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1356)
 ~[na:na]
at 
org.apache.nifi.provenance.PersistentProvenanceRepository.recoverJournalFiles(PersistentProvenanceRepository.java:1136)
 ~[na:na]
at 
org.apache.nifi.provenance.PersistentProvenanceRepository.recover(PersistentProvenanceRepository.java:572)
 ~[na:na]
at 
org.apache.nifi.provenance.PersistentProvenanceRepository.initialize(PersistentProvenanceRepository.java:212)
 ~[na:na]
at 
org.apache.nifi.controller.FlowController.init(FlowController.java:407) 
~[na:na]
... 133 common frames omitted



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-753) NPE when reading in provenance data

2015-07-07 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-753:

Attachment: 0001-NIFI-753-when-truncating-value-take-null-values-into.patch

 NPE when reading in provenance data
 ---

 Key: NIFI-753
 URL: https://issues.apache.org/jira/browse/NIFI-753
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.2.0
Reporter: Mark Payne
Assignee: Mark Payne
Priority: Blocker
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-753-when-truncating-value-take-null-values-into.patch


 If an attribute is removed, NiFi will fail to restart:
 Caused by: java.lang.RuntimeException: Unable to create Provenance Repository
   at 
 org.apache.nifi.controller.FlowController.init(FlowController.java:411) 
 ~[na:na]
   at 
 org.apache.nifi.controller.FlowController.createStandaloneInstance(FlowController.java:350)
  ~[na:na]
   at 
 org.apache.nifi.spring.FlowControllerFactoryBean.getObject(FlowControllerFactoryBean.java:63)
  ~[na:na]
   at 
 org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
  ~[na:na]
   ... 130 common frames omitted
 Caused by: java.lang.NullPointerException: null
   at 
 org.apache.nifi.provenance.StandardRecordReader.readAttributes(StandardRecordReader.java:372)
  ~[na:na]
   at 
 org.apache.nifi.provenance.StandardRecordReader.nextRecord(StandardRecordReader.java:313)
  ~[na:na]
   at 
 org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1356)
  ~[na:na]
   at 
 org.apache.nifi.provenance.PersistentProvenanceRepository.recoverJournalFiles(PersistentProvenanceRepository.java:1136)
  ~[na:na]
   at 
 org.apache.nifi.provenance.PersistentProvenanceRepository.recover(PersistentProvenanceRepository.java:572)
  ~[na:na]
   at 
 org.apache.nifi.provenance.PersistentProvenanceRepository.initialize(PersistentProvenanceRepository.java:212)
  ~[na:na]
   at 
 org.apache.nifi.controller.FlowController.init(FlowController.java:407) 
 ~[na:na]
   ... 133 common frames omitted



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-604) ExecuteStreamCommand does not support arguments with semicolons

2015-07-06 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-604:

Fix Version/s: (was: 0.2.0)
   0.3.0

 ExecuteStreamCommand does not support arguments with semicolons 
 

 Key: NIFI-604
 URL: https://issues.apache.org/jira/browse/NIFI-604
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 0.1.0
Reporter: Ricky Saltzer
Assignee: Mark Payne
 Fix For: 0.3.0

 Attachments: NIFI-604.1.patch, NIFI-604.2.patch


 The following code in ExecuteStreamCommand assumes you're not passing 
 semicolons within your arguments. This is a problem for people who need to 
 pass semicolons to the executing program as part of an argument. 
 {code}
 for (String arg : commandArguments.split(";")) { 
 {code}
 To allow for escaped semicolons, I propose we change this to the following 
 regex.
 {code}
 for (String arg : commandArguments.split("[^\\];")) { 
 {code}
 *or*... could we just change the way arguments are passed to make it more 
 similar to how ExecuteCommand works? The whole semicolon-per-argument scheme 
 took some getting used to, and doesn't seem very intuitive. 
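
As an aside, a character class like {{[^\\];}} would also consume the 
character before each semicolon, dropping it from the argument; a negative 
lookbehind avoids that. A sketch of the alternative (illustrative only, not 
the actual patch):

```java
// Sketch: split an argument string on semicolons, honoring "\;" as an
// escaped literal semicolon. Not the actual ExecuteStreamCommand code.
public class ArgSplitSketch {
    public static String[] split(final String commandArguments) {
        // (?<!\\); matches a semicolon NOT preceded by a backslash, and
        // unlike a character class it consumes only the semicolon itself.
        final String[] parts = commandArguments.split("(?<!\\\\);");
        for (int i = 0; i < parts.length; i++) {
            parts[i] = parts[i].replace("\\;", ";"); // unescape literal semicolons
        }
        return parts;
    }
}
```

With this, {{a\;b;c}} splits into two arguments, {{a;b}} and {{c}}, while 
plain {{a;b;c}} still splits into three.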



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-732) GetKafka if stopped then started doesn't resume pulling messages

2015-07-06 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-732:

Fix Version/s: (was: 0.2.0)
   0.3.0

 GetKafka if stopped then started doesn't resume pulling messages
 

 Key: NIFI-732
 URL: https://issues.apache.org/jira/browse/NIFI-732
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
 Environment: linux
Reporter: Joseph Witt
Assignee: Mark Payne
 Fix For: 0.3.0


 A nifi user reported that they had to restart nifi to get the GetKafka 
 processor to resume pulling data once they had stopped the processor.  Upon 
 restarting it showed that it was started but did not resume pulling data.
 Need to attempt to reproduce and resolve.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-740) StandardFlowServiceTest need to be updated.

2015-07-06 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615566#comment-14615566
 ] 

Mark Payne commented on NIFI-740:
-

Toivo,

I agree, it makes sense to remove the header in this case. You may need to 
update the module's pom.xml in order to add it to the RAT exclusions. Please 
verify that things work by running mvn clean install -Pcontrib-check on that 
module.

Thanks
-Mark

 StandardFlowServiceTest need to be updated.
 ---

 Key: NIFI-740
 URL: https://issues.apache.org/jira/browse/NIFI-740
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Toivo Adams
Assignee: Toivo Adams
Priority: Minor
 Fix For: 0.3.0


 Currently 
 /nifi-framework-core/src/test/java/org/apache/nifi/controller/StandardFlowServiceTest.java
  :
 [Error] :22:16: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comment}' is expected. 
 [Error] :28:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{styles}' is expected. 
 [Error] :46:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{styles}' is expected. 
 [Error] :69:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :75:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :80:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{size}' is expected. 
 [Error] :87:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comment}' is expected. 
 [Error] :93:24: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{styles}' is expected. 
 [Error] :112:24: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :118:24: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :126:25: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 [Error] :142:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comment}' is expected. 
 [Error] :152:21: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 [Error] :169:21: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 [Error] :186:21: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 And finally public void testLoadExistingFlow() test fails: 
 org.apache.nifi.controller.FlowSynchronizationException: 
 java.lang.NullPointerException: Name is null 
 at 
 org.apache.nifi.controller.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:317)
  
 at 
 org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1154)
  
 at 
 org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.load(StandardXMLFlowConfigurationDAO.java:72)
  
 at 
 org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:608)
  
 at 
 org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:458)
  
 at 
 org.apache.nifi.controller.StandardFlowServiceTest.testLoadExistingFlow(StandardFlowServiceTest.java:98)
  
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  
 at java.lang.reflect.Method.invoke(Method.java:606) 
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
  
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
  
 at 

[jira] [Commented] (NIFI-472) When running NiFi with the run.as property specified in the bootstrap.conf file, the run.as user should own the nifi.pid file

2015-07-03 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14613215#comment-14613215
 ] 

Mark Payne commented on NIFI-472:
-

Applied patch. Ran on CentOS 7. All seems to work well. Ran the application as 
root with run.as set to mark and then was able to stop the application as 
myself. Verified that I owned the file and that permissions were 600. +1

 When running NiFi with the run.as property specified in the bootstrap.conf 
 file, the run.as user should own the nifi.pid file
 ---

 Key: NIFI-472
 URL: https://issues.apache.org/jira/browse/NIFI-472
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Aldrin Piri
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-472-Refining-the-mechanism-to-carry-out-running.patch


 Currently, if I set the run.as user to something like nifi and then I run 
 bin/nifi.sh start, a file named nifi.pid is created in the bin/ 
 directory, but it is owned by me. It should instead be owned by the run.as 
 user (nifi).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-732) GetKafka if stopped then started doesn't resume pulling messages

2015-07-03 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14613169#comment-14613169
 ] 

Mark Payne commented on NIFI-732:
-

[~brianghig] I'd be hesitant to remove the timeouts. We should probably make 
proper use of them. The issue with removing them is that if it never gets a 
response, and the user clicks Stop, the processor will never stop. It will just 
sit and wait indefinitely to finish reading from the socket.

 GetKafka if stopped then started doesn't resume pulling messages
 

 Key: NIFI-732
 URL: https://issues.apache.org/jira/browse/NIFI-732
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
 Environment: linux
Reporter: Joseph Witt
Assignee: Mark Payne
 Fix For: 0.2.0


 A nifi user reported that they had to restart nifi to get the GetKafka 
 processor to resume pulling data once they had stopped the processor.  Upon 
 restarting it showed that it was started but did not resume pulling data.
 Need to attempt to reproduce and resolve.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-733) GetKafka group identifier is ignored

2015-07-03 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-733:

Attachment: 0001-NIFI-733-Make-use-of-Client-Name-Zookeeper-Timeout-K.patch

 GetKafka group identifier is ignored
 

 Key: NIFI-733
 URL: https://issues.apache.org/jira/browse/NIFI-733
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Joseph Witt
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-733-Make-use-of-Client-Name-Zookeeper-Timeout-K.patch


 A NiFi user reported that the GetKafka processor has a group identifier 
 feature that doesn't work as expected.  After initial code review it appears 
 the group identifier property that a user can set is ignored which appears to 
 be a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-732) GetKafka if stopped then started doesn't resume pulling messages

2015-07-03 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14613172#comment-14613172
 ] 

Mark Payne commented on NIFI-732:
-

Actually, it looks like if not set it defaults to 6 seconds. This is far better 
than waiting indefinitely. I could see wanting a longer timeout than 6 seconds, 
though.

 GetKafka if stopped then started doesn't resume pulling messages
 

 Key: NIFI-732
 URL: https://issues.apache.org/jira/browse/NIFI-732
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
 Environment: linux
Reporter: Joseph Witt
Assignee: Mark Payne
 Fix For: 0.2.0


 A nifi user reported that they had to restart nifi to get the GetKafka 
 processor to resume pulling data once they had stopped the processor.  Upon 
 restarting it showed that it was started but did not resume pulling data.
 Need to attempt to reproduce and resolve.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-732) GetKafka if stopped then started doesn't resume pulling messages

2015-07-03 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14613191#comment-14613191
 ] 

Mark Payne commented on NIFI-732:
-

I went ahead and addressed the timeouts (and all other properties that were 
exposed and not used :( ) in NIFI-733.

 GetKafka if stopped then started doesn't resume pulling messages
 

 Key: NIFI-732
 URL: https://issues.apache.org/jira/browse/NIFI-732
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
 Environment: linux
Reporter: Joseph Witt
Assignee: Mark Payne
 Fix For: 0.2.0


 A nifi user reported that they had to restart nifi to get the GetKafka 
 processor to resume pulling data once they had stopped the processor.  Upon 
 restarting it showed that it was started but did not resume pulling data.
 Need to attempt to reproduce and resolve.





[jira] [Created] (NIFI-749) Unit tests fail on Windows for InvokeHTTP

2015-07-03 Thread Mark Payne (JIRA)
Mark Payne created NIFI-749:
---

 Summary: Unit tests fail on Windows for InvokeHTTP
 Key: NIFI-749
 URL: https://issues.apache.org/jira/browse/NIFI-749
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 0.2.0
Reporter: Mark Payne
Priority: Blocker
 Fix For: 0.2.0
 Attachments: 0001-NIFI-749-Ignore-line-endings-in-unit-test.patch







[jira] [Updated] (NIFI-749) Unit tests fail on Windows for InvokeHTTP

2015-07-03 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-749:

Attachment: 0001-NIFI-749-Ignore-line-endings-in-unit-test.patch

 Unit tests fail on Windows for InvokeHTTP
 -

 Key: NIFI-749
 URL: https://issues.apache.org/jira/browse/NIFI-749
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 0.2.0
Reporter: Mark Payne
Priority: Blocker
 Fix For: 0.2.0

 Attachments: 0001-NIFI-749-Ignore-line-endings-in-unit-test.patch








[jira] [Commented] (NIFI-745) Disabling Controller Service stuck

2015-07-03 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14613317#comment-14613317
 ] 

Mark Payne commented on NIFI-745:
-

[~mcgilman] I think it's really a bit of a gray area. However, the consequences 
of how it works now are pretty terrible. And it's unlikely that someone really 
wants the behavior of being called repeatedly in the event that the service 
throws the exception. The method is invoked in a background thread, so if 
someone does need that behavior, they could certainly implement the method to 
catch the Exception, sleep for a bit, and retry. So I would vote that we go 
ahead and change it.
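To make the suggestion concrete: a service that genuinely wants retry-on-disable semantics could implement the retry loop inside its own lifecycle method, rather than relying on the framework to re-invoke it after an exception. The following is a minimal sketch of that pattern in plain Java; the method name, attempt limit, and backoff value are illustrative, not part of the NiFi API.

```java
public class RetryOnDisable {
    // Illustrative retry loop a controller service could run inside its own
    // disable-lifecycle method, instead of depending on the framework to
    // call it again whenever it throws.
    public static boolean disableWithRetry(Runnable cleanup, int maxAttempts, long sleepMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                cleanup.run();           // e.g., close connections, release resources
                return true;             // cleanup succeeded; service can be marked Disabled
            } catch (RuntimeException e) {
                if (attempt == maxAttempts) {
                    return false;        // give up; log and move on, as the framework would
                }
                try {
                    Thread.sleep(sleepMillis); // back off briefly before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }
}
```

With this in place, the framework can safely invoke the method exactly once, matching how @OnStopped / @OnUnscheduled behave for Processors.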

 Disabling Controller Service stuck
 --

 Key: NIFI-745
 URL: https://issues.apache.org/jira/browse/NIFI-745
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman
 Fix For: 0.2.0


 Per the nifi-api a controller service OnDisable method will be invoked when 
 the user disables that service. If that method fails with an exception it 
 will be retried a short time later. This will continue until it successfully 
 completes.
 Unfortunately, this means that if a service continually throws an exception 
 during OnDisable, the user will not be able to do anything with the service. 
 This is because controller services need to be Disabled in order to support 
 editing its configuration or attempting to Enable. The service in question 
 will not transition to the Disabled state until its OnDisable completes 
 without issue.





[jira] [Resolved] (NIFI-738) Do not write conversion error messages to flow file content

2015-07-03 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne resolved NIFI-738.
-
Resolution: Fixed

 Do not write conversion error messages to flow file content
 ---

 Key: NIFI-738
 URL: https://issues.apache.org/jira/browse/NIFI-738
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 0.1.0
Reporter: Ryan Blue
Assignee: Ryan Blue
 Fix For: 0.2.0


 NIFI-551 extended the error handling provided by the ConvertJSONToAvro 
 processor, but wrote error messages as the content of a file sent on the 
 failure relationship. I think the right thing to do is to output the bad 
 records as the file content and put the error messages in the outgoing 
 attributes.
 NIFI-551 wasn't included in 0.1.0, so changing this behavior is safe. 
 Consequently, I'd like to get this fix into 0.2.0.





[jira] [Updated] (NIFI-717) nifi-bootstrap.log written to directory relative to current working directory

2015-07-03 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-717:

Attachment: 0001-NIFI-717-Set-working-directory-to-NIFI_HOME-before-r.patch

 nifi-bootstrap.log written to directory relative to current working directory
 -

 Key: NIFI-717
 URL: https://issues.apache.org/jira/browse/NIFI-717
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman
Priority: Minor
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-717-Set-working-directory-to-NIFI_HOME-before-r.patch


 It appears that nifi-bootstrap.log is written to a directory that is relative 
 to the current working directory. If NiFi is launched from outside $NIFI_HOME, 
 the logs end up outside of $NIFI_HOME. This is confusing since it's configured 
 to be written to logs/ just like nifi-app.log and nifi-user.log, but it is 
 written to a logs/ directory in a different location.
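The mechanics of the bug are easy to demonstrate: a relative path such as "logs" resolves against the JVM's working directory, which is wherever the process was launched from. A small sketch, with the "/opt/nifi" home directory purely illustrative:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class LogDirDemo {
    // A relative path like "logs" resolves against the process working
    // directory -- i.e., wherever the user happened to launch from.
    public static Path resolvedAgainstCwd(String relative) {
        return Paths.get(relative).toAbsolutePath().normalize();
    }

    // Anchoring against an explicit home directory (which is what setting the
    // working directory to NIFI_HOME before launch effectively achieves)
    // makes the result independent of the launch location.
    public static Path resolvedAgainstHome(String home, String relative) {
        return Paths.get(home).resolve(relative).normalize();
    }
}
```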





[jira] [Resolved] (NIFI-734) GetKafka Kafka Timeout property is ignored

2015-07-03 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne resolved NIFI-734.
-
Resolution: Duplicate
Fix Version/s: 0.2.0

This issue was addressed in NIFI-733. There were a few properties that were 
ignored; they were all addressed in NIFI-733.

 GetKafka Kafka Timeout property is ignored
 --

 Key: NIFI-734
 URL: https://issues.apache.org/jira/browse/NIFI-734
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Brian Ghigiarelli
 Fix For: 0.2.0


 The GetKafka processor has a Kafka Timeout property that doesn't work as 
 expected. This property should likely be passed to the Kafka Consumer 
 properties as consumer.timeout.ms, but it is instead ignored.
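A minimal sketch of the wiring the ticket describes: the processor-level timeout value should end up in the consumer configuration under the consumer.timeout.ms key rather than being dropped. The 30-second value in the test is illustrative only.

```java
import java.util.Properties;

public class KafkaTimeoutConfig {
    // Sketch of passing a processor-level timeout through to the Kafka
    // consumer configuration, as the ticket suggests. Only the property
    // name consumer.timeout.ms comes from the report; everything else
    // here is an illustrative assumption.
    public static Properties consumerProps(long timeoutMillis) {
        Properties props = new Properties();
        props.setProperty("consumer.timeout.ms", String.valueOf(timeoutMillis));
        return props;
    }
}
```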





[jira] [Commented] (NIFI-745) Disabling Controller Service stuck

2015-07-03 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14613407#comment-14613407
 ] 

Mark Payne commented on NIFI-745:
-

Thanks, Aldrin.

Good catch, I did forget to run the contrib-check profile. Fixed the issue and 
pushed.

 Disabling Controller Service stuck
 --

 Key: NIFI-745
 URL: https://issues.apache.org/jira/browse/NIFI-745
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-745-Only-call-methods-with-OnDisabled-once-rega.patch


 Per the nifi-api a controller service OnDisable method will be invoked when 
 the user disables that service. If that method fails with an exception it 
 will be retried a short time later. This will continue until it successfully 
 completes.
 Unfortunately, this means that if a service continually throws an exception 
 during OnDisable, the user will not be able to do anything with the service. 
 This is because controller services need to be Disabled in order to support 
 editing its configuration or attempting to Enable. The service in question 
 will not transition to the Disabled state until its OnDisable completes 
 without issue.





[jira] [Updated] (NIFI-745) Disabling Controller Service stuck

2015-07-03 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-745:

Attachment: 0001-NIFI-745-Only-call-methods-with-OnDisabled-once-rega.patch

 Disabling Controller Service stuck
 --

 Key: NIFI-745
 URL: https://issues.apache.org/jira/browse/NIFI-745
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-745-Only-call-methods-with-OnDisabled-once-rega.patch


 Per the nifi-api a controller service OnDisable method will be invoked when 
 the user disables that service. If that method fails with an exception it 
 will be retried a short time later. This will continue until it successfully 
 completes.
 Unfortunately, this means that if a service continually throws an exception 
 during OnDisable, the user will not be able to do anything with the service. 
 This is because controller services need to be Disabled in order to support 
 editing its configuration or attempting to Enable. The service in question 
 will not transition to the Disabled state until its OnDisable completes 
 without issue.





[jira] [Commented] (NIFI-732) GetKafka if stopped then started doesn't resume pulling messages

2015-07-03 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14613413#comment-14613413
 ] 

Mark Payne commented on NIFI-732:
-

[~brianghig] I am not able to duplicate this issue. Any insight as to how to 
reproduce it?

 GetKafka if stopped then started doesn't resume pulling messages
 

 Key: NIFI-732
 URL: https://issues.apache.org/jira/browse/NIFI-732
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
 Environment: linux
Reporter: Joseph Witt
Assignee: Mark Payne
 Fix For: 0.2.0


 A nifi user reported that they had to restart nifi to get the GetKafka 
 processor to resume pulling data once they had stopped the processor.  Upon 
 restarting it showed that it was started but did not resume pulling data.
 Need to attempt to reproduce and resolve.





[jira] [Commented] (NIFI-743) .getSolr-mock-processor and .httpCache-mock-processor files in conf dir

2015-07-03 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14613355#comment-14613355
 ] 

Mark Payne commented on NIFI-743:
-

Code looks good. Confirmed that build is okay, all unit tests pass. The files 
are no longer appearing in the conf/ directory. +1

Thanks for knocking this out!

 .getSolr-mock-processor and .httpCache-mock-processor files in conf dir
 ---

 Key: NIFI-743
 URL: https://issues.apache.org/jira/browse/NIFI-743
 Project: Apache NiFi
  Issue Type: Bug
  Components: Tools and Build
Reporter: Mark Payne
Assignee: Bryan Bende
 Fix For: 0.2.0

 Attachments: NIFI-743-2.patch, NIFI-743.patch


 I'm not sure where these are coming from but when I do a clean build, I'm 
 ending up with 2 files in the conf/ directory that shouldn't be there: 
 .httpCache-mock-processor and .getSolr-mock-processor.
 Not sure if these were created when I did the build or when I launched the 
 application, but either way they shouldn't be there.





[jira] [Commented] (NIFI-749) Unit tests fail on Windows for InvokeHTTP

2015-07-03 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14613368#comment-14613368
 ] 

Mark Payne commented on NIFI-749:
-

[~jskora] no worries, that's one of the great things about apache land. Lots of 
people testing in lots of different environments. 

Thanks for the contribution!

 Unit tests fail on Windows for InvokeHTTP
 -

 Key: NIFI-749
 URL: https://issues.apache.org/jira/browse/NIFI-749
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 0.2.0
Reporter: Mark Payne
Priority: Blocker
 Fix For: 0.2.0

 Attachments: 0001-NIFI-749-Ignore-line-endings-in-unit-test.patch








[jira] [Created] (NIFI-747) ListenHTTP should expose property for the URL path to accept data on

2015-07-02 Thread Mark Payne (JIRA)
Mark Payne created NIFI-747:
---

 Summary: ListenHTTP should expose property for the URL path to 
accept data on
 Key: NIFI-747
 URL: https://issues.apache.org/jira/browse/NIFI-747
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Mark Payne
 Fix For: 0.3.0


When using ListenHTTP, you have to post to 
http(s)://hostname:port/contentListener

The /contentListener part is often problematic, as it's often mistyped and 
really serves no purpose, since the processor starts an embedded web server 
with only a single servlet.

We cannot remove the /contentListener path because we need to maintain 
backward compatibility. However, we can provide a new property to configure it, 
setting the default to /contentListener. This way, when a user adds a 
ListenHTTP Processor, he/she can change the path to /
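A minimal sketch of the proposed behavior: an absent value keeps the historical /contentListener path (preserving backward compatibility), while a configured value replaces it. The normalization of a missing leading slash is an assumption on my part, not something decided in the ticket.

```java
public class ListenPathConfig {
    public static final String DEFAULT_PATH = "/contentListener";

    // If no value is configured, keep the historical default so existing
    // clients continue to work; otherwise use the configured path.
    // Prepending a missing "/" is an illustrative convenience.
    public static String effectivePath(String configured) {
        if (configured == null || configured.isEmpty()) {
            return DEFAULT_PATH;
        }
        return configured.startsWith("/") ? configured : "/" + configured;
    }
}
```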





[jira] [Updated] (NIFI-747) ListenHTTP should expose property for the URL path to accept data on

2015-07-02 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-747:

Labels: beginner newbie  (was: )

 ListenHTTP should expose property for the URL path to accept data on
 

 Key: NIFI-747
 URL: https://issues.apache.org/jira/browse/NIFI-747
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Mark Payne
  Labels: beginner, newbie
 Fix For: 0.3.0


 When using ListenHTTP, you have to post to 
 http(s)://hostname:port/contentListener
 The /contentListener part is often problematic, as it's often mistyped and 
 really serves no purpose, since the processor starts an embedded web server 
 with only a single servlet.
 We cannot remove the /contentListener path because we need to maintain 
 backward compatibility. However, we can provide a new property to configure 
 it, setting the default to /contentListener. This way, when a user adds a 
 ListenHTTP Processor, he/she can change the path to /





[jira] [Commented] (NIFI-745) Disabling Controller Service stuck

2015-07-02 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612108#comment-14612108
 ] 

Mark Payne commented on NIFI-745:
-

[~mcgilman] is the suggestion here to change the behavior, so that if a method 
annotated with @OnDisable throws an Exception, we just log it and move on?

I like that approach. I don't know why in the world I wrote it such that it 
will continually retry. It should certainly behave the same as Processors do 
with their @OnStopped / @OnUnscheduled. They are notified of the lifecycle 
event and given a chance to handle it. Then we move on, regardless of whether 
or not the method succeeds.

 Disabling Controller Service stuck
 --

 Key: NIFI-745
 URL: https://issues.apache.org/jira/browse/NIFI-745
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman
 Fix For: 0.2.0


 Per the nifi-api a controller service OnDisable method will be invoked when 
 the user disables that service. If that method fails with an exception it 
 will be retried a short time later. This will continue until it successfully 
 completes.
 Unfortunately, this means that if services continually throws an exception 
 during OnDisable the user will not be able to do anything with the service. 
 This is because controller services need to be Disabled in order to support 
 editing its configuration or attempting to Enable. The service in question 
 will not transition to the Disabled state until its OnDisable completes 
 without issue.





[jira] [Commented] (NIFI-740) StandardFlowServiceTest need to be updated.

2015-07-02 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612098#comment-14612098
 ] 

Mark Payne commented on NIFI-740:
-

Toivo,

I think the XSD is valid and up-to-date, but it's possible that it is not. I 
would consider the FlowFromDOMFactory the de facto validation logic more so 
than the XSD.

I am not opposed to changing the logic to use JAXB, but we would have to first 
ensure that the XML generated by JAXB is identical to what is generated now 
(which means we would have to verify that the XSD is correct). To date, XML 
validation has largely been ignored.

The XML files in those tests, if I am not mistaken, are actually several years 
old, from when NiFi was in the SNAPSHOT phase of the first release. It's 
changed quite a bit since then, so the XML just needs to be updated to adhere 
to the actual schema.

Thanks
-Mark
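For anyone who picks up the follow-on ticket: verifying the XSD before trusting it for JAXB binding is straightforward with the standard JAXP validation API. A self-contained sketch (the tiny schema and documents in the test are illustrative, not the real flow.xml schema):

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

public class XsdCheck {
    // Validates an XML document against an XSD using the standard JAXP
    // validation API -- the kind of check the flow.xml schema would need
    // to pass before it could serve as the basis for JAXB binding.
    public static boolean isValid(String xsd, String xml) {
        try {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(xsd)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false; // schema parse failure or document invalid
        }
    }
}
```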

 StandardFlowServiceTest need to be updated.
 ---

 Key: NIFI-740
 URL: https://issues.apache.org/jira/browse/NIFI-740
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Toivo Adams
Assignee: Toivo Adams
Priority: Minor
 Fix For: 0.3.0


 Currently 
 /nifi-framework-core/src/test/java/org/apache/nifi/controller/StandardFlowServiceTest.java
  :
 [Error] :22:16: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comment}' is expected. 
 [Error] :28:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{styles}' is expected. 
 [Error] :46:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{styles}' is expected. 
 [Error] :69:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :75:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :80:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{size}' is expected. 
 [Error] :87:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comment}' is expected. 
 [Error] :93:24: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{styles}' is expected. 
 [Error] :112:24: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :118:24: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :126:25: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 [Error] :142:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comment}' is expected. 
 [Error] :152:21: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 [Error] :169:21: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 [Error] :186:21: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 And finally public void testLoadExistingFlow() test fails: 
 org.apache.nifi.controller.FlowSynchronizationException: 
 java.lang.NullPointerException: Name is null 
 at 
 org.apache.nifi.controller.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:317)
  
 at 
 org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1154)
  
 at 
 org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.load(StandardXMLFlowConfigurationDAO.java:72)
  
 at 
 org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:608)
  
 at 
 org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:458)
  
 at 
 org.apache.nifi.controller.StandardFlowServiceTest.testLoadExistingFlow(StandardFlowServiceTest.java:98)
  
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  
 at java.lang.reflect.Method.invoke(Method.java:606) 
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  
 at 
 

[jira] [Commented] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very erratic

2015-07-02 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612081#comment-14612081
 ] 

Mark Payne commented on NIFI-731:
-

The provided patch yields better performance for me on Windows 8.1, on both my 
desktop and laptop, when using magnetic drives. With an SSD it still improves 
performance, but the improvement is not nearly as stark as the statistics 
provided above.

Moreover, it appears that when running on Linux (CentOS) and Mac OS X, this 
patch actually results in significantly reduced performance.

As a result, I suggest that we not include this patch in this baseline.

A better solution, as noted above, will come in NIFI-744, which is currently 
slated for version 0.3.0, as it is a much more involved change.

In the meantime, those affected by the issue can make the most of the current 
state by adhering to the following recommendations:
* When possible, configure Processors (especially source processors) to use a 
Run Duration (in the Settings tab of the configuration dialog) of 25 ms rather 
than 0 ms.
* In nifi.properties, change the value of the 
nifi.flowfile.repository.checkpoint.interval property from 2 mins to 30 
secs. This tends to provide very significant gains and smoother performance.
* In nifi.properties, change the value of the 
nifi.content.repository.archive.enabled property from true to false. This 
disables archiving of content; though it will improve performance, the content 
will not be available for view, download, or replay from the Data Provenance 
UI.
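Taken together, the nifi.properties changes recommended above would look like this (property names and values are the ones quoted in the comment; the defaults noted in the comments also come from it):

```properties
# Checkpoint the FlowFile repository more frequently (default is 2 mins)
nifi.flowfile.repository.checkpoint.interval=30 secs

# Disable content archiving (default is true); archived content will then be
# unavailable for view, download, or replay from the Data Provenance UI
nifi.content.repository.archive.enabled=false
```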

 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very erratic
 --

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-731-Updated-admin-guide-to-explain-the-flowfile.patch


 When the FlowFile Repository marks claims as destructable, it puts the 
 notification on a queue that the content repo pulls from. If the content repo 
 cannot keep up, the queue will fill, resulting in backpressure that prevents 
 the FlowFile repository from being updated. This, in turn, causes Processors 
 to block, waiting on space to become available. This is by design.
 However, the capacity of this queue is quite large, and the content repo 
 drains the entire queue, then destroys all content claims that are on it. As 
 a result, this act of destroying claims can take quite a long time, and 
 Processors can block for quite a period of time, leading to very sporadic 
 performance.
 Instead, the content repo should pull from the queue and destroy the claims 
 one at a time or in small batches, instead of draining the entire queue each 
 time. This should result in much less sporadic behavior.
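The small-batch drain described above can be sketched with java.util.concurrent's bounded drainTo; the batch size and the generic claim type are illustrative, not the actual repository internals:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

public class BatchDestroyer {
    // Sketch of the proposed fix: instead of draining the whole
    // destructable-claims queue in one pass (which can block producers for a
    // long stretch), pull at most batchSize claims per iteration so queue
    // space -- and therefore the FlowFile repository -- frees up steadily.
    public static <T> int drainOneBatch(BlockingQueue<T> queue, int batchSize, Consumer<T> destroy) {
        List<T> batch = new ArrayList<>(batchSize);
        int drained = queue.drainTo(batch, batchSize); // non-blocking, bounded drain
        for (T claim : batch) {
            destroy.accept(claim); // e.g., delete the backing content claim
        }
        return drained;
    }
}
```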





[jira] [Resolved] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very erratic

2015-07-02 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne resolved NIFI-731.
-
Resolution: Duplicate

Duplicate of NIFI-744

 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very erratic
 --

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-731-Updated-admin-guide-to-explain-the-flowfile.patch


 When the FlowFile Repository marks claims as destructable, it puts the 
 notification on a queue that the content repo pulls from. If the content repo 
 cannot keep up, the queue will fill, resulting in backpressure that prevents 
 the FlowFile repository from being updated. This, in turn, causes Processors 
 to block, waiting on space to become available. This is by design.
 However, the capacity of this queue is quite large, and the content repo 
 drains the entire queue, then destroys all content claims that are on it. As 
 a result, this act of destroying claims can take quite a long time, and 
 Processors can block for quite a period of time, leading to very sporadic 
 performance.
 Instead, the content repo should pull from the queue and destroy the claims 
 one at a time or in small batches, instead of draining the entire queue each 
 time. This should result in much less sporadic behavior.





[jira] [Reopened] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very erratic

2015-07-02 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne reopened NIFI-731:
-

 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very erratic
 --

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-731-Updated-admin-guide-to-explain-the-flowfile.patch


 When the FlowFile Repository marks claims as destructable, it puts the 
 notification on a queue that the content repo pulls from. If the content repo 
 cannot keep up, the queue will fill, resulting in backpressure that prevents 
 the FlowFile repository from being updated. This, in turn, causes Processors 
 to block, waiting on space to become available. This is by design.
 However, the capacity of this queue is quite large, and the content repo 
 drains the entire queue, then destroys all content claims that are on it. As 
 a result, this act of destroying claims can take quite a long time, and 
 Processors can block for quite a period of time, leading to very sporadic 
 performance.
 Instead, the content repo should pull from the queue and destroy the claims 
 one at a time or in small batches, instead of draining the entire queue each 
 time. This should result in much less sporadic behavior.





[jira] [Created] (NIFI-748) If unable to find a specific Provenance event, should not fail entire search

2015-07-02 Thread Mark Payne (JIRA)
Mark Payne created NIFI-748:
---

 Summary: If unable to find a specific Provenance event, should not 
fail entire search
 Key: NIFI-748
 URL: https://issues.apache.org/jira/browse/NIFI-748
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Reporter: Mark Payne
 Fix For: 0.3.0


We have a case where provenance data is being written to a disk that can be 
ejected, and the disk was accidentally ejected while NiFi was running. The 
Provenance Event appears to have been indexed, but the event is not in the 
repo.

Specifically, we are reaching Line 104 of DocsReader:
{code}
throw new IOException("Failed to find Provenance Event " + d);
{code}

As a result, searching for a specific Component ID returns an error, so we 
can't search on that Component ID at all (unless we shrink the time range to a 
window when the problem didn't occur).

We should generate a warning, notify the user that X number of events could 
not be found, and show what we can, rather than erroring out entirely.
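The suggested behavior can be sketched as a lenient resolution step: resolve each indexed hit against the repository, count the ones that cannot be found instead of throwing, and return the rest so the caller can warn the user. The class and method names here are illustrative, not the actual DocsReader API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

public class LenientEventLookup {
    public static class Result<E> {
        public final List<E> found = new ArrayList<>();
        public final int missing;
        Result(List<E> found, int missing) {
            this.found.addAll(found);
            this.missing = missing;
        }
    }

    // Resolve each indexed hit against the repository; skip and count hits
    // whose events are absent (e.g., the backing disk was ejected) rather
    // than failing the entire search. The caller can then warn the user
    // that `missing` events could not be retrieved.
    public static <D, E> Result<E> resolve(List<D> hits, Function<D, Optional<E>> lookup) {
        List<E> found = new ArrayList<>();
        int missing = 0;
        for (D hit : hits) {
            Optional<E> event = lookup.apply(hit);
            if (event.isPresent()) {
                found.add(event.get());
            } else {
                missing++; // indexed, but not present in the repository
            }
        }
        return new Result<>(found, missing);
    }
}
```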





[jira] [Commented] (NIFI-740) StandardFlowServiceTest need to be updated.

2015-07-02 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612154#comment-14612154
 ] 

Mark Payne commented on NIFI-740:
-

Toivo,

I agree with your assessment of what is in scope for this ticket. But if you 
think JAXB makes sense going forward, it may make sense to create a new ticket 
for that and play around with it and see if it seems like a good alternative.

 StandardFlowServiceTest need to be updated.
 ---

 Key: NIFI-740
 URL: https://issues.apache.org/jira/browse/NIFI-740
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Toivo Adams
Assignee: Toivo Adams
Priority: Minor
 Fix For: 0.3.0


 Currently 
 /nifi-framework-core/src/test/java/org/apache/nifi/controller/StandardFlowServiceTest.java
  :
 [Error] :22:16: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comment}' is expected. 
 [Error] :28:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{styles}' is expected. 
 [Error] :46:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{styles}' is expected. 
 [Error] :69:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :75:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :80:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{size}' is expected. 
 [Error] :87:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comment}' is expected. 
 [Error] :93:24: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{styles}' is expected. 
 [Error] :112:24: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :118:24: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comments}' is expected. 
 [Error] :126:25: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 [Error] :142:20: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{comment}' is expected. 
 [Error] :152:21: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 [Error] :169:21: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 [Error] :186:21: cvc-complex-type.2.4.a: Invalid content was found starting 
 with element 'style'. One of '{sourceId}' is expected. 
 And finally, the public void testLoadExistingFlow() test fails:

 org.apache.nifi.controller.FlowSynchronizationException: java.lang.NullPointerException: Name is null
     at org.apache.nifi.controller.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:317)
     at org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1154)
     at org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.load(StandardXMLFlowConfigurationDAO.java:72)
     at org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:608)
     at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:458)
     at org.apache.nifi.controller.StandardFlowServiceTest.testLoadExistingFlow(StandardFlowServiceTest.java:98)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:606)
     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
     at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
     at

[jira] [Commented] (NIFI-694) If Enabling Controller Service fails, no indication is provided to user

2015-07-02 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612448#comment-14612448
 ] 

Mark Payne commented on NIFI-694:
-

Rebuilt from branch, verified functionality. All looks good. +1

 If Enabling Controller Service fails, no indication is provided to user
 ---

 Key: NIFI-694
 URL: https://issues.apache.org/jira/browse/NIFI-694
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework, Core UI
Reporter: Mark Payne
 Fix For: 0.2.0


 To replicate the issue:
 1. Add a DBCPService controller service.
 2. Configure service. For the Driver Url use file://tmp/non-existent-file 
 or use an invalid Driver Class Name.
 3. Click Apply.
 4. Click Enable.
 The UI will show the spinner indefinitely, as the service will keep failing 
 to enable because it throws Exceptions from its @OnEnabled method. The UI 
 should provide some sort of indication that this is occurring. Otherwise, it 
 appears to the user that the system is non-responsive.
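 The endless spinner comes from the framework repeatedly retrying the failing @OnEnabled method. That retry behavior can be modeled in plain Java (a simplified illustration, not NiFi's actual scheduling code):

 {code}
 public class EnableRetrySketch {
     interface Service {
         void onEnabled() throws Exception;
     }

     // Simplified model: keep invoking @OnEnabled until it succeeds or we give up.
     // Returns the attempt number on success, or -1 if the service never enabled.
     static int tryEnable(Service service, int maxAttempts) {
         for (int attempt = 1; attempt <= maxAttempts; attempt++) {
             try {
                 service.onEnabled();
                 return attempt; // enabled successfully on this attempt
             } catch (Exception e) {
                 // NiFi logs the failure and retries; with a bad Driver URL this
                 // never succeeds, which is why the UI spinner never stops.
             }
         }
         return -1;
     }
 }
 {code}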



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-724) Controller Services and Reporting Tasks should be able to emit bulletins

2015-07-02 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612449#comment-14612449
 ] 

Mark Payne commented on NIFI-724:
-

Rebuilt from branch, verified functionality. All looks good. +1

 Controller Services and Reporting Tasks should be able to emit bulletins
 

 Key: NIFI-724
 URL: https://issues.apache.org/jira/browse/NIFI-724
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0








[jira] [Commented] (NIFI-695) Cancel Enable/Disable controller service should provide immediate feedback to user

2015-07-02 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612445#comment-14612445
 ] 

Mark Payne commented on NIFI-695:
-

Rebuilt from branch, verified functionality. All looks good. +1

 Cancel Enable/Disable controller service should provide immediate feedback to 
 user
 --

 Key: NIFI-695
 URL: https://issues.apache.org/jira/browse/NIFI-695
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Reporter: Matt Gilman
Assignee: Matt Gilman
Priority: Minor
 Fix For: 0.2.0








[jira] [Commented] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very sporadic

2015-07-01 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14610193#comment-14610193
 ] 

Mark Payne commented on NIFI-731:
-

The patch supplied here provides a few improvements. It allows the user to 
synchronize individual partitions of the FlowFile repo at regular intervals, 
which allows some content claims to start being archived/destroyed 
immediately. Currently, we wait until the repo is checkpointed before 
destroying all content claims at once, so this change provides smoother 
performance. Additionally, it allows the user to change the number of 
partitions used by the FlowFile Repo. This was done because experimentation 
shows that 16 partitions are generally enough and result in much better 
performance than 256 - so the default was also changed from 256 to 16.

A better but much more involved solution is to allow the Content Repository to 
append to an existing Content Claim, as described in NIFI-744. That will result 
in far fewer files needing to be deleted, which should very much alleviate this 
problem.
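
For reference, these knobs live in nifi.properties. A sketch of what a tuned configuration might look like follows; note that the sync-interval key name below is purely illustrative (the exact property name is documented in the Admin Guide changes included with the patch):

```properties
# Number of partitions for the FlowFile Repository; the patch changes the default from 256 to 16
nifi.flowfile.repository.partitions=16
# Illustrative key name: how frequently each partition performs a sync
nifi.flowfile.repository.sync.interval=100 updates
```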

 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very sporadic
 ---

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-731-Updated-admin-guide-to-explain-the-flowfile.patch


 When the FlowFile Repository marks claims as destructible, it puts the 
 notification on a queue that the content repo pulls from. If the content repo 
 cannot keep up, the queue will fill, resulting in backpressure that prevents 
 the FlowFile Repository from being updated. This, in turn, causes Processors 
 to block, waiting for space to become available. This is by design.
 However, the capacity of this queue is quite large, and the content repo 
 drains the entire queue, then destroys all content claims that are on it. As 
 a result, this act of destroying claims can take quite a long time, and 
 Processors can block for quite a period of time, leading to very sporadic 
 performance.
 Instead, the content repo should pull from the queue and destroy the claims 
 one at a time or in small batches, instead of draining the entire queue each 
 time. This should result in much less sporadic behavior.
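
 The small-batch draining proposed above can be sketched roughly as follows (class, field, and constant names here are illustrative, not NiFi's actual internals):

 {code}
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.BlockingQueue;
 import java.util.concurrent.LinkedBlockingQueue;

 public class ClaimDestroyer {
     private static final int MAX_BATCH_SIZE = 100; // illustrative cap per pass

     private final BlockingQueue<String> destructibleClaims = new LinkedBlockingQueue<>();

     public void offer(final String claimId) {
         destructibleClaims.offer(claimId);
     }

     // Instead of draining the whole queue at once, pull at most a small batch
     // per invocation so FlowFile Repository updates are never blocked for long.
     public int destroyBatch() {
         final List<String> batch = new ArrayList<>(MAX_BATCH_SIZE);
         destructibleClaims.drainTo(batch, MAX_BATCH_SIZE);
         for (final String claim : batch) {
             destroy(claim);
         }
         return batch.size();
     }

     private void destroy(final String claim) {
         // In the real repository this would delete or archive the backing file on disk.
     }
 }
 {code}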





[jira] [Created] (NIFI-744) Allow FileSystemRepository to write to the same file for multiple (non-parallel) sessions

2015-07-01 Thread Mark Payne (JIRA)
Mark Payne created NIFI-744:
---

 Summary: Allow FileSystemRepository to write to the same file for 
multiple (non-parallel) sessions
 Key: NIFI-744
 URL: https://issues.apache.org/jira/browse/NIFI-744
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne


Currently, when a ProcessSession is committed, the Content Claim that was being 
written to is now finished and will never be written to again.

When a flow has processors that generate many, many FlowFiles, each in its own 
session, this means that we also have many, many files on disk in the Content 
Repository. Generally, writing to these files hasn't been a problem. However, 
when the files are to be archived or destroyed, this is very taxing and can 
cause erratic behavior.
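
At the file-system level, the appending approach would look roughly like this (a sketch under assumed semantics, not the actual FileSystemRepository implementation):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AppendClaimSketch {
    // Append one session's content onto a shared claim file, returning the
    // (offset, length) pair that lets that content be addressed individually
    // later. Many sessions can thus share a single file on disk.
    public static long[] appendContent(Path claimFile, byte[] content) throws IOException {
        long offset = Files.exists(claimFile) ? Files.size(claimFile) : 0L;
        Files.write(claimFile, content, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        return new long[] { offset, content.length };
    }
}
{code}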





[jira] [Updated] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very erratic

2015-07-01 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-731:

Summary: If content repo is unable to destroy content as fast as it is 
generated, nifi performance becomes very erratic  (was: If content repo is 
unable to destroy content as fast as it is generated, nifi performance becomes 
very sporadic)

 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very erratic
 --

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-731-Updated-admin-guide-to-explain-the-flowfile.patch




[jira] [Updated] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very sporadic

2015-07-01 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-731:

Attachment: 0001-NIFI-731-Updated-admin-guide-to-explain-the-flowfile.patch

 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very sporadic
 ---

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-731-Updated-admin-guide-to-explain-the-flowfile.patch




[jira] [Updated] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very sporadic

2015-07-01 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-731:

Attachment: (was: 
0001-NIFI-731-Updated-admin-guide-to-explain-the-flowfile.patch)

 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very sporadic
 ---

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-731-Updated-admin-guide-to-explain-the-flowfile.patch




[jira] [Commented] (NIFI-743) .getSolr-mock-processor and .httpCache-mock-processor files in conf dir

2015-06-30 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609288#comment-14609288
 ] 

Mark Payne commented on NIFI-743:
-

Ah, I gotcha. I think OnShutdown or OnStopped, either one, is fine. The issue 
here is likely that there's no OnRemoved method to cleanup. Should have 
something like:

@OnRemoved
public void cleanup() {
    stateFile.delete();
}

That way, even though the file gets created, it will be cleaned up properly as 
well.

Though if we wanted to avoid creating the file all together, we could move the 
code that creates the file to @OnStopped.

 .getSolr-mock-processor and .httpCache-mock-processor files in conf dir
 ---

 Key: NIFI-743
 URL: https://issues.apache.org/jira/browse/NIFI-743
 Project: Apache NiFi
  Issue Type: Bug
  Components: Tools and Build
Reporter: Mark Payne
 Fix For: 0.2.0


 I'm not sure where these are coming from but when I do a clean build, I'm 
 ending up with 2 files in the conf/ directory that shouldn't be there: 
 .httpCache-mock-processor and .getSolr-mock-processor.
 Not sure if these were created when I did the build or when I launched the 
 application, but either way they shouldn't be there.





[jira] [Comment Edited] (NIFI-743) .getSolr-mock-processor and .httpCache-mock-processor files in conf dir

2015-06-30 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609288#comment-14609288
 ] 

Mark Payne edited comment on NIFI-743 at 7/1/15 12:02 AM:
--

Ah, I gotcha. I think OnShutdown or OnStopped, either one, is fine. The issue 
here is likely that there's no OnRemoved method to cleanup. Should have 
something like:

{code}
@OnRemoved
public void cleanup() {
    stateFile.delete();
}
{code}

That way, even though the file gets created, it will be cleaned up properly as 
well.

Though if we wanted to avoid creating the file all together, we could move the 
code that creates the file to @OnStopped.


was (Author: markap14):
Ah, I gotcha. I think OnShutdown or OnStopped, either one, is fine. The issue 
here is likely that there's no OnRemoved method to cleanup. Should have 
something like:

@OnRemoved
public void cleanup() {
    stateFile.delete();
}

That way, even though the file gets created, it will be cleaned up properly as 
well.

Though if we wanted to avoid creating the file all together, we could move the 
code that creates the file to @OnStopped.

 .getSolr-mock-processor and .httpCache-mock-processor files in conf dir
 ---

 Key: NIFI-743
 URL: https://issues.apache.org/jira/browse/NIFI-743
 Project: Apache NiFi
  Issue Type: Bug
  Components: Tools and Build
Reporter: Mark Payne
 Fix For: 0.2.0




[jira] [Commented] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very sporadic

2015-06-30 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14608507#comment-14608507
 ] 

Mark Payne commented on NIFI-731:
-

I have made some changes to the way that the FlowFileRepository and 
ContentRepository interact with one another. I created some benchmarks to 
compare the results before and after the changes. The changes include a new 
property in the nifi.properties file to configure how often the FlowFile Repo 
performs a 'sync' (and a good bit of documentation added to the Admin Guide 
about what this means).

Benchmark was performed against both my desktop and my laptop. Note that the 
flow used is designed specifically to ensure that this issue arises by creating 
Content Claims that are exactly 1 byte in size, so that massive stress is put 
on deleting tons of files. It is not intended to mimic a typical flow.

After the change
--
FlowFile Repo Settings:
8 partitions
Sync every 100 updates
 
Hardware:
Laptop: 1 drive, 5400 RPM
Desktop: 2 drives, 7200 RPM (1 for Content, 1 for FlowFile)
 
Flow:
GenerateFlowFile - LogAttribute
GenerateFlowFile set to 1 byte files, batch size of 1, 0 ms run duration. So 1 
byte per Content Claim/File on Disk
LogAttribute set to 'debug' level so it doesn't actually log. 25 ms run 
duration.
 
With Content Repo's archive disabled:
Laptop: 125,000 FlowFiles / 5 min. Warns about backpressure
Desktop: 1.03 million FlowFiles / 5 min. Does not warn about backpressure
 
With archive enabled:
Laptop:  25,000 FlowFiles / 5 min. Warns about backpressure
Desktop: 115,000 FlowFiles / 5 min. Warns about backpressure
 - Changed Batch Size property of GenerateFlowFile to 5 FlowFiles per Content 
Claim. Got 435,000 FlowFiles - about 5 times as much, which is what I expected. 
But a good sanity check.


Baseline to compare against, before the patch was applied

Laptop: Reached 60,000 FlowFiles/5 mins, then saw very long pause as the 
Content Repo destroyed content. FlowFiles per 5 mins dropped from 60K to 30K 
and eventually to under 15K and then back up and down and up and down. Pauses 
were very noticeable in the UI.
Desktop: 481,000 FlowFiles/5 mins, then saw very long pause as the Content Repo 
destroyed content. FlowFiles per 5 mins then dropped and fluctuated similarly 
to Laptop's results.



 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very sporadic
 ---

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0




[jira] [Updated] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very sporadic

2015-06-30 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-731:

Attachment: 0001-NIFI-731-Updated-admin-guide-to-explain-the-flowfile.patch

 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very sporadic
 ---

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-731-Updated-admin-guide-to-explain-the-flowfile.patch




[jira] [Updated] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very sporadic

2015-06-30 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-731:

Attachment: (was: 
0001-NIFI-731-Refactored-how-Content-and-FlowFile-Repos-i.patch)

 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very sporadic
 ---

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-731-Updated-admin-guide-to-explain-the-flowfile.patch




[jira] [Created] (NIFI-743) .getSolr-mock-processor and .httpCache-mock-processor files in conf dir

2015-06-30 Thread Mark Payne (JIRA)
Mark Payne created NIFI-743:
---

 Summary: .getSolr-mock-processor and .httpCache-mock-processor 
files in conf dir
 Key: NIFI-743
 URL: https://issues.apache.org/jira/browse/NIFI-743
 Project: Apache NiFi
  Issue Type: Bug
  Components: Tools and Build
Reporter: Mark Payne
 Fix For: 0.2.0


I'm not sure where these are coming from but when I do a clean build, I'm 
ending up with 2 files in the conf/ directory that shouldn't be there: 
.httpCache-mock-processor and .getSolr-mock-processor.

Not sure if these were created when I did the build or when I launched the 
application, but either way they shouldn't be there.





[jira] [Updated] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very sporadic

2015-06-30 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-731:

Attachment: (was: 
0001-NIFI-731-Refactored-how-Content-and-FlowFile-Repos-i.patch)

 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very sporadic
 ---

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-731-Refactored-how-Content-and-FlowFile-Repos-i.patch




[jira] [Updated] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very sporadic

2015-06-30 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-731:

Attachment: 0001-NIFI-731-Refactored-how-Content-and-FlowFile-Repos-i.patch

 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very sporadic
 ---

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-731-Refactored-how-Content-and-FlowFile-Repos-i.patch




[jira] [Updated] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very sporadic

2015-06-27 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-731:

Summary: If content repo is unable to destroy content as fast as it is 
generated, nifi performance becomes very sporadic  (was: If content repo is 
unable to destroy content as fast as it is generated, nifi performance becomes 
very sporatic)

 If content repo is unable to destroy content as fast as it is generated, nifi 
 performance becomes very sporadic
 ---

 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0




[jira] [Commented] (NIFI-717) nifi-bootstrap.log written to directory relative to current working directory

2015-06-27 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604295#comment-14604295
 ] 

Mark Payne commented on NIFI-717:
-

I think maybe the best solution is just to make sure that, in the nifi.sh 
script, we set the working directory before calling 'java' to launch the 
RunNiFi class.
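
One way to sketch that idea on the bootstrap side (a hypothetical helper, not the actual RunNiFi code) is to pin the child process's working directory when building the launch command, so relative paths like logs/ always resolve under NIFI_HOME:

{code}
import java.io.File;

public class LaunchSketch {
    // Illustrative: build the launcher for the NiFi JVM with its working
    // directory fixed to NIFI_HOME, regardless of where the user invoked us.
    public static ProcessBuilder buildLauncher(File nifiHome) {
        ProcessBuilder pb = new ProcessBuilder(
                "java", "-cp", "conf" + File.pathSeparator + "lib/*",
                "org.apache.nifi.NiFi");
        pb.directory(nifiHome); // child process working dir = NIFI_HOME
        return pb;
    }
}
{code}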

 nifi-bootstrap.log written to directory relative to current working directory
 -

 Key: NIFI-717
 URL: https://issues.apache.org/jira/browse/NIFI-717
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman
Priority: Minor
 Fix For: 0.2.0


 It appears that nifi-bootstrap.log is written to a directory that is relative 
 to the current working directory. If NiFi is launched from outside $NIFI_HOME 
 the logs end up outside of $NIFI_HOME. It is confusing since its configured 
 to be written to logs/ just like nifi-app.log and nifi-user.log but it is 
 written to logs/ in a different location.





[jira] [Updated] (NIFI-717) nifi-bootstrap.log written to directory relative to current working directory

2015-06-27 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-717:

Fix Version/s: (was: 0.3.0)
   0.2.0

 nifi-bootstrap.log written to directory relative to current working directory
 -

 Key: NIFI-717
 URL: https://issues.apache.org/jira/browse/NIFI-717
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman
Priority: Minor
 Fix For: 0.2.0


 It appears that nifi-bootstrap.log is written to a directory that is relative 
 to the current working directory. If NiFi is launched from outside $NIFI_HOME 
 the logs end up outside of $NIFI_HOME. It is confusing since it's configured 
 to be written to logs/ just like nifi-app.log and nifi-user.log but it is 
 written to logs/ in a different location.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-728) Unit test failure on multi-core/fast system builds: nifi-distributed-cache-server

2015-06-25 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601215#comment-14601215
 ] 

Mark Payne commented on NIFI-728:
-

Joe,

See if the above patch helps you out.

Thanks
-Mark

 Unit test failure on multi-core/fast system builds: 
 nifi-distributed-cache-server
 -

 Key: NIFI-728
 URL: https://issues.apache.org/jira/browse/NIFI-728
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Joseph Witt
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-728-Allow-Mock-Framework-to-use-property-descri.patch


 Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.16 sec  
 FAILURE! - in org.apache.nifi.distributed.cache.server.TestServerAndClient
 testNonPersistentMapServerAndClient(org.apache.nifi.distributed.cache.server.TestServerAndClient)
   Time elapsed: 0.001 sec   FAILURE!
 java.lang.AssertionError: Failed to enable Controller Service 
 MapServer[id=server] due to java.net.BindException: Address already in use
   at org.junit.Assert.fail(Assert.java:88)
   at 
 org.apache.nifi.util.StandardProcessorTestRunner.enableControllerService(StandardProcessorTestRunner.java:616)
   at 
 org.apache.nifi.distributed.cache.server.TestServerAndClient.testNonPersistentMapServerAndClient(TestServerAndClient.java:322)
 Results :
 Failed tests: 
   TestServerAndClient.testNonPersistentMapServerAndClient:322 Failed to 
 enable Controller Service MapServer[id=server] due to java.net.BindException: 
 Address already in use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-728) Unit test failure on multi-core/fast system builds: nifi-distributed-cache-server

2015-06-25 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-728:

Attachment: 0001-NIFI-728-Allow-Mock-Framework-to-use-property-descri.patch

 Unit test failure on multi-core/fast system builds: 
 nifi-distributed-cache-server
 -

 Key: NIFI-728
 URL: https://issues.apache.org/jira/browse/NIFI-728
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Joseph Witt
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-728-Allow-Mock-Framework-to-use-property-descri.patch


 Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.16 sec  
 FAILURE! - in org.apache.nifi.distributed.cache.server.TestServerAndClient
 testNonPersistentMapServerAndClient(org.apache.nifi.distributed.cache.server.TestServerAndClient)
   Time elapsed: 0.001 sec   FAILURE!
 java.lang.AssertionError: Failed to enable Controller Service 
 MapServer[id=server] due to java.net.BindException: Address already in use
   at org.junit.Assert.fail(Assert.java:88)
   at 
 org.apache.nifi.util.StandardProcessorTestRunner.enableControllerService(StandardProcessorTestRunner.java:616)
   at 
 org.apache.nifi.distributed.cache.server.TestServerAndClient.testNonPersistentMapServerAndClient(TestServerAndClient.java:322)
 Results :
 Failed tests: 
   TestServerAndClient.testNonPersistentMapServerAndClient:322 Failed to 
 enable Controller Service MapServer[id=server] due to java.net.BindException: 
 Address already in use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-731) If content repo is unable to destroy content as fast as it is generated, nifi performance becomes very sporatic

2015-06-25 Thread Mark Payne (JIRA)
Mark Payne created NIFI-731:
---

 Summary: If content repo is unable to destroy content as fast as 
it is generated, nifi performance becomes very sporatic
 Key: NIFI-731
 URL: https://issues.apache.org/jira/browse/NIFI-731
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0


When the FlowFile Repository marks claims as destructable, it puts the 
notification on a queue that the content repo pulls from. If the content repo 
cannot keep up, the queue will fill, resulting in backpressure that prevents 
the FlowFile repository from being updated. This, in turn, causes Processors to 
block, waiting on space to become available. This is by design.

However, the capacity of this queue is quite large, and the content repo drains 
the entire queue, then destroys all content claims that are on it. As a result, 
destroying the claims can take quite a long time, and Processors can block for 
an extended period, leading to very sporadic performance.

Instead, the content repo should pull from the queue and destroy the claims one 
at a time or in small batches, instead of draining the entire queue each time. 
This should result in much less sporadic behavior.
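The small-batch draining proposed above might be sketched roughly as follows. This is an illustrative assumption, not NiFi's actual implementation: the class name, the `MAX_BATCH` cap, and the use of a plain `BlockingQueue` of claims are all hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: pull destructable claims in small batches so that
// producers blocked on a full queue are released frequently, instead of
// waiting for one long drain-and-destroy pass over the entire queue.
public class BatchedClaimDestroyer {
    static final int MAX_BATCH = 100; // assumed cap, not NiFi's real value

    // Removes at most MAX_BATCH items per pass; callers destroy the batch,
    // then loop, so each pass stays short.
    public static <T> List<T> takeBatch(final BlockingQueue<T> queue) {
        final List<T> batch = new ArrayList<>();
        queue.drainTo(batch, MAX_BATCH);
        return batch;
    }
}
```

With a cap of 100, a queue of 250 claims would be destroyed in three short passes rather than one long one, so any blocked FlowFile Repository update can proceed between passes.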



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-683) When connecting to cluster, splash screen remains indefinitely if error sent back

2015-06-24 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599331#comment-14599331
 ] 

Mark Payne commented on NIFI-683:
-

Matt,

Looked at the code. All looks good. Returned the error message, as I expected.

Looks like you have a typo in the 'default' error message though: "An 
unexcepted error has occurred". That "unexcepted" should probably have been 
"unexpected" :)

Otherwise +1

Thanks
-Mark

 When connecting to cluster, splash screen remains indefinitely if error sent 
 back
 -

 Key: NIFI-683
 URL: https://issues.apache.org/jira/browse/NIFI-683
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Reporter: Mark Payne
Assignee: Matt Gilman
 Fix For: 0.2.0

 Attachments: 0001-NIFI-683.patch


 I created a cluster with the NCM running on my host and a single node running 
 in a VM. In the VM, I configured the node to report its fully qualified 
 hostname. The Host running the NCM, however, does not recognize the fully 
 qualified hostname of the VM and gets a Socket Read timeout. This is returned 
 to the UI as a 409: Conflict.
 The UI ignores this, leaving the splash screen. The UI should instead report 
 back an error, as it does if there are no connected nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-724) Controller Services and Reporting Tasks should be able to emit bulletins

2015-06-24 Thread Mark Payne (JIRA)
Mark Payne created NIFI-724:
---

 Summary: Controller Services and Reporting Tasks should be able to 
emit bulletins
 Key: NIFI-724
 URL: https://issues.apache.org/jira/browse/NIFI-724
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-719) For Reporting Tasks, ConfigurationContext should provide scheduling information

2015-06-24 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-719:

Attachment: 0001-NIFI-719-Expose-scheduling-period-to-the-Configurati.patch

 For Reporting Tasks, ConfigurationContext should provide scheduling 
 information
 

 Key: NIFI-719
 URL: https://issues.apache.org/jira/browse/NIFI-719
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-719-Expose-scheduling-period-to-the-Configurati.patch


 Currently the ConfigurationContext does not provide any information about 
 scheduling, as it was originally designed for Controller Services. However, 
 it is used also for Reporting Tasks, and should have methods Long 
 getSchedulingPeriod(TimeUnit timeUnit) and String getSchedulingPeriod() as 
 the ReportingInitializationContext does. These methods should return null for 
 Controller Services.
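The two accessors described above might look roughly like the sketch below. The class name and the millisecond-backed implementation are illustrative assumptions; only the two method signatures come from the issue text.

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the proposed accessors: for a Reporting Task the
// scheduling period is set; for a Controller Service both methods return null.
public class SchedulingContextSketch {
    private final Long periodMillis; // null when used for a Controller Service

    public SchedulingContextSketch(final Long periodMillis) {
        this.periodMillis = periodMillis;
    }

    // Period converted to the requested unit, or null for Controller Services.
    public Long getSchedulingPeriod(final TimeUnit timeUnit) {
        return periodMillis == null ? null : timeUnit.convert(periodMillis, TimeUnit.MILLISECONDS);
    }

    // Human-readable period string, or null for Controller Services.
    public String getSchedulingPeriod() {
        return periodMillis == null ? null : periodMillis + " millis";
    }
}
```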



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-719) For Reporting Tasks, ConfigurationContext should provide scheduling information

2015-06-24 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599321#comment-14599321
 ] 

Mark Payne commented on NIFI-719:
-

D'oh! Sorry about that. Provided new patch that fixes contrib-check failure.

 For Reporting Tasks, ConfigurationContext should provide scheduling 
 information
 

 Key: NIFI-719
 URL: https://issues.apache.org/jira/browse/NIFI-719
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-719-Expose-scheduling-period-to-the-Configurati.patch


 Currently the ConfigurationContext does not provide any information about 
 scheduling, as it was originally designed for Controller Services. However, 
 it is used also for Reporting Tasks, and should have methods Long 
 getSchedulingPeriod(TimeUnit timeUnit) and String getSchedulingPeriod() as 
 the ReportingInitializationContext does. These methods should return null for 
 Controller Services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-716) Framework allows you to define two properties with the same name

2015-06-23 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597770#comment-14597770
 ] 

Mark Payne commented on NIFI-716:
-

Dan,

All good ideas. We actually already have a ticket, NIFI-34, that deals with 
having the Mock Framework detect a lot of these types of things. Please check 
out that ticket and add anything to it that you think is appropriate for the 
mock framework to do.

re: forgetting to do something vs. a valid case of intentionally not doing 
it... I think we could handle that by having an override for 
TestRunners.newTestRunner(Processor proc) that allows us to pass in something 
like: TestRunners.newTestRunner(Processor proc, CodeQualityChecks checks) and 
then have some sort of builder that allows us to turn specific checks on/off. 
That way, we can explicitly tell the test runner that we know we did something 
different than usual, but we want it that way.
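The builder Mark describes might be sketched as below. None of these names exist in NiFi's mock framework; `CodeQualityChecks`, the `Check` enum values, and the all-on default are assumptions made purely to illustrate the idea of explicitly opting out of individual checks.

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical sketch of a CodeQualityChecks builder: all checks are on by
// default, and a test explicitly disables the ones it intends to violate.
public class CodeQualityChecks {
    public enum Check { DUPLICATE_PROPERTY_NAMES, DUPLICATE_DESCRIPTIONS }

    private final Set<Check> enabled;

    private CodeQualityChecks(final Set<Check> enabled) {
        this.enabled = enabled;
    }

    public boolean isEnabled(final Check check) {
        return enabled.contains(check);
    }

    public static Builder builder() {
        return new Builder();
    }

    public static class Builder {
        // every check enabled unless the test turns it off
        private final EnumSet<Check> enabled = EnumSet.allOf(Check.class);

        public Builder disable(final Check check) {
            enabled.remove(check);
            return this;
        }

        public CodeQualityChecks build() {
            return new CodeQualityChecks(EnumSet.copyOf(enabled));
        }
    }
}
```

A test that intentionally reuses a description would then pass something like `CodeQualityChecks.builder().disable(Check.DUPLICATE_DESCRIPTIONS).build()` to the hypothetical `newTestRunner` overload.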

 Framework allows you to define two properties with the same name
 

 Key: NIFI-716
 URL: https://issues.apache.org/jira/browse/NIFI-716
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.1.0
Reporter: Dan Bress
Priority: Minor
  Labels: beginner, newbie

 If you are lazy and copy and paste a PropertyDescriptor and forget to change 
 the value assigned to name(), the framework (test or regular) does not detect 
 this and proceeds happily.
 It would be great if this situation was detected as soon as possible, and 
 either mark the processor as invalid, or fail to consider it as a possible 
 processor.
 This applies to Processors, ControllerServices and ReportingTasks
 Example
 {code}
 public static final PropertyDescriptor MIN_SIZE = new 
 PropertyDescriptor.Builder()
 .name("Minimum Group Size")
 .description("The minimum size of for the bundle")
 .required(true)
 .defaultValue("0 B")
 .addValidator(StandardValidators.DATA_SIZE_VALIDATOR)
 .build();
 public static final PropertyDescriptor MAX_SIZE = new 
 PropertyDescriptor.Builder()
 .name("Minimum Group Size")
 .description("The maximum size for the bundle. If not specified, there is no 
 maximum.")
 .required(false)
 .addValidator(StandardValidators.DATA_SIZE_VALIDATOR)
 .build();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-674) FileSystemRepository should not create new threads in its constructor

2015-06-23 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597835#comment-14597835
 ] 

Mark Payne commented on NIFI-674:
-

I reviewed the changes to the FileSystemRepository. Code looks good. Build 
works with checkstyle and all unit tests. All appears to work properly (and 
better than before). +1

 FileSystemRepository should not create new threads in its constructor
 -

 Key: NIFI-674
 URL: https://issues.apache.org/jira/browse/NIFI-674
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.1.0
Reporter: Mark Payne
Assignee: Dan Bress
Priority: Minor
 Fix For: 0.2.0


 FileSystemRepository creates two different ExecutorServices in its 
 constructor. This is problematic because we iterate over the ServiceLoader, 
 which creates an instance of FileSystemRepository and then throws it away.
 Instead, these executors should be created in the initialize method.
 It should also be documented in the ContentRepository interface that any time 
 that initialize() is called, it is expected that the shutdown() method will 
 also be called, as this is where we can cleanup the things that we did during 
 initialization. This was the intended lifecycle but never was clearly 
 documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-716) Framework allows you to define two properties with the same name

2015-06-23 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597714#comment-14597714
 ] 

Mark Payne commented on NIFI-716:
-

Dan,

I know that others have had this issue, as well. I wouldn't be opposed to 
marking the processor as invalid, but I definitely don't think we should avoid 
showing the Processor at all. If we did that, I think it would make things MUCH 
more confusing when the processor didn't show up. Marking as invalid at least 
provides the ability to display a nice explanation about what is wrong.

We should also make sure that the mock framework fails the unit test if we do 
TestRunners.newTestRunner(new ProcessorWithDuplicateProperty());

That way, we should catch the issue immediately and provide a nice explanation 
of what happened. Along the same lines, we should make sure in the mock 
framework that no two properties have the exact same description, either, as I 
could see someone copying & pasting and changing the name but not the 
description (though I wouldn't make the processor invalid in this case, in the 
actual flow - only in the mock framework).
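The duplicate-name check the mock framework could run might look roughly like this. The class and method names are hypothetical, and real code would inspect `PropertyDescriptor` instances; plain name strings are used here to keep the sketch self-contained.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the check: scan the declared property names and
// report the first one that appears more than once, so the test runner can
// fail fast with a clear message.
public class DuplicatePropertyCheck {
    // Returns the first duplicated name, or null if all names are unique.
    public static String findDuplicate(final List<String> propertyNames) {
        final Set<String> seen = new HashSet<>();
        for (final String name : propertyNames) {
            if (!seen.add(name)) {
                return name;
            }
        }
        return null;
    }
}
```

Run against the NIFI-716 example, where both descriptors are named "Minimum Group Size", this would flag the duplicate as soon as `newTestRunner` is called.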

 Framework allows you to define two properties with the same name
 

 Key: NIFI-716
 URL: https://issues.apache.org/jira/browse/NIFI-716
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.1.0
Reporter: Dan Bress
Priority: Minor
  Labels: beginner, newbie

 If you are lazy and copy and paste a PropertyDescriptor and forget to change 
 the value assigned to name(), the framework (test or regular) does not detect 
 this and proceeds happily.
 It would be great if this situation was detected as soon as possible, and 
 either mark the processor as invalid, or fail to consider it as a possible 
 processor.
 Example
 {code}
 public static final PropertyDescriptor MIN_SIZE = new 
 PropertyDescriptor.Builder()
 .name("Minimum Group Size")
 .description("The minimum size of for the bundle")
 .required(true)
 .defaultValue("0 B")
 .addValidator(StandardValidators.DATA_SIZE_VALIDATOR)
 .build();
 public static final PropertyDescriptor MAX_SIZE = new 
 PropertyDescriptor.Builder()
 .name("Minimum Group Size")
 .description("The maximum size for the bundle. If not specified, there is no 
 maximum.")
 .required(false)
 .addValidator(StandardValidators.DATA_SIZE_VALIDATOR)
 .build();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (NIFI-532) Exception handling in RunNifi

2015-06-23 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne resolved NIFI-532.
-
Resolution: Fixed

This issue has been addressed as part of NIFI-488.

 Exception handling in RunNifi
 -

 Key: NIFI-532
 URL: https://issues.apache.org/jira/browse/NIFI-532
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Aldrin Piri
 Fix For: 0.2.0


 Exceptions that are thrown after the NiFiListener are started are not handled 
 appropriately and the application continues to run.  Additionally, these are 
 not logged anywhere when the application is launched via 'start', which 
 may be a byproduct of NIFI-488.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-716) Framework allows you to define two properties with the same name

2015-06-23 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597795#comment-14597795
 ] 

Mark Payne commented on NIFI-716:
-

haha, I thought you had copied & pasted it :)

Generated that ticket the last time that someone had this exact same problem of 
allowing two properties with the same name. Glad we came to the same 
conclusions!

 Framework allows you to define two properties with the same name
 

 Key: NIFI-716
 URL: https://issues.apache.org/jira/browse/NIFI-716
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.1.0
Reporter: Dan Bress
Priority: Minor
  Labels: beginner, newbie

 If you are lazy and copy and paste a PropertyDescriptor and forget to change 
 the value assigned to name(), the framework (test or regular) does not detect 
 this and proceeds happily.
 It would be great if this situation was detected as soon as possible, and 
 either mark the processor as invalid, or fail to consider it as a possible 
 processor.
 This applies to Processors, ControllerServices and ReportingTasks
 Example
 {code}
 public static final PropertyDescriptor MIN_SIZE = new 
 PropertyDescriptor.Builder()
 .name("Minimum Group Size")
 .description("The minimum size of for the bundle")
 .required(true)
 .defaultValue("0 B")
 .addValidator(StandardValidators.DATA_SIZE_VALIDATOR)
 .build();
 public static final PropertyDescriptor MAX_SIZE = new 
 PropertyDescriptor.Builder()
 .name("Minimum Group Size")
 .description("The maximum size for the bundle. If not specified, there is no 
 maximum.")
 .required(false)
 .addValidator(StandardValidators.DATA_SIZE_VALIDATOR)
 .build();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-683) When connecting to cluster, splash screen remains indefinitely if error sent back

2015-06-23 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597694#comment-14597694
 ] 

Mark Payne commented on NIFI-683:
-

[~mcgilman] I'm not sure which request timed out at this point. I did see in 
the NCM's logs that it was a SocketTimeoutException that was causing a 409 to 
be returned. This occurred after I attempted to go to 
http://localhost:8080/nifi.

 When connecting to cluster, splash screen remains indefinitely if error sent 
 back
 -

 Key: NIFI-683
 URL: https://issues.apache.org/jira/browse/NIFI-683
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Reporter: Mark Payne
 Fix For: 0.2.0


 I created a cluster with the NCM running on my host and a single node running 
 in a VM. In the VM, I configured the node to report its fully qualified 
 hostname. The Host running the NCM, however, does not recognize the fully 
 qualified hostname of the VM and gets a Socket Read timeout. This is returned 
 to the UI as a 409: Conflict.
 The UI ignores this, leaving the splash screen. The UI should instead report 
 back an error, as it does if there are no connected nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-718) nifi.sh install does not properly install nifi as a linux service

2015-06-23 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-718:

Attachment: 0001-NIFI-718-Add-links-to-etc-rc2.d-when-installing-nifi.patch

 nifi.sh install does not properly install nifi as a linux service
 -

 Key: NIFI-718
 URL: https://issues.apache.org/jira/browse/NIFI-718
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Priority: Critical
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-718-Add-links-to-etc-rc2.d-when-installing-nifi.patch


 After running bin/nifi.sh install, users can now start and stop nifi as a 
 service by using "service nifi start" and "service nifi stop". However, the 
 service is not recognized by systemctl or chkconfig, and the service does not 
 start automatically on system reboot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-565) System-Level bulletins generated on nodes do not show in the UI when clustered

2015-06-23 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597632#comment-14597632
 ] 

Mark Payne commented on NIFI-565:
-

[~mcgilman] what I was doing here was to create a MonitorMemory Reporting Task 
and set it to alert at 1% of heap used. I saw output in the logs, but the 
bulletins never showed up. When I ran in standalone, they showed up at the 
controller level, as expected.

 System-Level bulletins generated on nodes do not show in the UI when clustered
 --

 Key: NIFI-565
 URL: https://issues.apache.org/jira/browse/NIFI-565
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework, Core UI
Reporter: Mark Payne
Assignee: Matt Gilman
 Fix For: 0.2.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-718) nifi.sh install does not properly install nifi as a linux service

2015-06-23 Thread Mark Payne (JIRA)
Mark Payne created NIFI-718:
---

 Summary: nifi.sh install does not properly install nifi as a linux 
service
 Key: NIFI-718
 URL: https://issues.apache.org/jira/browse/NIFI-718
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Priority: Critical
 Fix For: 0.2.0


After running bin/nifi.sh install, users can now start and stop nifi as a 
service by using "service nifi start" and "service nifi stop". However, the 
service is not recognized by systemctl or chkconfig, and the service does not 
start automatically on system reboot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-717) nifi-bootstrap.log written to directory relative to current working directory

2015-06-23 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597932#comment-14597932
 ] 

Mark Payne commented on NIFI-717:
-

Yup, I agree. I just don't know what the best solution is. Any ideas on how to 
fix it?

 nifi-bootstrap.log written to directory relative to current working directory
 -

 Key: NIFI-717
 URL: https://issues.apache.org/jira/browse/NIFI-717
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman
Priority: Minor
 Fix For: 0.2.0


 It appears that nifi-bootstrap.log is written to a directory that is relative 
 to the current working directory. If NiFi is launched from outside $NIFI_HOME 
 the logs end up outside of $NIFI_HOME. It is confusing since it's configured 
 to be written to logs/ just like nifi-app.log and nifi-user.log but it is 
 written to logs/ in a different location.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-34) Mock Framework should provide option to detect common bad practices/bugs

2015-06-23 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597936#comment-14597936
 ] 

Mark Payne commented on NIFI-34:


I would actually argue that these are both okay. No reason that we need to 
build the objects ahead of time and return immutable objects. Some may consider 
that a best practice, but I usually construct a new List/Set each time, because 
I believe the code is cleaner and easier to understand. It's also fairly common 
to write a processor where one or both of these are dynamic.

 Mock Framework should provide option to detect common bad practices/bugs
 

 Key: NIFI-34
 URL: https://issues.apache.org/jira/browse/NIFI-34
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Reporter: Mark Payne
Priority: Minor

 Mock Framework should detect common errors such as:
 * Processor has member variable that is a PropertyDescriptor, but the 
 PropertyDescriptor isn't returned in the list of supported property 
 descriptors.
 * Processor has member variable that is a Relationship, but the Relationship 
 isn't returned in the Set of Relationships.
 * Processor has multiple properties or relationships as member variables with 
 the same name.
 * No META-INF/services file
 * META-INF/services file doesn't contain the Component's Fully Qualified 
 Class Name
 * No @CapabilityDescription annotation
 * No @Tags annotation
 Mock Framework should automatically detect these things and fail the unit 
 test unless checking is disabled. This requires building an object that 
 allows developer to enable/disable each of these checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-680) Processor docs don't always need to mention Sensitive properties or EL

2015-06-23 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597958#comment-14597958
 ] 

Mark Payne commented on NIFI-680:
-

+1: Looks good, Dan. Thanks for calling out specific extensions that are good 
points of reference. Definitely makes it a lot easier to verify!

Thanks
-Mark

 Processor docs don't always need to mention Sensitive properties or EL
 --

 Key: NIFI-680
 URL: https://issues.apache.org/jira/browse/NIFI-680
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Affects Versions: 0.1.0
Reporter: Mike Drob
Assignee: Dan Bress
Priority: Minor
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-680-Processor-docs-don-t-always-need-to-mention.patch


 If a processor doesn't have any properties that use the Apache NiFi EL or 
 have any sensitive properties, then there is no reason to mention them in the 
 preamble to the attribute table on the processor documentation. I'd like to 
 imagine that this can all be auto-detected.
 For example, on 
 https://nifi.incubator.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.HashContent/index.html
  the paragraph could be reduced to:
 {quote}
 In the list below, the names of required properties appear in bold. Any other 
 properties (not in bold) are considered optional. The table also indicates 
 any default values.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-720) If Reporting Task fails to start properly and is then stopped, it can continue to run once it is able to start

2015-06-23 Thread Mark Payne (JIRA)
Mark Payne created NIFI-720:
---

 Summary: If Reporting Task fails to start properly and is then 
stopped, it can continue to run once it is able to start
 Key: NIFI-720
 URL: https://issues.apache.org/jira/browse/NIFI-720
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
 Fix For: 0.2.0


Steps to replicate:

Create a MonitorMemory reporting task.
Set an invalid value for the Memory Pool
Start the reporting task
See that errors are logged indicating that it couldn't start properly
Stop Reporting Task
Change Memory Pool to a valid value

MonitorMemory will begin to run. Clicking Start will then cause two threads to 
run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-719) For Reporting Tasks, ConfigurationContext should provide scheduling information

2015-06-23 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-719:

Attachment: 0001-NIFI-719-Expose-scheduling-period-to-the-Configurati.patch

 For Reporting Tasks, ConfigurationContext should provide scheduling 
 information
 

 Key: NIFI-719
 URL: https://issues.apache.org/jira/browse/NIFI-719
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-719-Expose-scheduling-period-to-the-Configurati.patch


 Currently the ConfigurationContext does not provide any information about 
 scheduling, as it was originally designed for Controller Services. However, 
 it is used also for Reporting Tasks, and should have methods Long 
 getSchedulingPeriod(TimeUnit timeUnit) and String getSchedulingPeriod() as 
 the ReportingInitializationContext does. These methods should return null for 
 Controller Services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-720) If Reporting Task fails to start properly and is then stopped, it can continue to run once it is able to start

2015-06-23 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14598625#comment-14598625
 ] 

Mark Payne commented on NIFI-720:
-

[~mcgilman] sorry, cleaned up some code and lost my 'return' statement. 
Attached a new patch that will hopefully work better for you :)

 If Reporting Task fails to start properly and is then stopped, it can 
 continue to run once it is able to start
 --

 Key: NIFI-720
 URL: https://issues.apache.org/jira/browse/NIFI-720
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-720-Ensure-that-if-Reporting-Task-stopped-while.patch


 Steps to replicate:
 Create a MonitorMemory reporting task.
 Set an invalid value for the Memory Pool
 Start the reporting task
 See that errors are logged indicating that it couldn't start properly
 Stop Reporting Task
 Change Memory Pool to a valid value
 MonitorMemory will begin to run. Clicking Start will then cause two threads to 
 run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-720) If Reporting Task fails to start properly and is then stopped, it can continue to run once it is able to start

2015-06-23 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-720:

Attachment: (was: 
0001-NIFI-720-Ensure-that-if-Reporting-Task-stopped-while.patch)

 If Reporting Task fails to start properly and is then stopped, it can 
 continue to run once it is able to start
 --

 Key: NIFI-720
 URL: https://issues.apache.org/jira/browse/NIFI-720
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-720-Ensure-that-if-Reporting-Task-stopped-while.patch


 Steps to replicate:
 Create a MonitorMemory reporting task.
 Set an invalid value for the Memory Pool
 Start the reporting task
 See that errors are logged indicating that it couldn't start properly
 Stop Reporting Task
 Change Memory Pool to a valid value
 MonitorMemory will begin to run. Clicking Start will then cause two threads to 
 run.





[jira] [Updated] (NIFI-720) If Reporting Task fails to start properly and is then stopped, it can continue to run once it is able to start

2015-06-23 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-720:

Attachment: 0001-NIFI-720-Ensure-that-if-Reporting-Task-stopped-while.patch

 If Reporting Task fails to start properly and is then stopped, it can 
 continue to run once it is able to start
 --

 Key: NIFI-720
 URL: https://issues.apache.org/jira/browse/NIFI-720
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-720-Ensure-that-if-Reporting-Task-stopped-while.patch


 Steps to replicate:
 Create a MonitorMemory reporting task.
 Set an invalid value for the Memory Pool
 Start the reporting task
 See that errors are logged indicating that it couldn't start properly
 Stop Reporting Task
 Change Memory Pool to a valid value
 MonitorMemory will begin to run. Clicking Start will then cause two threads to 
 run.





[jira] [Commented] (NIFI-719) For Reporting Tasks, ConfigurationContext should provide scheduling information

2015-06-23 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14598633#comment-14598633
 ] 

Mark Payne commented on NIFI-719:
-

The MonitorMemory reporting task and potentially others should also make use of 
this value instead of looking at the value in the init method, as this can 
change from one scheduling of the component to the next.

 For Reporting Tasks, ConfigurationContext should provide scheduling 
 information
 

 Key: NIFI-719
 URL: https://issues.apache.org/jira/browse/NIFI-719
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
 Fix For: 0.2.0


 Currently the ConfigurationContext does not provide any information about 
 scheduling, as it was originally designed for Controller Services. However, 
 it is used also for Reporting Tasks, and should have methods Long 
 getSchedulingPeriod(TimeUnit timeUnit) and String getSchedulingPeriod() as 
 the ReportingInitializationContext does. These methods should return null for 
 Controller Services.
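The proposal above can be sketched in plain Java. This is an illustrative stand-in, not NiFi's actual ConfigurationContext interface: the interface and class names below are invented for the sketch, the method names mirror the ones the ticket describes, and the null-for-Controller-Services contract is shown only in a comment.

```java
import java.util.concurrent.TimeUnit;

// Illustrative stand-in for the proposal; NOT the real
// org.apache.nifi.controller.ConfigurationContext interface.
interface SchedulingAwareContext {
    // Proposed: return the scheduling period, or null when the context
    // belongs to a Controller Service (which is not scheduled).
    Long getSchedulingPeriod(TimeUnit timeUnit);
    String getSchedulingPeriod();
}

public class SchedulingContextSketch {
    public static void main(String[] args) {
        // A Reporting Task context carries its configured period...
        SchedulingAwareContext reportingTaskContext = new SchedulingAwareContext() {
            @Override
            public Long getSchedulingPeriod(TimeUnit unit) {
                return unit.convert(5, TimeUnit.MINUTES);
            }

            @Override
            public String getSchedulingPeriod() {
                return "5 mins";
            }
        };

        // ...so a component like MonitorMemory can read the current period
        // each time it is scheduled, instead of caching it in init().
        System.out.println(reportingTaskContext.getSchedulingPeriod(TimeUnit.SECONDS)); // prints: 300
    }
}
```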





[jira] [Commented] (NIFI-704) StandardProcessorTestRunner should allow you to wait before calling OnUnScheduled methods

2015-06-22 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14596149#comment-14596149
 ] 

Mark Payne commented on NIFI-704:
-

Dan,

Nope, I think the change is fine. Build still works fine. All unit tests pass. 
Will merge to develop now.

Thanks!
-Mark

 StandardProcessorTestRunner should allow you to wait before calling 
 OnUnScheduled methods
 -

 Key: NIFI-704
 URL: https://issues.apache.org/jira/browse/NIFI-704
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Tools and Build
Affects Versions: 0.1.0
Reporter: Dan Bress
Assignee: Dan Bress
Priority: Minor
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-704-StandardProcessorTestRunner-should-allow-yo.patch


 [StandardProcessorTestRunner does not 
 wait|https://github.com/apache/incubator-nifi/blob/develop/nifi/nifi-mock/src/main/java/org/apache/nifi/util/StandardProcessorTestRunner.java#L208-L210]
  for the processor 'run' calls to finish before invoking the @OnUnscheduled 
 methods. This may result in the Processor 'run' calls behaving unexpectedly, 
 because @OnUnscheduled has already been called.
 Notice that 
 [ExecutorService.shutdown()|http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorService.html#shutdown%28%29]
  says _This method does not wait for previously submitted tasks to complete 
 execution. Use awaitTermination to do that._
 I would suggest that the StandardProcessorTestRunner either always wait for 
 the processor run calls to finish, or let you specify an amount of time to 
 wait for them to finish.
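The fix suggested above boils down to pairing shutdown() with awaitTermination(). A minimal, self-contained sketch of that pattern (the class name, task bodies, and timeout are illustrative, not the test runner's actual code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AwaitBeforeUnscheduled {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // Simulate in-flight processor 'run' calls still doing work.
        for (int i = 0; i < 2; i++) {
            executor.submit(() -> {
                try {
                    Thread.sleep(200); // stand-in for onTrigger work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // shutdown() only stops new submissions; per the Javadoc it does
        // NOT wait for previously submitted tasks to complete.
        executor.shutdown();

        // awaitTermination() is the missing step: block until the in-flight
        // tasks finish (or the timeout elapses).
        boolean finished = executor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("run calls finished: " + finished); // prints: run calls finished: true

        // Only after this point would it be safe to invoke the
        // @OnUnscheduled methods.
    }
}
```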





[jira] [Updated] (NIFI-378) MergeContent in Defragment mode will merge fragments without checking index

2015-06-22 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-378:

Attachment: 0001-NIFI-378-updated-documentation-to-explain-contract-o.patch

 MergeContent in Defragment mode will merge fragments without checking index
 ---

 Key: NIFI-378
 URL: https://issues.apache.org/jira/browse/NIFI-378
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 0.0.1
Reporter: Michael Moser
Assignee: Joseph Witt
Priority: Minor
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-378-updated-documentation-to-explain-contract-o.patch


 When in Defragment mode, the MergeContent processor looks for 
 fragment.identifier and fragment.count attributes in order to place FlowFiles 
 in the correct bin.  The fragment.index attribute is ignored.
 If you happen to have many FlowFiles in the queue to MergeContent, and they 
 all have fragment.identifier=foo and fragment.count=2, then it will merge two 
 FlowFiles that have fragment.index=1 or it will merge two FlowFiles that have 
 fragment.index=2.
 Granted this may seem odd.  The use case is to give the MergeContent 
 processor two input queues.  We configure one queue to contain files with 
 fragment.index=1 and the other queue to contain files with fragment.index=2.  
 We want one file from each queue to be merged.  Instead it will merge two 
 files from the same queue.





[jira] [Updated] (NIFI-711) Exception thrown when routing FlowFile to multiple connections

2015-06-22 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-711:

Attachment: 0001-NIFI-711-Do-not-check-status-of-FlowFile-when-emitti.patch

 Exception thrown when routing FlowFile to multiple connections
 --

 Key: NIFI-711
 URL: https://issues.apache.org/jira/browse/NIFI-711
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.2.0
Reporter: Mark Payne
Assignee: Mark Payne
Priority: Blocker
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-711-Do-not-check-status-of-FlowFile-when-emitti.patch


 This issue appears to have been created by the solution for NIFI-37. Creating 
 a new ticket for the issue instead of updating NIFI-37, since it was already 
 pushed to develop, and I would like to have a ticket outlining the problem in 
 case others run into it.





[jira] [Commented] (NIFI-578) FlowFileNode does not set clusterNodeIdentifier

2015-06-22 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14596392#comment-14596392
 ] 

Mark Payne commented on NIFI-578:
-

[~mcgilman] the getter and setter don't appear to be used anywhere, and the 
getter is always returning null at this point, right? I'd recommend that you 
just mark them as deprecated in the nifi-api and we will remove the methods 
altogether in 1.0.0

 FlowFileNode does not set clusterNodeIdentifier
 ---

 Key: NIFI-578
 URL: https://issues.apache.org/jira/browse/NIFI-578
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.0.2
Reporter: Mark Latimer
Assignee: Matt Gilman
Priority: Minor
 Fix For: 0.2.0


 The FlowFileNode class has a field to store the clusterNodeIdentifier, along 
 with a getClusterNodeIdentifier method, but the value is never set.
 The method is called in dtoFactory on types implementing LineageNode.
 Presumably some places expecting a value for the cluster node id are instead 
 blank. 
 It is not obvious to me what the cluster node id should be or where to get it 
 from.





[jira] [Created] (NIFI-714) Should have ability to link to documentation for a specific processor

2015-06-22 Thread Mark Payne (JIRA)
Mark Payne created NIFI-714:
---

 Summary: Should have ability to link to documentation for a 
specific processor
 Key: NIFI-714
 URL: https://issues.apache.org/jira/browse/NIFI-714
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Reporter: Mark Payne


If I want to create a link to the documentation of a specific processor (for 
example ExecuteStreamCommand), I should be able to go to 
http://nifi.incubator.apache.org/docs.html#ExecuteStreamCommand

Appending #ProcessorType to the URL works in a running instance of NiFi, but it 
does not work on the static page rendered by nifi.incubator.apache.org.





[jira] [Reopened] (NIFI-37) If provenance receive/create/fork/join/clone event registered against flowfile not created in session...

2015-06-22 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne reopened NIFI-37:


 If provenance receive/create/fork/join/clone event registered against 
 flowfile not created in session...
 

 Key: NIFI-37
 URL: https://issues.apache.org/jira/browse/NIFI-37
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Joseph Witt
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-37-Ensure-that-prov-events-that-are-emitted-are.patch


 If provenance receive/create/fork/join/clone event registered against 
 flowfile not created in session should throw FlowFileHandlingException
 **Need to ensure this is done in the Mock Framework as well**





[jira] [Created] (NIFI-711) Exception thrown when routing FlowFile to multiple connections

2015-06-22 Thread Mark Payne (JIRA)
Mark Payne created NIFI-711:
---

 Summary: Exception thrown when routing FlowFile to multiple 
connections
 Key: NIFI-711
 URL: https://issues.apache.org/jira/browse/NIFI-711
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.2.0
Reporter: Mark Payne
Assignee: Mark Payne
Priority: Blocker
 Fix For: 0.2.0


This issue appears to have been created by the solution for NIFI-37. Creating a 
new ticket for the issue instead of updating NIFI-37, since it was already 
pushed to develop, and I would like to have a ticket outlining the problem in 
case others run into it.





[jira] [Updated] (NIFI-680) Processor docs don't always need to mention Sensitive properties or EL

2015-06-22 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-680:

Fix Version/s: 0.2.0

 Processor docs don't always need to mention Sensitive properties or EL
 --

 Key: NIFI-680
 URL: https://issues.apache.org/jira/browse/NIFI-680
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Affects Versions: 0.1.0
Reporter: Mike Drob
Assignee: Dan Bress
Priority: Minor
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-680-Processor-docs-don-t-always-need-to-mention.patch


 If a processor doesn't have any properties that use the Apache NiFi EL or 
 have any sensitive properties, then there is no reason to mention them in the 
 preamble to the attribute table on the processor documentation. I'd like to 
 imagine that this can all be auto-detected.
 For example, on 
 https://nifi.incubator.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.HashContent/index.html
  the paragraph could be reduced to:
 {quote}
 In the list below, the names of required properties appear in bold. Any other 
 properties (not in bold) are considered optional. The table also indicates 
 any default values.
 {quote}





[jira] [Commented] (NIFI-714) Should have ability to link to documentation for a specific processor

2015-06-22 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14596683#comment-14596683
 ] 

Mark Payne commented on NIFI-714:
-

Dan, I'd like to have the context on the left. The link you provided is quite 
nice, but that context on the left would be nice too :)

 Should have ability to link to documentation for a specific processor
 -

 Key: NIFI-714
 URL: https://issues.apache.org/jira/browse/NIFI-714
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Reporter: Mark Payne

 If I want to create a link to the documentation of a specific processor (for 
 example ExecuteStreamCommand), I should be able to go to 
 http://nifi.incubator.apache.org/docs.html#ExecuteStreamCommand
 Appending #ProcessorType to the URL works in a running instance of NiFi, but 
 it does not work on the static page rendered by nifi.incubator.apache.org.





[jira] [Commented] (NIFI-578) FlowFileNode does not set clusterNodeIdentifier

2015-06-22 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14596611#comment-14596611
 ] 

Mark Payne commented on NIFI-578:
-

[~mcgilman] +1, looks good :)

 FlowFileNode does not set clusterNodeIdentifier
 ---

 Key: NIFI-578
 URL: https://issues.apache.org/jira/browse/NIFI-578
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.0.2
Reporter: Mark Latimer
Assignee: Matt Gilman
Priority: Minor
 Fix For: 0.2.0

 Attachments: 0001-NIFI-578.patch


 The FlowFileNode class has a field to store the clusterNodeIdentifier, along 
 with a getClusterNodeIdentifier method, but the value is never set.
 The method is called in dtoFactory on types implementing LineageNode.
 Presumably some places expecting a value for the cluster node id are instead 
 blank. 
 It is not obvious to me what the cluster node id should be or where to get it 
 from.





[jira] [Commented] (NIFI-642) Eliminate hardcoded HDFS compression codec classnames.

2015-06-22 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14596354#comment-14596354
 ] 

Mark Payne commented on NIFI-642:
-

Thanks, Tim. Will close this ticket out.

 Eliminate hardcoded HDFS compression codec classnames.
 --

 Key: NIFI-642
 URL: https://issues.apache.org/jira/browse/NIFI-642
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Tim Reardon
Priority: Minor
 Fix For: 0.2.0


 Available HDFS compression codec classes were hardcoded as part of NIFI-600. 
 This ticket will allow PutHDFS to discover the available codecs via 
 CompressionCodecFactory, and allow GetHDFS to choose the codec to use based 
 on file extension.





[jira] [Commented] (NIFI-642) Eliminate hardcoded HDFS compression codec classnames.

2015-06-22 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14595898#comment-14595898
 ] 

Mark Payne commented on NIFI-642:
-

I can't argue there. It's important to remember, too, when implementing this 
that the associated Provenance SEND event needs to also indicate the filename 
that was used to send the data to HDFS.

 Eliminate hardcoded HDFS compression codec classnames.
 --

 Key: NIFI-642
 URL: https://issues.apache.org/jira/browse/NIFI-642
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Tim Reardon
Priority: Minor
 Fix For: 0.2.0


 Available HDFS compression codec classes were hardcoded as part of NIFI-600. 
 This ticket will allow PutHDFS to discover the available codecs via 
 CompressionCodecFactory, and allow GetHDFS to choose the codec to use based 
 on file extension.





[jira] [Updated] (NIFI-545) DataFlowDaoImpl writeDataFlow creates an unused dataflow

2015-06-22 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-545:

Attachment: 0001-NIFI-545-Code-cleanup.patch

 DataFlowDaoImpl writeDataFlow creates an unused dataflow
 

 Key: NIFI-545
 URL: https://issues.apache.org/jira/browse/NIFI-545
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.0.2
Reporter: Mark Latimer
Assignee: Mark Payne
Priority: Trivial
 Fix For: 0.2.0

 Attachments: 0001-NIFI-545-Code-cleanup.patch


 {code:title=DataFlowDaoImpl.java}
 private void writeDataFlow(final File file, final ClusterDataFlow clusterDataFlow) throws IOException, JAXBException {
     // get the data flow
     DataFlow dataFlow = clusterDataFlow.getDataFlow();

     // if no dataflow, then write a new dataflow
     if (dataFlow == null) {
         // dataFlow is created here but never used
         dataFlow = new StandardDataFlow(new byte[0], new byte[0], new byte[0]);
     }

     // setup the cluster metadata
     final ClusterMetadata clusterMetadata = new ClusterMetadata();
     clusterMetadata.setPrimaryNodeId(clusterDataFlow.getPrimaryNodeId());

     // write the unmodified clusterDataFlow to disk; if its dataFlow element
     // is null, getEmptyFlowBytes() is written instead
     writeDataFlow(file, clusterDataFlow, clusterMetadata);
 }
 {code}
 It is not clear whether the null check is ever true, but if it is, the newly 
 created DataFlow is never used. This is another FindBugs-reported issue.





[jira] [Commented] (NIFI-604) ExecuteStreamCommand does not support arguments with semicolons

2015-06-22 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14595903#comment-14595903
 ] 

Mark Payne commented on NIFI-604:
-

[~rickysaltzer] just wanted to ping you on this. Any thoughts on the comments 
that I added above?

Thanks
-Mark

 ExecuteStreamCommand does not support arguments with semicolons 
 

 Key: NIFI-604
 URL: https://issues.apache.org/jira/browse/NIFI-604
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 0.1.0
Reporter: Ricky Saltzer
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: NIFI-604.1.patch, NIFI-604.2.patch


 The following code in ExecuteStreamCommand assumes you're not passing 
 semicolons within your argument. This is a problem for people who need to 
 pass semicolons to the executing program as part of the argument. 
 {code}
  for (String arg : commandArguments.split(";")) { 
 {code}
 To allow for escaped semicolons, I propose we change this to the following 
 regex.
 {code}
  for (String arg : commandArguments.split("[^\\];")) { 
 {code}
 *or*... could we just change the way arguments are passed to make it more 
 similar to how ExecuteCommand works? The whole semicolon per argument took 
 some getting used to, and doesn't seem very intuitive. 
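As an illustration of the escaping question (a sketch, not the patch that was actually applied): a character-class pattern such as [^\\]; consumes the character before the semicolon, so that character is dropped from the resulting token; a zero-width negative lookbehind avoids this. The argument string below is invented for the demo.

```java
public class SemicolonSplit {
    public static void main(String[] args) {
        // Hypothetical argument string containing escaped semicolons.
        String commandArguments = "arg1;path\\;with\\;semis;arg3";

        // "(?<!\\\\);" matches only semicolons NOT preceded by a backslash,
        // and, being zero-width on the left, does not consume the character
        // before the delimiter the way a "[^\\\\];" character class would.
        for (String arg : commandArguments.split("(?<!\\\\);")) {
            // Un-escape the semicolons inside the surviving token.
            System.out.println(arg.replace("\\;", ";"));
        }
        // prints:
        // arg1
        // path;with;semis
        // arg3
    }
}
```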





[jira] [Commented] (NIFI-482) users are currently able to evaluate a function against the result of an aggregate function

2015-06-20 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14594545#comment-14594545
 ] 

Mark Payne commented on NIFI-482:
-

Ha, great catch! I have to admit, I verified that I saw the "Expression attempts 
to call" error and assumed the rest was okay. I know what the issue is. Will address.

Thanks!
-Mark

 users are currently able to evaluate a function against the result of an 
 aggregate function
 

 Key: NIFI-482
 URL: https://issues.apache.org/jira/browse/NIFI-482
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.0.2
Reporter: Ben Icore
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-482-Allowed-for-new-literal-function.-Make-expr.patch


 Users should not be able to evaluate a function against the result of an 
 aggregate function, as it will often yield indeterminate results.
 
 Such an expression should be invalid.





[jira] [Commented] (NIFI-482) users are currently able to evaluate a function against the result of an aggregate function

2015-06-20 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14594622#comment-14594622
 ] 

Mark Payne commented on NIFI-482:
-

[~mcgilman] I replaced the patch with a new one. To make the error message 
appropriate, I had to do some code cleanup (I was using an Abstract class in a 
lot of places where I should have been using the interface). So I went ahead 
and continued on with the cleanup. Feel free to verify that it works as 
expected now.

Thanks!
-Mark

 users are currently able to evaluate a function against the result of an 
 aggregate function
 

 Key: NIFI-482
 URL: https://issues.apache.org/jira/browse/NIFI-482
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.0.2
Reporter: Ben Icore
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-482-Allowed-for-new-literal-function.-Make-expr.patch


 Users should not be able to evaluate a function against the result of an 
 aggregate function, as it will often yield indeterminate results.
 
 Such an expression should be invalid.





[jira] [Updated] (NIFI-482) users are currently able to evaluate a function against the result of an aggregate function

2015-06-20 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-482:

Attachment: (was: 
0001-NIFI-482-Allowed-for-new-literal-function.-Make-expr.patch)

 users are currently able to evaluate a function against the result of an 
 aggregate function
 

 Key: NIFI-482
 URL: https://issues.apache.org/jira/browse/NIFI-482
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 0.0.2
Reporter: Ben Icore
Assignee: Mark Payne
 Fix For: 0.2.0

 Attachments: 
 0001-NIFI-482-Allowed-for-new-literal-function.-Make-expr.patch


 Users should not be able to evaluate a function against the result of an 
 aggregate function, as it will often yield indeterminate results.
 
 Such an expression should be invalid.




