[jira] [Comment Edited] (NIFI-3383) MonitorMemory produces UnsupportedOperationException

2017-01-27 Thread Avish Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843968#comment-15843968
 ] 

Avish Saha edited comment on NIFI-3383 at 1/28/17 7:48 AM:
---

I cloned the Git mirror for Apache NiFi (https://github.com/apache/nifi), did a 
build, and while trying to set MonitorMemory for the Eden or Survivor space I 
was able to replicate this issue locally - 

2017-01-28 12:32:01,275 ERROR [StandardProcessScheduler Thread-5] 
org.apache.nifi.controller.MonitorMemory 
java.lang.UnsupportedOperationException: Usage threshold is not supported
at 
sun.management.MemoryPoolImpl.setUsageThreshold(MemoryPoolImpl.java:114) 
~[na:1.8.0_101]
at 
org.apache.nifi.controller.MonitorMemory.onConfigured(MonitorMemory.java:178) 
~[na:na]
2017-01-28 12:32:01,280 ERROR [StandardProcessScheduler Thread-5] 
o.a.n.c.s.StandardProcessScheduler Failed to invoke the On-Scheduled Lifecycle 
methods of [MonitorMemory[id=e3e0dfa1-0159-1000-e04e-7a0312f0e017], 
java.lang.reflect.InvocationTargetException, 30 sec] due to {}; 
administratively yielding this ReportingTask and will attempt to schedule it 
again after {}
java.lang.reflect.InvocationTargetException: null
Caused by: java.lang.UnsupportedOperationException: Usage threshold is not 
supported
at 
sun.management.MemoryPoolImpl.setUsageThreshold(MemoryPoolImpl.java:114) 
~[na:1.8.0_101]
at 
org.apache.nifi.controller.MonitorMemory.onConfigured(MonitorMemory.java:178) 
~[na:na]
... 16 common frames omitted
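
For reference, the constraint behind this exception is visible through the standard 
java.lang.management API: the Eden and Survivor pools normally report that usage 
thresholds are unsupported, and calling setUsageThreshold on such a pool throws 
UnsupportedOperationException. A minimal standalone probe (plain JDK code, not part 
of NiFi) to see which pools support thresholds on a given JVM:

{code}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class UsageThresholdProbe {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Eden and Survivor spaces typically return false here; calling
            // pool.setUsageThreshold(...) on them throws UnsupportedOperationException.
            System.out.println(pool.getName() + " supports usage threshold: "
                    + pool.isUsageThresholdSupported());
        }
    }
}
{code}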



was (Author: avish.saha):
I cloned the GIT mirror for Apache NiFi (https://github.com/apache/nifi), did a 
build and while trying to set MonitorMemory for Eden or Survivor space, I was 
able to replicate this issue locally - 

2017-01-28 12:32:01,275 ERROR [StandardProcessScheduler Thread-5] 
org.apache.nifi.controller.MonitorMemory 
java.lang.UnsupportedOperationException: Usage threshold is not supported
at 
sun.management.MemoryPoolImpl.setUsageThreshold(MemoryPoolImpl.java:114) 
~[na:1.8.0_101]
at 
org.apache.nifi.controller.MonitorMemory.onConfigured(MonitorMemory.java:178) 
~[na:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[na:1.8.0_101]
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[na:1.8.0_101]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[na:1.8.0_101]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_101]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
 ~[na:na]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
 ~[na:na]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
 ~[na:na]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
 ~[na:na]
at 
org.apache.nifi.controller.scheduling.StandardProcessScheduler$2.run(StandardProcessScheduler.java:213)
 ~[na:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_101]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[na:1.8.0_101]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_101]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [na:1.8.0_101]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_101]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
2017-01-28 12:32:01,280 ERROR [StandardProcessScheduler Thread-5] 
o.a.n.c.s.StandardProcessScheduler Failed to invoke the On-Scheduled Lifecycle 
methods of [MonitorMemory[id=e3e0dfa1-0159-1000-e04e-7a0312f0e017], 
java.lang.reflect.InvocationTargetException, 30 sec] due to {}; 
administratively yielding this ReportingTask and will attempt to schedule it 
again after {}
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[na:1.8.0_101]
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[na:1.8.0_101]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[na:1.8.0_101]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_101]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
 ~[na:na]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
 ~[na:na]

[jira] [Commented] (NIFI-3383) MonitorMemory produces UnsupportedOperationException

2017-01-27 Thread Avish Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843968#comment-15843968
 ] 

Avish Saha commented on NIFI-3383:
--

I cloned the Git mirror for Apache NiFi (https://github.com/apache/nifi), did a 
build, and while trying to set MonitorMemory for the Eden or Survivor space I 
was able to replicate this issue locally - 

2017-01-28 12:32:01,275 ERROR [StandardProcessScheduler Thread-5] 
org.apache.nifi.controller.MonitorMemory 
java.lang.UnsupportedOperationException: Usage threshold is not supported
at 
sun.management.MemoryPoolImpl.setUsageThreshold(MemoryPoolImpl.java:114) 
~[na:1.8.0_101]
at 
org.apache.nifi.controller.MonitorMemory.onConfigured(MonitorMemory.java:178) 
~[na:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[na:1.8.0_101]
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[na:1.8.0_101]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[na:1.8.0_101]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_101]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
 ~[na:na]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
 ~[na:na]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
 ~[na:na]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
 ~[na:na]
at 
org.apache.nifi.controller.scheduling.StandardProcessScheduler$2.run(StandardProcessScheduler.java:213)
 ~[na:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_101]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[na:1.8.0_101]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_101]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [na:1.8.0_101]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_101]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
2017-01-28 12:32:01,280 ERROR [StandardProcessScheduler Thread-5] 
o.a.n.c.s.StandardProcessScheduler Failed to invoke the On-Scheduled Lifecycle 
methods of [MonitorMemory[id=e3e0dfa1-0159-1000-e04e-7a0312f0e017], 
java.lang.reflect.InvocationTargetException, 30 sec] due to {}; 
administratively yielding this ReportingTask and will attempt to schedule it 
again after {}
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[na:1.8.0_101]
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[na:1.8.0_101]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[na:1.8.0_101]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_101]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
 ~[na:na]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
 ~[na:na]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
 ~[na:na]
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
 ~[na:na]
at 
org.apache.nifi.controller.scheduling.StandardProcessScheduler$2.run(StandardProcessScheduler.java:213)
 ~[na:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_101]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[na:1.8.0_101]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_101]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [na:1.8.0_101]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_101]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.lang.UnsupportedOperationException: Usage threshold is not 
supported
at 
sun.management.MemoryPoolImpl.setUsageThreshold(MemoryPoolImpl.java:114) 
~[na:1.8.0_101]
at 
org.apache.nifi.controller.MonitorMemory.onConfigured(MonitorMemory.java:178) 
~[na:na]
... 16 common frames omitted



[jira] [Commented] (NIFI-3417) Allow users to delete process groups containing template definitions

2017-01-27 Thread Matt Gilman (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843831#comment-15843831
 ] 

Matt Gilman commented on NIFI-3417:
---

Yes, templates are associated with a Process Group to establish base 
permissions for when the template is created (just like creating other 
components). Check out [1] for the details.

[1] https://issues.apache.org/jira/browse/NIFI-2530 

> Allow users to delete process groups containing template definitions
> 
>
> Key: NIFI-3417
> URL: https://issues.apache.org/jira/browse/NIFI-3417
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.1.1
>Reporter: Andy LoPresto
>Priority: Minor
>  Labels: process-group, template
>
> As reported on the mailing list, a user encountered an issue trying to delete 
> what appeared to be an empty process group because it contained template 
> definitions. 
> I propose that the error dialog should allow for (2 - 4) options to resolve 
> this issue:
> # Delete this process group and any contained templates (if the process group 
> is empty, the template definitions are now irrelevant anyway)
> # Cancel (current option)
> Other possibilities:
> # Delete and move all template definitions to the parent process group (this 
> could cause broken connections/lost definitions)
> # Delete and export all template definitions to XML (again, not sure this 
> would be of any value as the components and connections no longer exist)
> This probably needs some discussion, especially if [~mcgilman] can weigh in. 
> [Link to mailing list 
> thread|https://lists.apache.org/thread.html/10ad2b010bfd4a24e67ca1ad606247a359ce5b47f7db95c03c98cd4d@%3Cusers.nifi.apache.org%3E]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3417) Allow users to delete process groups containing template definitions

2017-01-27 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843810#comment-15843810
 ] 

Joseph Witt commented on NIFI-3417:
---

I'd like to understand why templates are tied to a given process group again 
(I'm sure I'm forgetting). I assume it has to do with the permissions for those 
templates being associated with the permissions of the group in which they were 
created.

> Allow users to delete process groups containing template definitions
> 
>
> Key: NIFI-3417
> URL: https://issues.apache.org/jira/browse/NIFI-3417
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.1.1
>Reporter: Andy LoPresto
>Priority: Minor
>  Labels: process-group, template
>
> As reported on the mailing list, a user encountered an issue trying to delete 
> what appeared to be an empty process group because it contained template 
> definitions. 
> I propose that the error dialog should allow for (2 - 4) options to resolve 
> this issue:
> # Delete this process group and any contained templates (if the process group 
> is empty, the template definitions are now irrelevant anyway)
> # Cancel (current option)
> Other possibilities:
> # Delete and move all template definitions to the parent process group (this 
> could cause broken connections/lost definitions)
> # Delete and export all template definitions to XML (again, not sure this 
> would be of any value as the components and connections no longer exist)
> This probably needs some discussion, especially if [~mcgilman] can weigh in. 
> [Link to mailing list 
> thread|https://lists.apache.org/thread.html/10ad2b010bfd4a24e67ca1ad606247a359ce5b47f7db95c03c98cd4d@%3Cusers.nifi.apache.org%3E]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3417) Allow users to delete process groups containing template definitions

2017-01-27 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-3417:

Description: 
As reported on the mailing list, a user encountered an issue trying to delete 
what appeared to be an empty process group because it contained template 
definitions. 

I propose that the error dialog should allow for (2 - 4) options to resolve 
this issue:

# Delete this process group and any contained templates (if the process group 
is empty, the template definitions are now irrelevant anyway)
# Cancel (current option)

Other possibilities:

# Delete and move all template definitions to the parent process group (this 
could cause broken connections/lost definitions)
# Delete and export all template definitions to XML (again, not sure this would 
be of any value as the components and connections no longer exist)

This probably needs some discussion, especially if [~mcgilman] can weigh in. 

[Link to mailing list 
thread|https://lists.apache.org/thread.html/10ad2b010bfd4a24e67ca1ad606247a359ce5b47f7db95c03c98cd4d@%3Cusers.nifi.apache.org%3E]

  was:
As reported on the mailing list, a user encountered an issue trying to delete 
what appeared to be an empty process group because it contained template 
definitions. 

I propose that the error dialog should allow for (2 or 3) options to resolve 
this issue:

1. Delete this process group and any contained templates (if the process group 
is empty, the template definitions are now irrelevant anyway)
1. Cancel (current option)

Other possibilities:

1. Delete and move all template definitions to the parent process group (this 
could cause broken connections/lost definitions)
1. Delete and export all template definitions to XML (again, not sure this 
would be of any value as the components and connections no longer exist)

This probably needs some discussion, especially if [~mcgilman] can weigh in. 

[Link to mailing list 
thread|https://lists.apache.org/thread.html/10ad2b010bfd4a24e67ca1ad606247a359ce5b47f7db95c03c98cd4d@%3Cusers.nifi.apache.org%3E]


> Allow users to delete process groups containing template definitions
> 
>
> Key: NIFI-3417
> URL: https://issues.apache.org/jira/browse/NIFI-3417
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.1.1
>Reporter: Andy LoPresto
>Priority: Minor
>  Labels: process-group, template
>
> As reported on the mailing list, a user encountered an issue trying to delete 
> what appeared to be an empty process group because it contained template 
> definitions. 
> I propose that the error dialog should allow for (2 - 4) options to resolve 
> this issue:
> # Delete this process group and any contained templates (if the process group 
> is empty, the template definitions are now irrelevant anyway)
> # Cancel (current option)
> Other possibilities:
> # Delete and move all template definitions to the parent process group (this 
> could cause broken connections/lost definitions)
> # Delete and export all template definitions to XML (again, not sure this 
> would be of any value as the components and connections no longer exist)
> This probably needs some discussion, especially if [~mcgilman] can weigh in. 
> [Link to mailing list 
> thread|https://lists.apache.org/thread.html/10ad2b010bfd4a24e67ca1ad606247a359ce5b47f7db95c03c98cd4d@%3Cusers.nifi.apache.org%3E]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3417) Allow users to delete process groups containing template definitions

2017-01-27 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-3417:

Description: 
As reported on the mailing list, a user encountered an issue trying to delete 
what appeared to be an empty process group because it contained template 
definitions. 

I propose that the error dialog should allow for (2 or 3) options to resolve 
this issue:

1. Delete this process group and any contained templates (if the process group 
is empty, the template definitions are now irrelevant anyway)
1. Cancel (current option)

Other possibilities:

1. Delete and move all template definitions to the parent process group (this 
could cause broken connections/lost definitions)
1. Delete and export all template definitions to XML (again, not sure this 
would be of any value as the components and connections no longer exist)

This probably needs some discussion, especially if [~mcgilman] can weigh in. 

[Link to mailing list 
thread|https://lists.apache.org/thread.html/10ad2b010bfd4a24e67ca1ad606247a359ce5b47f7db95c03c98cd4d@%3Cusers.nifi.apache.org%3E]

  was:
As reported on the mailing list, a user encountered an issue trying to delete 
what appeared to be an empty process group because it contained template 
definitions. 

I propose that the error dialog should allow for (2 or 3) options to resolve 
this issue:

1. Delete this process group and any contained templates (if the process group 
is empty, the template definitions are now irrelevant anyway)
1. Cancel (current option)

Other possibilities:

1. Delete and move all template definitions to the parent process group (this 
could cause broken connections/lost definitions)
1. Delete and export all template definitions to XML (again, not sure this 
would be of any value as the components and connections no longer exist)

This probably needs some discussion, especially if [~mcgilman] can weigh in. 

I will add the link to the mailing list thread when PonyMail indexes it. 


> Allow users to delete process groups containing template definitions
> 
>
> Key: NIFI-3417
> URL: https://issues.apache.org/jira/browse/NIFI-3417
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.1.1
>Reporter: Andy LoPresto
>Priority: Minor
>  Labels: process-group, template
>
> As reported on the mailing list, a user encountered an issue trying to delete 
> what appeared to be an empty process group because it contained template 
> definitions. 
> I propose that the error dialog should allow for (2 or 3) options to resolve 
> this issue:
> 1. Delete this process group and any contained templates (if the process 
> group is empty, the template definitions are now irrelevant anyway)
> 1. Cancel (current option)
> Other possibilities:
> 1. Delete and move all template definitions to the parent process group (this 
> could cause broken connections/lost definitions)
> 1. Delete and export all template definitions to XML (again, not sure this 
> would be of any value as the components and connections no longer exist)
> This probably needs some discussion, especially if [~mcgilman] can weigh in. 
> [Link to mailing list 
> thread|https://lists.apache.org/thread.html/10ad2b010bfd4a24e67ca1ad606247a359ce5b47f7db95c03c98cd4d@%3Cusers.nifi.apache.org%3E]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3417) Allow users to delete process groups containing template definitions

2017-01-27 Thread Andy LoPresto (JIRA)
Andy LoPresto created NIFI-3417:
---

 Summary: Allow users to delete process groups containing template 
definitions
 Key: NIFI-3417
 URL: https://issues.apache.org/jira/browse/NIFI-3417
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.1.1
Reporter: Andy LoPresto
Priority: Minor


As reported on the mailing list, a user encountered an issue trying to delete 
what appeared to be an empty process group because it contained template 
definitions. 

I propose that the error dialog should allow for (2 or 3) options to resolve 
this issue:

1. Delete this process group and any contained templates (if the process group 
is empty, the template definitions are now irrelevant anyway)
1. Cancel (current option)

Other possibilities:

1. Delete and move all template definitions to the parent process group (this 
could cause broken connections/lost definitions)
1. Delete and export all template definitions to XML (again, not sure this 
would be of any value as the components and connections no longer exist)

This probably needs some discussion, especially if [~mcgilman] can weigh in. 

I will add the link to the mailing list thread when PonyMail indexes it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3416) PutSFtp Processor Support for Proxy Connections

2017-01-27 Thread Eric Ulicny (JIRA)
Eric Ulicny created NIFI-3416:
-

 Summary: PutSFtp Processor Support for Proxy Connections
 Key: NIFI-3416
 URL: https://issues.apache.org/jira/browse/NIFI-3416
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Eric Ulicny
Priority: Minor


Currently the PutSFTP processor does not support a Proxy URL / Proxy Port, 
which makes it impossible to connect from behind a proxy. This fix will allow 
connections to be proxied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3415) Add "Rollback on Failure" property to PutHiveStreaming, PutHiveQL, and PutSQL

2017-01-27 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-3415:
--

 Summary: Add "Rollback on Failure" property to PutHiveStreaming, 
PutHiveQL, and PutSQL
 Key: NIFI-3415
 URL: https://issues.apache.org/jira/browse/NIFI-3415
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Matt Burgess


Many Put processors (such as PutHiveStreaming, PutHiveQL, and PutSQL) offer 
"failure" and "retry" relationships for flow files that cannot be processed, 
perhaps due to issues with the external system or other errors.

However, there are use cases where, if a Put fails, no other flow files should 
be processed until the issue(s) have been resolved. This should be configurable 
for these processors, to enable both the current behavior and a "stop on 
failure" type of behavior.

I propose a property be added to the Put processors (at a minimum the 
PutHiveStreaming, PutHiveQL, and PutSQL processors) called "Rollback on 
Failure", which offers true or false values.  If set to true, then the 
"failure" and "retry" relationships should be removed from the processor 
instance, and if set to false, those relationships should be offered.

If Rollback on Failure is false, then the processor should continue to behave 
as it has. If set to true, then if any error occurs while processing a flow 
file, the session should be rolled back rather than transferring the flow file 
to some error-handling relationship.

It may also be the case that if Rollback on Failure is true, then the incoming 
connection must use a FIFO Prioritizer, but I'm not positive. The documentation 
should be updated to include any such requirements.
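
As a rough sketch of the proposed error path (illustrative only; the property and 
relationship names are hypothetical, and only the standard ProcessContext/ProcessSession 
API is assumed):

{code}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;

public class RollbackOnFailureSketch {

    // Hypothetical property; the real descriptor would live in each processor class.
    static final PropertyDescriptor ROLLBACK_ON_FAILURE = new PropertyDescriptor.Builder()
            .name("rollback-on-failure")
            .displayName("Rollback on Failure")
            .allowableValues("true", "false")
            .defaultValue("false")
            .required(true)
            .build();

    static final Relationship REL_FAILURE = new Relationship.Builder()
            .name("failure")
            .build();

    // Error handling inside onTrigger: either roll the session back so nothing
    // moves past the failed flow file, or route to 'failure' as happens today.
    void handleFailure(ProcessContext context, ProcessSession session, FlowFile flowFile) {
        if (context.getProperty(ROLLBACK_ON_FAILURE).asBoolean()) {
            session.rollback(true); // penalize; the flow file stays on the incoming queue
        } else {
            session.transfer(session.penalize(flowFile), REL_FAILURE);
        }
    }
}
{code}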



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3414) Implement an EnforceOrder processor

2017-01-27 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-3414:
--

 Summary: Implement an EnforceOrder processor
 Key: NIFI-3414
 URL: https://issues.apache.org/jira/browse/NIFI-3414
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Matt Burgess


For some flows, it is imperative that the flow files are processed in a certain 
order.  The PriorityAttributePrioritizer can be used on a connection to ensure 
that flow files going through that connection are in priority order, but 
depending on error-handling, branching, and other flow designs, it is possible 
for flow files to get out-of-order.

I propose an EnforceOrder processor, which would be single-threaded and have 
(at a minimum) the following properties:

1) Order Attribute: This would be the name of a flow file attribute from which 
the current value will be retrieved.
2) Initial Value: This property specifies an initial value for the order. The 
processor is stateful, however, so this property is only used when there is no 
entry in the state map for the current value.

The processor would store the Initial Value into the state map (if no state map 
entry exists), then for each incoming flow file, it checks the value in the 
Order Attribute against the current value.  If the attribute value matches the 
current value, the flow file is transferred to the "success" relationship, and 
the current value is incremented in the state map. If the attribute value does 
not match the current value, the session will be rolled back.

Using this processor, along with a PriorityAttributePrioritizer on the incoming 
connection, will allow for out-of-order flow files to have a sort of "barrier", 
thereby guaranteeing that flow files transferred to the "success" relationship 
are in the specified order.
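
The comparison described above can be sketched in plain Java, using an in-memory map 
as a stand-in for the processor's state map (the real processor would use NiFi's 
StateManager so the current value survives restarts):

{code}
import java.util.HashMap;
import java.util.Map;

public class EnforceOrderSketch {
    private final Map<String, String> state = new HashMap<>(); // stand-in for the state map

    /** Returns true if the flow file's Order Attribute value matches the current value. */
    boolean accept(String orderAttributeValue, long initialValue) {
        long current = Long.parseLong(state.getOrDefault("current", Long.toString(initialValue)));
        if (Long.parseLong(orderAttributeValue) == current) {
            state.put("current", Long.toString(current + 1)); // transfer to 'success' and advance
            return true;
        }
        return false; // out of order: the session would be rolled back and the flow file retried
    }

    public static void main(String[] args) {
        EnforceOrderSketch sketch = new EnforceOrderSketch();
        System.out.println(sketch.accept("1", 1)); // true  -> success, current value becomes 2
        System.out.println(sketch.accept("3", 1)); // false -> rollback, still waiting for 2
        System.out.println(sketch.accept("2", 1)); // true  -> success, current value becomes 3
    }
}
{code}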



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3413) Implement a GetChangeDataCapture processor

2017-01-27 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-3413:
--

 Summary: Implement a GetChangeDataCapture processor
 Key: NIFI-3413
 URL: https://issues.apache.org/jira/browse/NIFI-3413
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Reporter: Matt Burgess


Database systems such as MySQL, Oracle, and SQL Server allow access to their 
transactional logs and such, in order for external clients to have a "change 
data capture" (CDC) capability. I propose a GetChangeDataCapture processor to 
enable this in NiFi.

The processor would be configured with a DBCPConnectionPool controller service, 
as well as a Database Type property (similar to the one in QueryDatabaseTable) 
for database-specific handling. Additional properties might include the CDC 
table name, etc.  Additional database-specific properties could be handled 
using dynamic properties (and the documentation should reflect this).

The processor would accept no incoming connections (it is a "Get" or source 
processor), would be intended to run on the primary node only as a 
single-threaded processor, and would generate a flow file for each operation 
(INSERT, UPDATE, DELETE, etc.) in one or more formats (e.g., JSON). The flow 
files would be transferred in time order (to enable a replication solution, for 
example), perhaps with an auto-incrementing attribute to also indicate order 
if need be.
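
A sketch of how the configuration surface described above might be declared (class and 
property names are hypothetical; only the standard NiFi annotations, PropertyDescriptor 
builder, and DBCPService interface are assumed):

{code}
import org.apache.nifi.annotation.behavior.InputRequirement;
import org.apache.nifi.annotation.behavior.TriggerSerially;
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.dbcp.DBCPService;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;

@TriggerSerially // single-threaded, as described above
@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN) // source processor, no incoming connections
public class GetChangeDataCaptureSketch extends AbstractProcessor {

    static final PropertyDescriptor DBCP_SERVICE = new PropertyDescriptor.Builder()
            .name("database-connection-pooling-service")
            .displayName("Database Connection Pooling Service")
            .identifiesControllerService(DBCPService.class)
            .required(true)
            .build();

    static final PropertyDescriptor DATABASE_TYPE = new PropertyDescriptor.Builder()
            .name("database-type")
            .displayName("Database Type")
            .allowableValues("Generic", "MySQL", "Oracle", "MS SQL Server") // illustrative list
            .defaultValue("Generic")
            .required(true)
            .build();

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        // Would read change events via the pooled connection and emit one flow file
        // per INSERT/UPDATE/DELETE in time order, e.g. serialized as JSON.
    }
}
{code}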




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3015) NiFi service starts from root user after installation

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843383#comment-15843383
 ] 

ASF GitHub Bot commented on NIFI-3015:
--

Github user jfrazee commented on the issue:

https://github.com/apache/nifi/pull/1422
  
LGTM +1

Verified builds on Amazon Linux w/ and w/o -Prpm and checked rpm vs. tar.gz 
and .zip installs to verify correct bootstrap.conf.


> NiFi service starts from root user after installation
> -
>
> Key: NIFI-3015
> URL: https://issues.apache.org/jira/browse/NIFI-3015
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.1.0
> Environment: Centos 7.2
>Reporter: Artem Yermakov
>Assignee: Andre
>Priority: Critical
>
> When NiFi is installed using the command nifi.sh install and then started 
> with the command service nifi start, NiFi starts as the root user.
> I suggest running it as the nifi user, which is created during the RPM installation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #1422: NIFI-3015 - Corrects issue where RPM profile run.as propert...

2017-01-27 Thread jfrazee
Github user jfrazee commented on the issue:

https://github.com/apache/nifi/pull/1422
  
LGTM +1

Verified builds on Amazon Linux w/ and w/o -Prpm and checked rpm vs. tar.gz 
and .zip installs to verify correct bootstrap.conf.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (NIFI-3412) GetTCP testSuccessInteraction failing for multi-threaded builds and bare .m2 repo

2017-01-27 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky reassigned NIFI-3412:
--

Assignee: Oleg Zhurakousky

> GetTCP testSuccessInteraction failing for multi-threaded builds and bare .m2 
> repo
> -
>
> Key: NIFI-3412
> URL: https://issues.apache.org/jira/browse/NIFI-3412
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joey Frazee
>Assignee: Oleg Zhurakousky
>
> GetTCP tests are failing for multi-threaded builds (e.g., mvn -T 2.0C clean 
> install ...) when .m2 is completely empty and the build is being run for the 
> first time.
> {code}
> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 
> 2015-11-10T16:41:47+00:00)
> Maven home: /usr/local/maven
> Java version: 1.8.0_121, vendor: Oracle Corporation
> Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.121-0.b13.29.amzn1.x86_64/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "4.4.41-36.55.amzn1.x86_64", arch: "amd64", 
> family: "unix"
> {code}
> {code}
> ---
> Test set: org.apache.nifi.processors.gettcp.TestGetTCP
> ---
> Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.775 sec <<< 
> FAILURE! - in org.apache.nifi.processors.gettcp.TestGetTCP
> testSuccessInteraction(org.apache.nifi.processors.gettcp.TestGetTCP)  Time 
> elapsed: 0.712 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.nifi.util.StandardProcessorTestRunner.assertTransferCount(StandardProcessorTestRunner.java:319)
>   at 
> org.apache.nifi.util.StandardProcessorTestRunner.assertAllFlowFilesTransferred(StandardProcessorTestRunner.java:314)
>   at 
> org.apache.nifi.processors.gettcp.TestGetTCP.testSuccessInteraction(TestGetTCP.java:82)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3223) Allow PublishAMQP to use NiFi expression language

2017-01-27 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky updated NIFI-3223:
---
Fix Version/s: 1.2.0

> Allow PublishAMQP to use NiFi expression language
> -
>
> Key: NIFI-3223
> URL: https://issues.apache.org/jira/browse/NIFI-3223
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Extensions
>Affects Versions: 0.7.0, 1.1.0, 0.7.1
>Reporter: Brian
>Assignee: Oleg Zhurakousky
> Fix For: 1.2.0
>
>
> Enable the use of NiFi expression language for the PublishAMQP processor's 
> Routing Key value, to allow it to be better used within NiFi workflows.
> PublishAMQP fields to enable:
> "Routing Key"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3223) Allow PublishAMQP to use NiFi expression language

2017-01-27 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky updated NIFI-3223:
---
Status: Patch Available  (was: Open)

> Allow PublishAMQP to use NiFi expression language
> -
>
> Key: NIFI-3223
> URL: https://issues.apache.org/jira/browse/NIFI-3223
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Extensions
>Affects Versions: 0.7.1, 1.1.0, 0.7.0
>Reporter: Brian
>Assignee: Oleg Zhurakousky
>
> Enable the use of NiFi expression language for the PublishAMQP processor's 
> Routing Key value, to allow it to be better used within NiFi workflows.
> PublishAMQP fields to enable:
> "Routing Key"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3223) Allow PublishAMQP to use NiFi expression language

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843308#comment-15843308
 ] 

ASF GitHub Bot commented on NIFI-3223:
--

GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/1449

NIFI-3223 added support for expression language

- EXCHANGE
- ROUTING_KEY

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-3223

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1449.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1449


commit 0b2d1417a948804cebe121acde19bc5f2184da44
Author: Oleg Zhurakousky 
Date:   2017-01-27T18:41:54Z

NIFI-3223 added support for expression language
- EXCHANGE
- ROUTING_KEY
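
For illustration, enabling expression language on a property and evaluating it against 
each flow file generally looks like the following in a NiFi processor (a sketch with 
illustrative names, not the actual patch):

{code}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.util.StandardValidators;

public class RoutingKeyExpressionSketch {

    // Property declared with expression-language support enabled.
    static final PropertyDescriptor ROUTING_KEY = new PropertyDescriptor.Builder()
            .name("routing-key")
            .displayName("Routing Key")
            .required(true)
            .expressionLanguageSupported(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();

    // Evaluated against the attributes of the flow file being published.
    static String resolveRoutingKey(ProcessContext context, FlowFile flowFile) {
        return context.getProperty(ROUTING_KEY)
                .evaluateAttributeExpressions(flowFile)
                .getValue();
    }
}
{code}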




> Allow PublishAMQP to use NiFi expression language
> -
>
> Key: NIFI-3223
> URL: https://issues.apache.org/jira/browse/NIFI-3223
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Extensions
>Affects Versions: 0.7.0, 1.1.0, 0.7.1
>Reporter: Brian
>Assignee: Oleg Zhurakousky
> Fix For: 1.2.0
>
>
> Enable the use of NiFi expression language for the PublishAMQP processor's 
> Routing Key value, to allow it to be better used within NiFi workflows.
> PublishAMQP fields to enable:
> "Routing Key"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1449: NIFI-3223 added support for expression language

2017-01-27 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/1449

NIFI-3223 added support for expression language

- EXCHANGE
- ROUTING_KEY

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-3223

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1449.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1449


commit 0b2d1417a948804cebe121acde19bc5f2184da44
Author: Oleg Zhurakousky 
Date:   2017-01-27T18:41:54Z

NIFI-3223 added support for expression language
- EXCHANGE
- ROUTING_KEY




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-3412) GetTCP testSuccessInteraction failing for multi-threaded builds and bare .m2 repo

2017-01-27 Thread Joey Frazee (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Frazee updated NIFI-3412:
--
Description: 
GetTCP tests are failing for multi-threaded builds (e.g., mvn -T 2.0C clean 
install ...) when .m2 is completely empty and the build is being run for the 
first time.

{code}
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 
2015-11-10T16:41:47+00:00)
Maven home: /usr/local/maven
Java version: 1.8.0_121, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.121-0.b13.29.amzn1.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.4.41-36.55.amzn1.x86_64", arch: "amd64", family: 
"unix"
{code}

{code}
---
Test set: org.apache.nifi.processors.gettcp.TestGetTCP
---
Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.775 sec <<< 
FAILURE! - in org.apache.nifi.processors.gettcp.TestGetTCP
testSuccessInteraction(org.apache.nifi.processors.gettcp.TestGetTCP)  Time 
elapsed: 0.712 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.nifi.util.StandardProcessorTestRunner.assertTransferCount(StandardProcessorTestRunner.java:319)
at 
org.apache.nifi.util.StandardProcessorTestRunner.assertAllFlowFilesTransferred(StandardProcessorTestRunner.java:314)
at 
org.apache.nifi.processors.gettcp.TestGetTCP.testSuccessInteraction(TestGetTCP.java:82)
{code}

  was:
GetTCP tests are failing for multi-threaded builds (e.g., mvn -T 2.0C clean 
install ...) when .m2 is completely empty and the build is being run for the 
first time.

{code}
---
Test set: org.apache.nifi.processors.gettcp.TestGetTCP
---
Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.775 sec <<< 
FAILURE! - in org.apache.nifi.processors.gettcp.TestGetTCP
testSuccessInteraction(org.apache.nifi.processors.gettcp.TestGetTCP)  Time 
elapsed: 0.712 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.nifi.util.StandardProcessorTestRunner.assertTransferCount(StandardProcessorTestRunner.java:319)
at 
org.apache.nifi.util.StandardProcessorTestRunner.assertAllFlowFilesTransferred(StandardProcessorTestRunner.java:314)
at 
org.apache.nifi.processors.gettcp.TestGetTCP.testSuccessInteraction(TestGetTCP.java:82)
{code}


> GetTCP testSuccessInteraction failing for multi-threaded builds and bare .m2 
> repo
> -
>
> Key: NIFI-3412
> URL: https://issues.apache.org/jira/browse/NIFI-3412
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joey Frazee
>
> GetTCP tests are failing for multi-threaded builds (e.g., mvn -T 2.0C clean 
> install ...) when .m2 is completely empty and the build is being run for the 
> first time.
> {code}
> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 
> 2015-11-10T16:41:47+00:00)
> Maven home: /usr/local/maven
> Java version: 1.8.0_121, vendor: Oracle Corporation
> Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.121-0.b13.29.amzn1.x86_64/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "4.4.41-36.55.amzn1.x86_64", arch: "amd64", 
> family: "unix"
> {code}
> {code}
> ---
> Test set: org.apache.nifi.processors.gettcp.TestGetTCP
> ---
> Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.775 sec <<< 
> FAILURE! - in org.apache.nifi.processors.gettcp.TestGetTCP
> testSuccessInteraction(org.apache.nifi.processors.gettcp.TestGetTCP)  Time 
> elapsed: 0.712 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)

[jira] [Created] (NIFI-3412) GetTCP testSuccessInteraction failing for multi-threaded builds and bare .m2 repo

2017-01-27 Thread Joey Frazee (JIRA)
Joey Frazee created NIFI-3412:
-

 Summary: GetTCP testSuccessInteraction failing for multi-threaded 
builds and bare .m2 repo
 Key: NIFI-3412
 URL: https://issues.apache.org/jira/browse/NIFI-3412
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Joey Frazee


GetTCP tests are failing for multi-threaded builds (e.g., mvn -T 2.0C clean 
install ...) when .m2 is completely empty and the build is being run for the 
first time.

{code}
---
Test set: org.apache.nifi.processors.gettcp.TestGetTCP
---
Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.775 sec <<< 
FAILURE! - in org.apache.nifi.processors.gettcp.TestGetTCP
testSuccessInteraction(org.apache.nifi.processors.gettcp.TestGetTCP)  Time 
elapsed: 0.712 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.nifi.util.StandardProcessorTestRunner.assertTransferCount(StandardProcessorTestRunner.java:319)
at 
org.apache.nifi.util.StandardProcessorTestRunner.assertAllFlowFilesTransferred(StandardProcessorTestRunner.java:314)
at 
org.apache.nifi.processors.gettcp.TestGetTCP.testSuccessInteraction(TestGetTCP.java:82)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #1436: NIFI-3354 Added support for simple AVRO/CSV/JSON transform...

2017-01-27 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/1436
  
@joewitt copyrights are fixed. Please see the latest commit.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3354) Create CSV To Avro transformer

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843194#comment-15843194
 ] 

ASF GitHub Bot commented on NIFI-3354:
--

Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/1436
  
@joewitt copyrights are fixed. Please see the latest commit.


> Create CSV To Avro transformer
> --
>
> Key: NIFI-3354
> URL: https://issues.apache.org/jira/browse/NIFI-3354
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Oleg Zhurakousky
>Assignee: Oleg Zhurakousky
>
> While we currently have a CSV-to-Avro transformer, it requires HDFS/Kite 
> dependencies that could easily be eliminated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3363) PutKafka throws NullPointerException when User-Defined partition strategy is used

2017-01-27 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky updated NIFI-3363:
---
Fix Version/s: 1.2.0
   0.8.0

> PutKafka throws NullPointerException when User-Defined partition strategy is 
> used
> -
>
> Key: NIFI-3363
> URL: https://issues.apache.org/jira/browse/NIFI-3363
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 0.4.0, 0.5.0, 0.6.0, 0.4.1, 0.5.1, 0.7.0, 0.6.1, 
> 1.1.0, 0.7.1, 1.1.1, 1.0.1
> Environment: By looking at the release tags contained in a commit 
> that added User-Defined partition strategy (NIFI-1097 22de23b), it seems all 
> NiFi versions since 0.4.0 is affected.
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
> Fix For: 0.8.0, 1.2.0
>
>
> NullPointerException is thrown because PutKafka tries to put null into 
> properties, since the following if statements don't cover 
> USER_DEFINED_PARTITIONING.
> {code: title=PutKafka.java buildKafkaConfigProperties}
> String partitionStrategy = context.getProperty(PARTITION_STRATEGY).getValue();
> String partitionerClass = null;
> if (partitionStrategy.equalsIgnoreCase(ROUND_ROBIN_PARTITIONING.getValue())) {
> partitionerClass = Partitioners.RoundRobinPartitioner.class.getName();
> } else if 
> (partitionStrategy.equalsIgnoreCase(RANDOM_PARTITIONING.getValue())) {
> partitionerClass = Partitioners.RandomPartitioner.class.getName();
> }
> properties.setProperty("partitioner.class", partitionerClass); // Happens here
> {code}
> A naive fix for this would be adding one more if statement so that it sets 
> the 'partitioner.class' property only if partitionerClass is set.
> However, while I was testing the fix, I found the following facts, which 
> revealed that this approach wouldn't be the right solution for this issue.
> In short, we don't have to set the 'partitioner.class' property with the 
> Kafka 0.8.x client in the first place. I assume it's there because PutKafka 
> has come through a long history.
> h2. PutKafka history analysis
> - PutKafka used to cover Kafka 0.8 and 0.9
> - Around the time Kafka 0.9 was released, PutKafka added 'partitioner.class' 
> via NIFI-1097: start using new API. There were two client libraries, 
> kafka-clients and kafka_2.9.1, both 0.8.2.2.
> - Then PublishKafka was added for Kafka 0.9. At this point we could have 
> added a 'partition' property to PublishKafka, but we didn't do that for some 
> reason. PublishKafka doesn't support a user-defined partition as of this 
> writing (NIFI-1296).
> - The code adding 'partitioner.class' has been left in PutKafka.
> - Further, we separated the nar into 0.8, 0.9 and 0.10.
> - Now only PutKafka (0.8) uses the 'partitioner.class' property, but the 0.8 
> client doesn't use that property, so we don't need that code at all.
> h2. Then, how should we fix this?
> Since PutKafka in both the master and 0.x branches specifically uses the 
> Kafka 0.8.x client, we can simply remove the code adding 'partitioner.class', 
> and probably the PARTITION_STRATEGY processor property, too.
> h2. Expected result after fix
> - Users can specify the Kafka partition with PutKafka's 'partition' property 
> without needing to specify a 'partition strategy', and no NullPointerException 
> will be thrown
> - A warning that used to be logged in nifi-app.log won't be logged any 
> more: {code}2017-01-18 13:53:33,071 WARN [Timer-Driven Process Thread-9] 
> o.a.k.clients.producer.ProducerConfig The configuration partitioner.class = 
> null was supplied but isn't a known config.{code}
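
To make the NullPointerException and the "naive fix" described above concrete: 
java.util.Properties rejects null values, so the guard simply skips the property when no 
partitioner class was resolved. A standalone sketch (not the merged fix, which instead 
deprecated the Partition Strategy property, per the commit message further down):

{code}
import java.util.Properties;

public class PartitionerConfigSketch {
    public static void main(String[] args) {
        Properties properties = new Properties();
        // With USER_DEFINED_PARTITIONING neither branch assigns a class, so this stays null.
        String partitionerClass = null;

        // The current code calls setProperty unconditionally; Properties is a Hashtable
        // and forbids null values, which is where the NullPointerException comes from.
        // properties.setProperty("partitioner.class", partitionerClass);

        // The naive fix: only set the property when a partitioner class was resolved.
        if (partitionerClass != null) {
            properties.setProperty("partitioner.class", partitionerClass);
        }
        System.out.println("partitioner.class set: " + properties.containsKey("partitioner.class"));
    }
}
{code}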



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3363) PutKafka throws NullPointerException when User-Defined partition strategy is used

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843179#comment-15843179
 ] 

ASF GitHub Bot commented on NIFI-3363:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1425


> PutKafka throws NullPointerException when User-Defined partition strategy is 
> used
> -
>
> Key: NIFI-3363
> URL: https://issues.apache.org/jira/browse/NIFI-3363
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 0.4.0, 0.5.0, 0.6.0, 0.4.1, 0.5.1, 0.7.0, 0.6.1, 
> 1.1.0, 0.7.1, 1.1.1, 1.0.1
> Environment: By looking at the release tags contained in a commit 
> that added User-Defined partition strategy (NIFI-1097 22de23b), it seems all 
> NiFi versions since 0.4.0 is affected.
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
> Fix For: 0.8.0, 1.2.0
>
>
> NullPointerException is thrown because PutKafka tries to put null into 
> properties, since the following if statements don't cover 
> USER_DEFINED_PARTITIONING.
> {code: title=PutKafka.java buildKafkaConfigProperties}
> String partitionStrategy = context.getProperty(PARTITION_STRATEGY).getValue();
> String partitionerClass = null;
> if (partitionStrategy.equalsIgnoreCase(ROUND_ROBIN_PARTITIONING.getValue())) {
> partitionerClass = Partitioners.RoundRobinPartitioner.class.getName();
> } else if 
> (partitionStrategy.equalsIgnoreCase(RANDOM_PARTITIONING.getValue())) {
> partitionerClass = Partitioners.RandomPartitioner.class.getName();
> }
> properties.setProperty("partitioner.class", partitionerClass); // Happens here
> {code}
> A naive fix for this would be adding one more if statement so that it sets 
> the 'partitioner.class' property only if partitionerClass is set.
> However, while I was testing the fix, I found the following facts, which 
> revealed that this approach wouldn't be the right solution for this issue.
> In short, we don't have to set the 'partitioner.class' property with the 
> Kafka 0.8.x client in the first place. I assume it's there because PutKafka 
> has come through a long history.
> h2. PutKafka history analysis
> - PutKafka used to cover Kafka 0.8 and 0.9
> - Around the time Kafka 0.9 was released, PutKafka added 'partitioner.class' 
> via NIFI-1097: start using new API. There were two client libraries, 
> kafka-clients and kafka_2.9.1, both 0.8.2.2.
> - Then PublishKafka was added for Kafka 0.9. At this point we could have 
> added a 'partition' property to PublishKafka, but we didn't do that for some 
> reason. PublishKafka doesn't support a user-defined partition as of this 
> writing (NIFI-1296).
> - The code adding 'partitioner.class' has been left in PutKafka.
> - Further, we separated the nar into 0.8, 0.9 and 0.10.
> - Now only PutKafka (0.8) uses the 'partitioner.class' property, but the 0.8 
> client doesn't use that property, so we don't need that code at all.
> h2. Then, how should we fix this?
> Since PutKafka in both the master and 0.x branches specifically uses the 
> Kafka 0.8.x client, we can simply remove the code adding 'partitioner.class', 
> and probably the PARTITION_STRATEGY processor property, too.
> h2. Expected result after fix
> - Users can specify the Kafka partition with PutKafka's 'partition' property 
> without needing to specify a 'partition strategy', and no NullPointerException 
> will be thrown
> - A warning that used to be logged in nifi-app.log won't be logged any 
> more: {code}2017-01-18 13:53:33,071 WARN [Timer-Driven Process Thread-9] 
> o.a.k.clients.producer.ProducerConfig The configuration partitioner.class = 
> null was supplied but isn't a known config.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3363) PutKafka throws NullPointerException when User-Defined partition strategy is used

2017-01-27 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky updated NIFI-3363:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> PutKafka throws NullPointerException when User-Defined partition strategy is 
> used
> -
>
> Key: NIFI-3363
> URL: https://issues.apache.org/jira/browse/NIFI-3363
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 0.4.0, 0.5.0, 0.6.0, 0.4.1, 0.5.1, 0.7.0, 0.6.1, 
> 1.1.0, 0.7.1, 1.1.1, 1.0.1
> Environment: By looking at the release tags contained in a commit 
> that added User-Defined partition strategy (NIFI-1097 22de23b), it seems all 
> NiFi versions since 0.4.0 is affected.
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
> Fix For: 0.8.0, 1.2.0
>
>
> NullPointerException is thrown because PutKafka tries to put null into 
> properties, since the following if statements don't cover 
> USER_DEFINED_PARTITIONING.
> {code: title=PutKafka.java buildKafkaConfigProperties}
> String partitionStrategy = context.getProperty(PARTITION_STRATEGY).getValue();
> String partitionerClass = null;
> if (partitionStrategy.equalsIgnoreCase(ROUND_ROBIN_PARTITIONING.getValue())) {
> partitionerClass = Partitioners.RoundRobinPartitioner.class.getName();
> } else if 
> (partitionStrategy.equalsIgnoreCase(RANDOM_PARTITIONING.getValue())) {
> partitionerClass = Partitioners.RandomPartitioner.class.getName();
> }
> properties.setProperty("partitioner.class", partitionerClass); // Happens here
> {code}
> A naive fix for this would be adding one more if statement so that it sets 
> the 'partitioner.class' property only if partitionerClass is set.
> However, while I was testing the fix, I found the following facts, which 
> revealed that this approach wouldn't be the right solution for this issue.
> In short, we don't have to set the 'partitioner.class' property with the 
> Kafka 0.8.x client in the first place. I assume it's there because PutKafka 
> has come through a long history.
> h2. PutKafka history analysis
> - PutKafka used to cover Kafka 0.8 and 0.9
> - Around the time Kafka 0.9 was released, PutKafka added 'partitioner.class' 
> via NIFI-1097: start using new API. There were two client libraries, 
> kafka-clients and kafka_2.9.1, both 0.8.2.2.
> - Then PublishKafka was added for Kafka 0.9. At this point we could have 
> added a 'partition' property to PublishKafka, but we didn't do that for some 
> reason. PublishKafka doesn't support a user-defined partition as of this 
> writing (NIFI-1296).
> - The code adding 'partitioner.class' has been left in PutKafka.
> - Further, we separated the nar into 0.8, 0.9 and 0.10.
> - Now only PutKafka (0.8) uses the 'partitioner.class' property, but the 0.8 
> client doesn't use that property, so we don't need that code at all.
> h2. Then, how should we fix this?
> Since PutKafka in both master and 0.x branch specifically uses Kafka 0.8.x 
> client. We can simply remove the codes adding 'partitioner.class', probably 
> PARTITION_STRATEGY processor property, too.
> h2. Expected result after fix
> - Users can specify Kafka partition with PutKafka 'partition' property, but 
> no need to specify 'partition strategy', NullPointerException won't be thrown
> - A warning log, that used to be logged in nifi-app.log won't be logged any 
> more: {code}2017-01-18 13:53:33,071 WARN [Timer-Driven Process Thread-9] 
> o.a.k.clients.producer.ProducerConfig The configuration partitioner.class = 
> null was supplied but isn't a known config.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1425: NIFI-3363: PutKafka NPE with User-Defined partition

2017-01-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1425


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3363) PutKafka throws NullPointerException when User-Defined partition strategy is used

2017-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843178#comment-15843178
 ] 

ASF subversion and git services commented on NIFI-3363:
---

Commit 63c763885c36ab06111edf2c9d7743563ea57fcb in nifi's branch 
refs/heads/master from [~ijokarumawak]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=63c7638 ]

NIFI-3363: PutKafka NPE with User-Defined partition

- Marked the PutKafka Partition Strategy property as deprecated; since the Kafka 
0.8 client doesn't use 'partitioner.class' as a producer property, we don't have 
to specify it.
- Changed the Partition Strategy property from a required property to a dynamic 
property, so that existing processor configs stay in a valid state.
- Fixed the partition property to work.
- Route a flow file to failure if it fails to be published due to an invalid partition.

This closes #1425
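
As a rough companion to the last two bullets (not the PutKafka implementation
itself): the 0.8.2 kafka-clients producer accepts an explicit partition on each
ProducerRecord, and a bad partition surfaces as an exception the caller can
treat as a failure. The broker address, topic, partition value, and error
handling below are assumptions for illustration.

{code:java}
import java.util.Properties;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Hedged sketch: publish to an explicit partition with the 0.8.2 "new" producer
// API and treat failures (including an invalid partition) as routable errors.
// Broker, topic and partition values are illustrative assumptions.
public final class ExplicitPartitionPublishSketch {

    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        final KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
        try {
            final int partition = 1; // e.g. resolved from the processor's 'partition' property
            final ProducerRecord<byte[], byte[]> record =
                    new ProducerRecord<>("my-topic", partition, null, "payload".getBytes("UTF-8"));
            final Future<RecordMetadata> ack = producer.send(record);
            ack.get(); // surfaces broker-side errors for this record
        } catch (final Exception e) {
            // In a processor, this is where the flow file would be routed to failure
            // rather than retried, since an invalid partition will not fix itself.
            System.err.println("Publish failed (possibly invalid partition): " + e);
        } finally {
            producer.close();
        }
    }
}
{code}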


> PutKafka throws NullPointerException when User-Defined partition strategy is 
> used
> -
>
> Key: NIFI-3363
> URL: https://issues.apache.org/jira/browse/NIFI-3363
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 0.4.0, 0.5.0, 0.6.0, 0.4.1, 0.5.1, 0.7.0, 0.6.1, 
> 1.1.0, 0.7.1, 1.1.1, 1.0.1
> Environment: By looking at the release tags contained in a commit 
> that added the User-Defined partition strategy (NIFI-1097 22de23b), it seems all 
> NiFi versions since 0.4.0 are affected.
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> NullPointerException is thrown because PutKafka tries to put null into 
> properties, since the following if statements don't cover 
> USER_DEFINED_PARTITIONING.
> {code:title=PutKafka.java buildKafkaConfigProperties}
> String partitionStrategy = context.getProperty(PARTITION_STRATEGY).getValue();
> String partitionerClass = null;
> if (partitionStrategy.equalsIgnoreCase(ROUND_ROBIN_PARTITIONING.getValue())) {
>     partitionerClass = Partitioners.RoundRobinPartitioner.class.getName();
> } else if (partitionStrategy.equalsIgnoreCase(RANDOM_PARTITIONING.getValue())) {
>     partitionerClass = Partitioners.RandomPartitioner.class.getName();
> }
> properties.setProperty("partitioner.class", partitionerClass); // Happens here
> {code}
> A naive fix would be adding one more if statement so that the 
> 'partitioner.class' property is only set when partitionerClass is not null.
> However, while testing that fix I found the following facts, which revealed 
> that this approach wouldn't be the right solution for this issue.
> In short, we don't have to set the 'partitioner.class' property with the 
> Kafka 0.8.x client in the first place. I assume it's only there because 
> PutKafka has a long history.
> h2. PutKafka history analysis
> - PutKafka used to cover both Kafka 0.8 and 0.9.
> - Around the time Kafka 0.9 was released, PutKafka added 'partitioner.class' 
> via NIFI-1097 (start using the new API). There were two client libraries, 
> kafka-clients and kafka_2.9.1, both at 0.8.2.2.
> - Then PublishKafka was added for Kafka 0.9 (NIFI-1296). At that point we could 
> have added a 'partition' property to PublishKafka, but we didn't for some 
> reason; PublishKafka doesn't support user-defined partitions as of this writing.
> - The code adding 'partitioner.class' was left in PutKafka.
> - Further, we separated the NARs into 0.8, 0.9 and 0.10.
> - Now only PutKafka (0.8) sets the 'partitioner.class' property, but the 0.8 
> client doesn't use it, so we don't need that code at all.
> h2. Then, how should we fix this?
> Since PutKafka on both the master and 0.x branches specifically uses the Kafka 
> 0.8.x client, we can simply remove the code adding 'partitioner.class', and 
> probably the PARTITION_STRATEGY processor property, too.
> h2. Expected result after fix
> - Users can specify the Kafka partition with the PutKafka 'partition' property 
> without having to specify a 'partition strategy', and no NullPointerException 
> is thrown.
> - The following warning is no longer logged to nifi-app.log:
> {code}2017-01-18 13:53:33,071 WARN [Timer-Driven Process Thread-9] 
> o.a.k.clients.producer.ProducerConfig The configuration partitioner.class = 
> null was supplied but isn't a known config.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3363) PutKafka throws NullPointerException when User-Defined partition strategy is used

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843176#comment-15843176
 ] 

ASF GitHub Bot commented on NIFI-3363:
--

Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/1426
  
@ijokarumawak please close this PR as it was merged to 0.x branch


> PutKafka throws NullPointerException when User-Defined partition strategy is 
> used
> -
>
> Key: NIFI-3363
> URL: https://issues.apache.org/jira/browse/NIFI-3363
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 0.4.0, 0.5.0, 0.6.0, 0.4.1, 0.5.1, 0.7.0, 0.6.1, 
> 1.1.0, 0.7.1, 1.1.1, 1.0.1
> Environment: By looking at the release tags contained in a commit 
> that added the User-Defined partition strategy (NIFI-1097 22de23b), it seems all 
> NiFi versions since 0.4.0 are affected.
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> NullPointerException is thrown because PutKafka tries to put null into 
> properties, since the following if statements don't cover 
> USER_DEFINED_PARTITIONING.
> {code:title=PutKafka.java buildKafkaConfigProperties}
> String partitionStrategy = context.getProperty(PARTITION_STRATEGY).getValue();
> String partitionerClass = null;
> if (partitionStrategy.equalsIgnoreCase(ROUND_ROBIN_PARTITIONING.getValue())) {
>     partitionerClass = Partitioners.RoundRobinPartitioner.class.getName();
> } else if (partitionStrategy.equalsIgnoreCase(RANDOM_PARTITIONING.getValue())) {
>     partitionerClass = Partitioners.RandomPartitioner.class.getName();
> }
> properties.setProperty("partitioner.class", partitionerClass); // Happens here
> {code}
> A naive fix would be adding one more if statement so that the 
> 'partitioner.class' property is only set when partitionerClass is not null.
> However, while testing that fix I found the following facts, which revealed 
> that this approach wouldn't be the right solution for this issue.
> In short, we don't have to set the 'partitioner.class' property with the 
> Kafka 0.8.x client in the first place. I assume it's only there because 
> PutKafka has a long history.
> h2. PutKafka history analysis
> - PutKafka used to cover both Kafka 0.8 and 0.9.
> - Around the time Kafka 0.9 was released, PutKafka added 'partitioner.class' 
> via NIFI-1097 (start using the new API). There were two client libraries, 
> kafka-clients and kafka_2.9.1, both at 0.8.2.2.
> - Then PublishKafka was added for Kafka 0.9 (NIFI-1296). At that point we could 
> have added a 'partition' property to PublishKafka, but we didn't for some 
> reason; PublishKafka doesn't support user-defined partitions as of this writing.
> - The code adding 'partitioner.class' was left in PutKafka.
> - Further, we separated the NARs into 0.8, 0.9 and 0.10.
> - Now only PutKafka (0.8) sets the 'partitioner.class' property, but the 0.8 
> client doesn't use it, so we don't need that code at all.
> h2. Then, how should we fix this?
> Since PutKafka on both the master and 0.x branches specifically uses the Kafka 
> 0.8.x client, we can simply remove the code adding 'partitioner.class', and 
> probably the PARTITION_STRATEGY processor property, too.
> h2. Expected result after fix
> - Users can specify the Kafka partition with the PutKafka 'partition' property 
> without having to specify a 'partition strategy', and no NullPointerException 
> is thrown.
> - The following warning is no longer logged to nifi-app.log:
> {code}2017-01-18 13:53:33,071 WARN [Timer-Driven Process Thread-9] 
> o.a.k.clients.producer.ProducerConfig The configuration partitioner.class = 
> null was supplied but isn't a known config.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3363) PutKafka throws NullPointerException when User-Defined partition strategy is used

2017-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843174#comment-15843174
 ] 

ASF subversion and git services commented on NIFI-3363:
---

Commit 008bffd9cd1787295840b411f1498439265bc8c5 in nifi's branch refs/heads/0.x 
from [~ijokarumawak]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=008bffd ]

NIFI-3363: PutKafka NPE with User-Defined partition

- Marked the PutKafka Partition Strategy property as deprecated; since the Kafka 
0.8 client doesn't use 'partitioner.class' as a producer property, we don't have 
to specify it.
- Changed the Partition Strategy property from a required property to a dynamic 
property, so that existing processor configs stay in a valid state.
- Fixed the partition property to work.
- Route a flow file to failure if it fails to be published due to an invalid partition.


> PutKafka throws NullPointerException when User-Defined partition strategy is 
> used
> -
>
> Key: NIFI-3363
> URL: https://issues.apache.org/jira/browse/NIFI-3363
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 0.4.0, 0.5.0, 0.6.0, 0.4.1, 0.5.1, 0.7.0, 0.6.1, 
> 1.1.0, 0.7.1, 1.1.1, 1.0.1
> Environment: By looking at the release tags contained in a commit 
> that added the User-Defined partition strategy (NIFI-1097 22de23b), it seems all 
> NiFi versions since 0.4.0 are affected.
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> NullPointerException is thrown because PutKafka tries to put null into 
> properties, since the following if statements don't cover 
> USER_DEFINED_PARTITIONING.
> {code:title=PutKafka.java buildKafkaConfigProperties}
> String partitionStrategy = context.getProperty(PARTITION_STRATEGY).getValue();
> String partitionerClass = null;
> if (partitionStrategy.equalsIgnoreCase(ROUND_ROBIN_PARTITIONING.getValue())) {
>     partitionerClass = Partitioners.RoundRobinPartitioner.class.getName();
> } else if (partitionStrategy.equalsIgnoreCase(RANDOM_PARTITIONING.getValue())) {
>     partitionerClass = Partitioners.RandomPartitioner.class.getName();
> }
> properties.setProperty("partitioner.class", partitionerClass); // Happens here
> {code}
> A naive fix would be adding one more if statement so that the 
> 'partitioner.class' property is only set when partitionerClass is not null.
> However, while testing that fix I found the following facts, which revealed 
> that this approach wouldn't be the right solution for this issue.
> In short, we don't have to set the 'partitioner.class' property with the 
> Kafka 0.8.x client in the first place. I assume it's only there because 
> PutKafka has a long history.
> h2. PutKafka history analysis
> - PutKafka used to cover both Kafka 0.8 and 0.9.
> - Around the time Kafka 0.9 was released, PutKafka added 'partitioner.class' 
> via NIFI-1097 (start using the new API). There were two client libraries, 
> kafka-clients and kafka_2.9.1, both at 0.8.2.2.
> - Then PublishKafka was added for Kafka 0.9 (NIFI-1296). At that point we could 
> have added a 'partition' property to PublishKafka, but we didn't for some 
> reason; PublishKafka doesn't support user-defined partitions as of this writing.
> - The code adding 'partitioner.class' was left in PutKafka.
> - Further, we separated the NARs into 0.8, 0.9 and 0.10.
> - Now only PutKafka (0.8) sets the 'partitioner.class' property, but the 0.8 
> client doesn't use it, so we don't need that code at all.
> h2. Then, how should we fix this?
> Since PutKafka on both the master and 0.x branches specifically uses the Kafka 
> 0.8.x client, we can simply remove the code adding 'partitioner.class', and 
> probably the PARTITION_STRATEGY processor property, too.
> h2. Expected result after fix
> - Users can specify the Kafka partition with the PutKafka 'partition' property 
> without having to specify a 'partition strategy', and no NullPointerException 
> is thrown.
> - The following warning is no longer logged to nifi-app.log:
> {code}2017-01-18 13:53:33,071 WARN [Timer-Driven Process Thread-9] 
> o.a.k.clients.producer.ProducerConfig The configuration partitioner.class = 
> null was supplied but isn't a known config.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (NIFI-3055) StandardRecordWriter can throw UTFDataFormatException

2017-01-27 Thread Joe Skora (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Skora reassigned NIFI-3055:
---

Assignee: Joe Skora

> StandardRecordWriter can throw UTFDataFormatException
> -
>
> Key: NIFI-3055
> URL: https://issues.apache.org/jira/browse/NIFI-3055
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.7.1
>Reporter: Brandon DeVries
>Assignee: Joe Skora
>
> StandardRecordWriter.writeRecord()\[1] uses DataOutputStream.writeUTF()\[2] 
> without checking the length of the value to be written.  If this length is 
> greater than 65535 (2^16 - 1), you get a UTFDataFormatException "encoded 
> string too long..."\[3].  Ultimately, this can result in an 
> IllegalStateException\[4], -bringing a halt to the data flow- causing 
> PersistentProvenanceRepository "Unable to merge  with other 
> Journal Files due to..." WARNings.
> Several of the field values being written in this way are pre-defined, and 
> thus not likely an issue.  However, the "details" field can be populated by a 
> processor, and can be of arbitrary length.  -Additionally, if the details 
> field is indexed (which I didn't investigate, but I'm sure is easy enough to 
> determine), then the length might be subject to the Lucene limit discussed in 
> NIFI-2787-.
> \[1] 
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/StandardRecordWriter.java#L163-L173
> \[2] 
> http://docs.oracle.com/javase/7/docs/api/java/io/DataOutputStream.html#writeUTF%28java.lang.String%29
> \[3] 
> http://stackoverflow.com/questions/22741556/dataoutputstream-purpose-of-the-encoded-string-too-long-restriction
> \[4] 
> https://github.com/apache/nifi/blob/5fd4a55791da27fdba577636ac985a294618328a/nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/PersistentProvenanceRepository.java#L754-L755
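
The 64KB ceiling referenced above comes from writeUTF()'s two-byte length
prefix. A self-contained sketch of the failure and of one common alternative
(an int length followed by raw UTF-8 bytes) follows; it is illustrative only
and not the repository's actual serialization format.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UTFDataFormatException;
import java.nio.charset.StandardCharsets;

// Hedged sketch: demonstrates the writeUTF() 65535-byte limit and a length-
// prefixed alternative. Not the StandardRecordWriter format; illustration only.
public final class LongStringWriteSketch {

    // Alternative encoding: 4-byte length followed by raw UTF-8 bytes.
    static void writeLongString(final DataOutputStream out, final String value) throws IOException {
        final byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        out.writeInt(bytes.length);
        out.write(bytes);
    }

    static String readLongString(final DataInputStream in) throws IOException {
        final byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(final String[] args) throws IOException {
        final String details = new String(new char[70_000]).replace('\0', 'x'); // > 65535 bytes

        try (DataOutputStream out = new DataOutputStream(new ByteArrayOutputStream())) {
            out.writeUTF(details);                      // fails: encoded string too long
        } catch (final UTFDataFormatException e) {
            System.out.println("writeUTF failed as expected: " + e.getMessage());
        }

        final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buffer)) {
            writeLongString(out, details);              // no length restriction
        }
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer.toByteArray()))) {
            System.out.println(readLongString(in).length()); // prints: 70000
        }
    }
}
{code}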



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3291) UI - Upgrade jQuery

2017-01-27 Thread Scott Aslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan updated NIFI-3291:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> UI - Upgrade jQuery
> ---
>
> Key: NIFI-3291
> URL: https://issues.apache.org/jira/browse/NIFI-3291
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.2.0
>
>
> Also consider upgrading all jQuery plugins too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3291) UI - Upgrade jQuery

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843116#comment-15843116
 ] 

ASF GitHub Bot commented on NIFI-3291:
--

Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/1438
  
Thanks @mcgilman this has been merged to master.


> UI - Upgrade jQuery
> ---
>
> Key: NIFI-3291
> URL: https://issues.apache.org/jira/browse/NIFI-3291
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.2.0
>
>
> Also consider upgrading all jQuery plugins too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #1438: NIFI-3291: Continue addressing issues from jQuery upgrade

2017-01-27 Thread scottyaslan
Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/1438
  
Thanks @mcgilman this has been merged to master.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3291) UI - Upgrade jQuery

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843114#comment-15843114
 ] 

ASF GitHub Bot commented on NIFI-3291:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1438


> UI - Upgrade jQuery
> ---
>
> Key: NIFI-3291
> URL: https://issues.apache.org/jira/browse/NIFI-3291
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.2.0
>
>
> Also consider upgrading all jQuery plugins too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3291) UI - Upgrade jQuery

2017-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843113#comment-15843113
 ] 

ASF subversion and git services commented on NIFI-3291:
---

Commit f8f66fa22b10012759a56dd190b22e6b96f6c92c in nifi's branch 
refs/heads/master from [~mcgilman]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=f8f66fa ]

NIFI-3291:
- Removing dead code.

This closes #1438


> UI - Upgrade jQuery
> ---
>
> Key: NIFI-3291
> URL: https://issues.apache.org/jira/browse/NIFI-3291
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.2.0
>
>
> Also consider upgrading all jQuery plugins too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1438: NIFI-3291: Continue addressing issues from jQuery u...

2017-01-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1438


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3291) UI - Upgrade jQuery

2017-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843112#comment-15843112
 ] 

ASF subversion and git services commented on NIFI-3291:
---

Commit 0950186fbbc69575825da49d4f355314d2e2d178 in nifi's branch 
refs/heads/master from [~mcgilman]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=0950186 ]

NIFI-3291:
- Fixing overflow calculation to determine if scrollbars are necessary.
- Fixing styles with jquery ui slider usage.


> UI - Upgrade jQuery
> ---
>
> Key: NIFI-3291
> URL: https://issues.apache.org/jira/browse/NIFI-3291
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.2.0
>
>
> Also consider upgrading all jQuery plugins too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3291) UI - Upgrade jQuery

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843089#comment-15843089
 ] 

ASF GitHub Bot commented on NIFI-3291:
--

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1438#discussion_r98240235
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/provenance/nf-provenance-table.js
 ---
@@ -617,7 +617,7 @@
 
 // adjust the field width for a potential scrollbar
 var searchFieldContainer = 
$('#searchable-fields-container');
-if (searchFieldContainer.get(0).scrollHeight > 
searchFieldContainer.innerHeight()) {
+if (searchFieldContainer.get(0).scrollHeight > 
Math.round(searchFieldContainer.innerHeight())) {
--- End diff --

Yep, agreed. Will update.


> UI - Upgrade jQuery
> ---
>
> Key: NIFI-3291
> URL: https://issues.apache.org/jira/browse/NIFI-3291
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.2.0
>
>
> Also consider upgrading all jQuery plugins too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1438: NIFI-3291: Continue addressing issues from jQuery u...

2017-01-27 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1438#discussion_r98240235
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/provenance/nf-provenance-table.js
 ---
@@ -617,7 +617,7 @@
 
 // adjust the field width for a potential scrollbar
 var searchFieldContainer = 
$('#searchable-fields-container');
-if (searchFieldContainer.get(0).scrollHeight > 
searchFieldContainer.innerHeight()) {
+if (searchFieldContainer.get(0).scrollHeight > 
Math.round(searchFieldContainer.innerHeight())) {
--- End diff --

Yep, agreed. Will update.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3291) UI - Upgrade jQuery

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843085#comment-15843085
 ] 

ASF GitHub Bot commented on NIFI-3291:
--

Github user scottyaslan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1438#discussion_r98239800
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/provenance/nf-provenance-table.js
 ---
@@ -617,7 +617,7 @@
 
 // adjust the field width for a potential scrollbar
 var searchFieldContainer = 
$('#searchable-fields-container');
-if (searchFieldContainer.get(0).scrollHeight > 
searchFieldContainer.innerHeight()) {
+if (searchFieldContainer.get(0).scrollHeight > 
Math.round(searchFieldContainer.innerHeight())) {
--- End diff --

@mcgilman I think this whole block of code is dead. The widths of the 
inputs for the "Search Events" dialog of the "Data Provenance" shell's 
$('#searchable-fields-container') are defined by CSS (in provenance.css line 
#141) and the widths being applied with this code are causing those inputs not 
to fill their available space.


> UI - Upgrade jQuery
> ---
>
> Key: NIFI-3291
> URL: https://issues.apache.org/jira/browse/NIFI-3291
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.2.0
>
>
> Also consider upgrading all jQuery plugins too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1438: NIFI-3291: Continue addressing issues from jQuery u...

2017-01-27 Thread scottyaslan
Github user scottyaslan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1438#discussion_r98239800
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/provenance/nf-provenance-table.js
 ---
@@ -617,7 +617,7 @@
 
 // adjust the field width for a potential scrollbar
 var searchFieldContainer = 
$('#searchable-fields-container');
-if (searchFieldContainer.get(0).scrollHeight > 
searchFieldContainer.innerHeight()) {
+if (searchFieldContainer.get(0).scrollHeight > 
Math.round(searchFieldContainer.innerHeight())) {
--- End diff --

@mcgilman I think this whole block of code is dead. The widths of the 
inputs for the "Search Events" dialog of the "Data Provenance" shell's 
$('#searchable-fields-container') are defined by CSS (in provenance.css line 
#141) and the widths being applied with this code are causing those inputs not 
to fill their available space.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2017-01-27 Thread Joe Skora (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843084#comment-15843084
 ] 

Joe Skora commented on NIFI-2854:
-

[~markap14], Is there any reason that 
[StandardRecordReader|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/StandardRecordReader.java]
 was not deprecated by this change like 
[StandardRecordWriter|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/StandardRecordWriter.java]?


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well-documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not change their behavior otherwise. Changes are only permitted 
> during minor version releases.
> * Provenance repository changes are only permitted during minor version 
> releases.  These changes may include adding or removing fields from existing 
> event types.  If a field is considered required it must always be considered 
> required.  If a field is removed then it must not be a required field and 
> there must be a sensible default an older version could use if that value is 
> not found in new data once rolled back.  New event types may be added.  
> Fields or event types not known to older version, if seen after a rollback, 
> will simply be ignored.
> The following also would be true:
> * Apache NiFi 1.0.0 repositories should work just fine when applied to an 
> Apache NiFi 1.1.0 installation.
> * Repositories made/updated in Apache NiFi 1.1.0 onward would not work in 
> older Apache NiFi releases (such as 1.0.0)
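
As a loose illustration of the flow file repository rule above (fields may be
appended across minor versions, never removed), a record can carry an explicit
length so that an older reader skips trailing bytes it does not recognize. This
is a generic sketch, not NiFi's actual write-ahead log format.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Generic sketch of length-prefixed records: a newer writer appends a field and
// an older reader skips the bytes it does not understand. Illustrative only.
public final class ForwardCompatibleRecordSketch {

    // "Newer" writer: the original fields first, then a field appended later.
    static byte[] write(final String id, final long timestamp, final String appendedField) throws IOException {
        final ByteArrayOutputStream body = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(body)) {
            out.writeUTF(id);
            out.writeLong(timestamp);
            out.writeUTF(appendedField); // added in a later minor version
        }
        final ByteArrayOutputStream record = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(record)) {
            out.writeInt(body.size());   // length prefix lets old readers skip the rest
            body.writeTo(out);
        }
        return record.toByteArray();
    }

    // "Older" reader: knows only the original two fields and ignores the remainder.
    static void readAsOldVersion(final byte[] recordBytes) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(recordBytes))) {
            final byte[] body = new byte[in.readInt()];
            in.readFully(body);          // consume the whole record regardless of extra fields
            try (DataInputStream fields = new DataInputStream(new ByteArrayInputStream(body))) {
                System.out.println("id=" + fields.readUTF() + " timestamp=" + fields.readLong());
                // bytes appended by the newer writer are simply left unread
            }
        }
    }

    public static void main(final String[] args) throws IOException {
        readAsOldVersion(write("ff-1", 1485500000000L, "field the old reader never sees"));
    }
}
{code}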



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3411) Latest "NiFi" Status History data point should only be reported when complete

2017-01-27 Thread Joseph Gresock (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock updated NIFI-3411:
-
Attachment: Status History 2.png
Status History 1.png

Status History 1 shows the dip in the graph, and Status History 2 shows the dip 
resolved after I refreshed the view several times.  Also note the difference in 
average cluster metrics.

> Latest "NiFi" Status History data point should only be reported when complete
> -
>
> Key: NIFI-3411
> URL: https://issues.apache.org/jira/browse/NIFI-3411
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.1.1
>Reporter: Joseph Gresock
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: Status History 1.png, Status History 2.png
>
>
> In the Status History view on a NiFi cluster, the cluster's metrics (called 
> "NiFi") are reported as a sum of the individual nodes' metrics, and appear as 
> a blue line in the graph.  
> When the view is first displayed (or at any refresh), if any of the nodes 
> have not reported back their metrics, the most recent data point is often 
> misrepresented on the NiFi cluster line as the sum of only the nodes that 
> have reported their metrics.  Since this is a line graph, the difference 
> looks jarring, and can lead to the immediate interpretation that data has 
> stopped flowing with such a large drop in the graph.  In addition, the 
> summary metrics (avg/min/max) incorporate this incomplete cluster metric.  In 
> order to get an accurate average data rate, the user has to continue to 
> refresh the graph until all nodes have reported in the latest data point.
> I understand that this isn't a trivial fix (i.e., what do you do if one of 
> the nodes genuinely is disconnected?), but it feels like this user experience 
> could be improved.
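
A minimal sketch of the requested behavior, assuming the aggregator knows how
many connected nodes should report for a timestamp: only publish (or clearly
flag) the cluster-wide point once every node's snapshot has arrived. The names
below are illustrative, not NiFi's status history API.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: sum per-node metrics into a cluster data point only when all
// expected nodes have reported for that timestamp. Illustrative names only.
public final class ClusterStatusPointSketch {

    static Long aggregateIfComplete(final Map<String, Long> bytesReadByNode, final int expectedNodeCount) {
        if (bytesReadByNode.size() < expectedNodeCount) {
            return null; // incomplete: skip (or mark provisional) rather than plot a misleading dip
        }
        long sum = 0;
        for (final long nodeValue : bytesReadByNode.values()) {
            sum += nodeValue;
        }
        return sum;
    }

    public static void main(final String[] args) {
        final Map<String, Long> latest = new HashMap<>();
        latest.put("node-1", 120L);
        latest.put("node-2", 135L);
        System.out.println(aggregateIfComplete(latest, 3)); // null: node-3 has not reported yet
        latest.put("node-3", 110L);
        System.out.println(aggregateIfComplete(latest, 3)); // 365
    }
}
{code}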



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3411) Latest "NiFi" Status History data point should only be reported when complete

2017-01-27 Thread Joseph Gresock (JIRA)
Joseph Gresock created NIFI-3411:


 Summary: Latest "NiFi" Status History data point should only be 
reported when complete
 Key: NIFI-3411
 URL: https://issues.apache.org/jira/browse/NIFI-3411
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core UI
Affects Versions: 1.1.1
Reporter: Joseph Gresock
Priority: Minor
 Fix For: 1.2.0


In the Status History view on a NiFi cluster, the cluster's metrics (called 
"NiFi") are reported as a sum of the individual nodes' metrics, and appear as a 
blue line in the graph.  

When the view is first displayed (or at any refresh), if any of the nodes have 
not reported back their metrics, the most recent data point is often 
misrepresented on the NiFi cluster line as the sum of only the nodes that have 
reported their metrics.  Since this is a line graph, the difference looks 
jarring, and can lead to the immediate interpretation that data has stopped 
flowing with such a large drop in the graph.  In addition, the summary metrics 
(avg/min/max) incorporate this incomplete cluster metric.  In order to get an 
accurate average data rate, the user has to continue to refresh the graph until 
all nodes have reported in the latest data point.

I understand that this isn't a trivial fix (i.e., what do you do if one of the 
nodes genuinely is disconnected?), but it feels like this user experience could 
be improved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (NIFI-3410) Incorrect styles for Processor Details dialog when opened from Summary page.

2017-01-27 Thread Scott Aslan (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843012#comment-15843012
 ] 

Scott Aslan edited comment on NIFI-3410 at 1/27/17 3:41 PM:


Notice the text value for the "Automatically Terminate Relationships" in the 
"Summary Page Processor Details.png"


was (Author: scottyaslan):
Notice the text value for the "Automatically Terminate Relationships" 

> Incorrect styles for Processor Details dialog when opened from Summary page.
> 
>
> Key: NIFI-3410
> URL: https://issues.apache.org/jira/browse/NIFI-3410
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.1.0
>Reporter: Scott Aslan
>Priority: Trivial
> Attachments: Summary Page Processor Details.png, View Configuration 
> Processor Details.png
>
>
> Users can view the "Processor Details" dialog by selecting "View Configuration" 
> on a running processor, or they can open the Summary Shell and click the "info" 
> icon. The styles for these two dialogs should be the same, but the style of 
> the text for the "Automatically Terminate Relationships" "success" values does 
> not match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (NIFI-3410) Incorrect styles for Processor Details dialog when opened from Summary page.

2017-01-27 Thread Scott Aslan (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843010#comment-15843010
 ] 

Scott Aslan edited comment on NIFI-3410 at 1/27/17 3:40 PM:


The attachment "View Configuration Processor Details.png" is the correct 
styling for the "Setting" tab of the "Processor Details" dialog.


was (Author: scottyaslan):
This is the correct styling for the "Setting" tab of the "Processor Details" 
dialog.

> Incorrect styles for Processor Details dialog when opened from Summary page.
> 
>
> Key: NIFI-3410
> URL: https://issues.apache.org/jira/browse/NIFI-3410
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.1.0
>Reporter: Scott Aslan
>Priority: Trivial
> Attachments: Summary Page Processor Details.png, View Configuration 
> Processor Details.png
>
>
> Users can view the "Processor Details" dialog by selecting "View Configuration" 
> on a running processor, or they can open the Summary Shell and click the "info" 
> icon. The styles for these two dialogs should be the same, but the style of 
> the text for the "Automatically Terminate Relationships" "success" values does 
> not match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3410) Incorrect styles for Processor Details dialog when opened from Summary page.

2017-01-27 Thread Scott Aslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan updated NIFI-3410:
--
Attachment: Summary Page Processor Details.png

Notice the text value for the "Automatically Terminate Relationships" 

> Incorrect styles for Processor Details dialog when opened from Summary page.
> 
>
> Key: NIFI-3410
> URL: https://issues.apache.org/jira/browse/NIFI-3410
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.1.0
>Reporter: Scott Aslan
>Priority: Trivial
> Attachments: Summary Page Processor Details.png, View Configuration 
> Processor Details.png
>
>
> Users can view the "Processor Details" dialog by selecting "View Configuration" 
> on a running processor, or they can open the Summary Shell and click the "info" 
> icon. The styles for these two dialogs should be the same, but the style of 
> the text for the "Automatically Terminate Relationships" "success" values does 
> not match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3410) Incorrect styles for Processor Details dialog when opened from Summary page.

2017-01-27 Thread Scott Aslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan updated NIFI-3410:
--
Attachment: View Configuration Processor Details.png

This is the correct styling for the "Setting" tab of the "Processor Details" 
dialog.

> Incorrect styles for Processor Details dialog when opened from Summary page.
> 
>
> Key: NIFI-3410
> URL: https://issues.apache.org/jira/browse/NIFI-3410
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.1.0
>Reporter: Scott Aslan
>Priority: Trivial
> Attachments: View Configuration Processor Details.png
>
>
> Users can view the "Processor Details" dialog by selecting "View Configuration" 
> on a running processor, or they can open the Summary Shell and click the "info" 
> icon. The styles for these two dialogs should be the same, but the style of 
> the text for the "Automatically Terminate Relationships" "success" values does 
> not match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3410) Incorrect styles for Processor Details dialog when opened from Summary page.

2017-01-27 Thread Scott Aslan (JIRA)
Scott Aslan created NIFI-3410:
-

 Summary: Incorrect styles for Processor Details dialog when opened 
from Summary page.
 Key: NIFI-3410
 URL: https://issues.apache.org/jira/browse/NIFI-3410
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Affects Versions: 1.1.0
Reporter: Scott Aslan
Priority: Trivial


Users can view the "Processor Details" dialog by selecting "View Configuration" on a 
running processor, or they can open the Summary Shell and click the "info" icon. 
The styles for these two dialogs should be the same, but the style of the text 
for the "Automatically Terminate Relationships" "success" values does not match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3409) Batch users/groups import - LDAP

2017-01-27 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-3409:


 Summary: Batch users/groups import - LDAP
 Key: NIFI-3409
 URL: https://issues.apache.org/jira/browse/NIFI-3409
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Core Framework, Core UI
Reporter: Pierre Villard
Assignee: Pierre Villard


Creating the sub task to answer:

{quote}
Batch user import
* Whether the users are providing client certificates, LDAP credentials, or 
Kerberos tickets to authenticate, the canonical source of identity is still 
managed by NiFi. I propose a mechanism to quickly define multiple users in the 
system (without affording any policy assignments). Here I am looking for 
substantial community input on the most common/desired use cases, but my 
initial thoughts are:
** LDAP-specific
*** A manager DN and password (similar to necessary for LDAP authentication) 
are used to authenticate the admin/user manager, and then a LDAP query string 
(i.e. {{ou=users,dc=nifi,dc=apache,dc=org}}) is provided and the dialog 
displays/API returns a list of users/groups matching the query. The admin can 
then select which to import to NiFi and confirm. 
{quote}

In particular, the initial implementation would add a feature allowing users and 
groups to be synced with LDAP, based on additional parameters given in the 
login identity provider configuration file and custom filters provided by the 
user through the UI.

Deleting users/groups that exist in NiFi but are not returned by LDAP is not 
foreseen; the feature would only create/update users/groups based on what is in 
the LDAP server.

The feature would be exposed through a new REST API endpoint. If another 
identity provider (not LDAP) is configured, an unsupported operation exception 
would be returned for now.
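
For the LDAP-specific flow, a bare-bones JNDI sketch of the manager-DN bind and
user search described above; the URL, base DN, filter, and object class are
assumptions for illustration, not the eventual login identity provider
configuration.

{code:java}
import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;

import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

// Hedged sketch: bind with a manager DN/password and list candidate users under
// a base DN, as a batch-import dialog might. All names/values are assumptions.
public final class LdapUserListingSketch {

    static List<String> listUserDns(final String url, final String managerDn, final String managerPassword,
                                    final String baseDn, final String filter) throws Exception {
        final Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, managerDn);
        env.put(Context.SECURITY_CREDENTIALS, managerPassword);

        final DirContext ctx = new InitialDirContext(env);
        try {
            final SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            final List<String> dns = new ArrayList<>();
            final NamingEnumeration<SearchResult> results = ctx.search(baseDn, filter, controls);
            while (results.hasMore()) {
                dns.add(results.next().getNameInNamespace());
            }
            return dns;
        } finally {
            ctx.close();
        }
    }

    public static void main(final String[] args) throws Exception {
        // Example values mirroring the query string in the proposal above.
        final List<String> users = listUserDns("ldap://localhost:389",
                "cn=manager,dc=nifi,dc=apache,dc=org", "password",
                "ou=users,dc=nifi,dc=apache,dc=org", "(objectClass=inetOrgPerson)");
        users.forEach(System.out::println);
    }
}
{code}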



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3408) Add an attribute for failure reason to InvokeHTTP in all cases to match docs

2017-01-27 Thread Joseph Percivall (JIRA)
Joseph Percivall created NIFI-3408:
--

 Summary: Add an attribute for failure reason to InvokeHTTP in all 
cases to match docs
 Key: NIFI-3408
 URL: https://issues.apache.org/jira/browse/NIFI-3408
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Joseph Percivall
Priority: Minor


Originally brought up on the mailing list[1], the InvokeHTTP docs state:

```
The original FlowFile will be routed on any type of connection failure, timeout 
or general exception. It will have new attributes detailing the request.
```

Currently, though, attributes are only added when a response is received from 
the server, so no attributes are set if it fails to connect or the socket times 
out.

These attributes should be added.

[1] 
http://apache-nifi-users-list.2361937.n4.nabble.com/InvokeHTTP-and-SocketTimeoutException-td743.html
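
A small sketch of the shape such a change could take: build request/exception
attributes even when no response arrives, then apply them to the original flow
file before routing it to failure. The attribute names here are assumptions,
not InvokeHTTP's documented set.

{code:java}
import java.net.SocketTimeoutException;
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: build the "request detail" attributes that could be put on the
// original flow file when no response was received. Attribute names are
// assumptions for illustration, not InvokeHTTP's documented ones.
public final class FailureAttributeSketch {

    static Map<String, String> failureAttributes(final String url, final Exception cause) {
        final Map<String, String> attrs = new HashMap<>();
        attrs.put("invokehttp.request.url", url);
        attrs.put("invokehttp.java.exception.class", cause.getClass().getName());
        attrs.put("invokehttp.java.exception.message", String.valueOf(cause.getMessage()));
        return attrs;
    }

    public static void main(final String[] args) {
        // In a processor these would be applied with session.putAllAttributes(flowFile, attrs)
        // before transferring the original flow file to the failure relationship.
        System.out.println(failureAttributes("http://example.com/api",
                new SocketTimeoutException("connect timed out")));
    }
}
{code}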



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1847) improve provenance space utilization

2017-01-27 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842946#comment-15842946
 ] 

Mark Payne commented on NIFI-1847:
--

I have no problem with this proposal, except for the wording "recommend the 
max size be changed to a percentage". I would not want to *change* how it 
works but rather give the user the option of choosing one or the other by 
introducing a new property (nifi.provenance.repository.max.storage.size would 
stay, but nifi.provenance.repository.max.storage.percentage would also be added).

> improve provenance space utilization
> 
>
> Key: NIFI-1847
> URL: https://issues.apache.org/jira/browse/NIFI-1847
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.5.1
>Reporter: Ben Icore
>Assignee: Joe Skora
>
> Currently the max storage size of the provenance repo is specified in bytes.  
> This is OK if there is a single provenance repo.  If multiple repos are 
> specified, the space can be significantly under-utilized.
> Consider the following example:
> repo 1 has 500GB of space
> repo 2 has 500GB of space
> The max storage size would likely be set at 900GB, since the combined space is 
> 1TB.  900GB seems like a "safe" value, because provenance information is 
> generally striped evenly across the repos, however this is not guaranteed.  
> With the max size considerably larger than the size of any given partition, 
> any given partition could easily reach 100%.
> The only safe way to prevent a given partition in the above example from 
> filling is to set the max size at, say, 450GB; however this caps the entire 
> provenance repo at 450GB, effectively rendering 550GB of disk space unusable.
> If the repos were of uneven size, say
> repo 1 has 700GB of space
> repo 2 has 300GB of space
> you would have the same 1TB of provenance space, but the individual repos are 
> uneven, so a 900GB max would definitely cause repo 2 to run out of disk space.  
> The only way to ensure that repo 2 did not run out of disk space would be to 
> set the max size to 250GB, effectively losing 750GB of disk space.
> I recommend the max size be changed to a percentage and applied to the 
> individual repos.  Provenance records should still be distributed as evenly 
> as possible, but if one repo has exceeded its max, information would be written 
> to the others.
> So in example 1:
> repo 1 has 500GB of space
> repo 2 has 500GB of space
> max space is 90%
> effective and "usable" repo space would be 900GB
> And in example 2:
> repo 1 has 700GB of space
> repo 2 has 300GB of space
> max space is 90%
> effective and "usable" repo space would be 900GB



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3291) UI - Upgrade jQuery

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842947#comment-15842947
 ] 

ASF GitHub Bot commented on NIFI-3291:
--

Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/1438
  
Reviewing


> UI - Upgrade jQuery
> ---
>
> Key: NIFI-3291
> URL: https://issues.apache.org/jira/browse/NIFI-3291
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.2.0
>
>
> Also consider upgrading all jQuery plugins too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #1438: NIFI-3291: Continue addressing issues from jQuery upgrade

2017-01-27 Thread scottyaslan
Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/1438
  
Reviewing


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3363) PutKafka throws NullPointerException when User-Defined partition strategy is used

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842939#comment-15842939
 ] 

ASF GitHub Bot commented on NIFI-3363:
--

Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/1425
  
Tested with Kafka 0.8 broker, looks good, will merge once build completes


> PutKafka throws NullPointerException when User-Defined partition strategy is 
> used
> -
>
> Key: NIFI-3363
> URL: https://issues.apache.org/jira/browse/NIFI-3363
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 0.4.0, 0.5.0, 0.6.0, 0.4.1, 0.5.1, 0.7.0, 0.6.1, 
> 1.1.0, 0.7.1, 1.1.1, 1.0.1
> Environment: By looking at the release tags contained in a commit 
> that added the User-Defined partition strategy (NIFI-1097 22de23b), it seems all 
> NiFi versions since 0.4.0 are affected.
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> NullPointerException is thrown because PutKafka tries to put null into 
> properties, since the following if statements don't cover 
> USER_DEFINED_PARTITIONING.
> {code:title=PutKafka.java buildKafkaConfigProperties}
> String partitionStrategy = context.getProperty(PARTITION_STRATEGY).getValue();
> String partitionerClass = null;
> if (partitionStrategy.equalsIgnoreCase(ROUND_ROBIN_PARTITIONING.getValue())) {
>     partitionerClass = Partitioners.RoundRobinPartitioner.class.getName();
> } else if (partitionStrategy.equalsIgnoreCase(RANDOM_PARTITIONING.getValue())) {
>     partitionerClass = Partitioners.RandomPartitioner.class.getName();
> }
> properties.setProperty("partitioner.class", partitionerClass); // Happens here
> {code}
> A naive fix would be adding one more if statement so that the 
> 'partitioner.class' property is only set when partitionerClass is not null.
> However, while testing that fix I found the following facts, which revealed 
> that this approach wouldn't be the right solution for this issue.
> In short, we don't have to set the 'partitioner.class' property with the 
> Kafka 0.8.x client in the first place. I assume it's only there because 
> PutKafka has a long history.
> h2. PutKafka history analysis
> - PutKafka used to cover both Kafka 0.8 and 0.9.
> - Around the time Kafka 0.9 was released, PutKafka added 'partitioner.class' 
> via NIFI-1097 (start using the new API). There were two client libraries, 
> kafka-clients and kafka_2.9.1, both at 0.8.2.2.
> - Then PublishKafka was added for Kafka 0.9 (NIFI-1296). At that point we could 
> have added a 'partition' property to PublishKafka, but we didn't for some 
> reason; PublishKafka doesn't support user-defined partitions as of this writing.
> - The code adding 'partitioner.class' was left in PutKafka.
> - Further, we separated the NARs into 0.8, 0.9 and 0.10.
> - Now only PutKafka (0.8) sets the 'partitioner.class' property, but the 0.8 
> client doesn't use it, so we don't need that code at all.
> h2. Then, how should we fix this?
> Since PutKafka on both the master and 0.x branches specifically uses the Kafka 
> 0.8.x client, we can simply remove the code adding 'partitioner.class', and 
> probably the PARTITION_STRATEGY processor property, too.
> h2. Expected result after fix
> - Users can specify the Kafka partition with the PutKafka 'partition' property 
> without having to specify a 'partition strategy', and no NullPointerException 
> is thrown.
> - The following warning is no longer logged to nifi-app.log:
> {code}2017-01-18 13:53:33,071 WARN [Timer-Driven Process Thread-9] 
> o.a.k.clients.producer.ProducerConfig The configuration partitioner.class = 
> null was supplied but isn't a known config.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #1425: NIFI-3363: PutKafka NPE with User-Defined partition

2017-01-27 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/1425
  
Tested with Kafka 0.8 broker, looks good, will merge once build completes


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2881) Allow Database Fetch processor(s) to accept incoming flow files and use Expression Language

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842378#comment-15842378
 ] 

ASF GitHub Bot commented on NIFI-2881:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1407
  
@mattyb149 Thanks for the update. Sorry for the late response, somehow the 
notification slipped through my attention. I'm going to review it and 
comment soon!


> Allow Database Fetch processor(s) to accept incoming flow files and use 
> Expression Language
> ---
>
> Key: NIFI-2881
> URL: https://issues.apache.org/jira/browse/NIFI-2881
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> The QueryDatabaseTable and GenerateTableFetch processors do not allow 
> Expression Language to be used in the properties, mainly because they also do 
> not allow incoming connections. This means if the user desires to fetch from 
> multiple tables, they currently need one instance of the processor for each 
> table, and those table names must be hard-coded.
> To support the same capabilities for multiple tables and more flexible 
> configuration via Expression Language, these processors should have 
> properties that accept Expression Language, and GenerateTableFetch should 
> accept (optional) incoming connections.
> Conversation about the behavior of the processors is welcomed and encouraged. 
> For example, if an incoming flow file is available, do we also still run the 
> incremental fetch logic for tables that aren't specified by this flow file, 
> or do we only do incremental fetching when the processor is scheduled but 
> there is no incoming flow file? The latter implies a denial of service could 
> take place, by flooding the processor with flow files and not letting it do 
> its original job of querying the table, keeping track of maximum values, etc.
> This is likely a breaking change to the processors because of how state 
> management is implemented. Currently, since the table name is hard-coded, only 
> the column name comprises the key in the state. This would have to be 
> extended to a compound key that represents the table name, max-value column 
> name, etc.
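
On the state management point, a sketch of the compound key the description
anticipates (table name plus max-value column), where table names could be
resolved from Expression Language per flow file; the delimiter and key layout
are hypothetical, not the processors' actual state format.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: track max-value columns per (table, column) pair so several
// tables can share one processor's state. Key layout is hypothetical, and the
// string comparison below is a simplification of real max-value tracking.
public final class CompoundStateKeySketch {

    private final Map<String, String> maxValues = new HashMap<>();

    static String stateKey(final String tableName, final String maxValueColumn) {
        // A delimiter unlikely to appear in identifiers; purely illustrative.
        return tableName.toLowerCase() + "@!@" + maxValueColumn.toLowerCase();
    }

    void updateMaxValue(final String tableName, final String column, final String observedValue) {
        maxValues.merge(stateKey(tableName, column), observedValue,
                (oldVal, newVal) -> newVal.compareTo(oldVal) > 0 ? newVal : oldVal);
    }

    public static void main(final String[] args) {
        final CompoundStateKeySketch state = new CompoundStateKeySketch();
        // Table names could come from Expression Language against the incoming flow file.
        state.updateMaxValue("orders", "order_id", "1500");
        state.updateMaxValue("customers", "customer_id", "42");
        state.updateMaxValue("orders", "order_id", "1750");
        System.out.println(state.maxValues); // both entries, with orders' max bumped to 1750
    }
}
{code}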



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #1407: NIFI-2881: Added EL support to DB Fetch processors, allow ...

2017-01-27 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1407
  
@mattyb149 Thanks for the update. Sorry for the late response, somehow the 
notification slipped through my attention. I'm going to review it and 
comment soon!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---