[jira] [Commented] (NIFI-5238) Capital "F" in Jolt Transformation moves caret

2018-11-14 Thread Anna Vergeles (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687611#comment-16687611
 ] 

Anna Vergeles commented on NIFI-5238:
-

Guys, thanks for the help, but I actually know these possible workarounds. :)

This ticket is for someone with steadier hands than mine to make the changes in the 
NiFi code that fix this.

> Capital "F" in Jolt Transformation moves caret
> --
>
> Key: NIFI-5238
> URL: https://issues.apache.org/jira/browse/NIFI-5238
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Anna Vergeles
>Priority: Minor
> Attachments: nifiFbug.png
>
>
> When the Jolt Transformation component is in Advanced mode in the web UI, it is 
> impossible to enter the capital letter "F" into the "Jolt Specification" field – this 
> results in placing the cursor at the beginning of the text field.
> !nifiFbug.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4957) Enable JoltTransformJSON to pickup a Jolt Spec file from a file location

2018-11-14 Thread Koji Kawamura (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687537#comment-16687537
 ] 

Koji Kawamura commented on NIFI-4957:
-

Hi [~rahst12], I've added you as a Contributor to NiFi JIRA. Now you can assign 
yourself to any NiFi JIRA ticket. Please give it a try. Looking forward to seeing 
your contribution! Thanks

> Enable JoltTransformJSON to pickup a Jolt Spec file from a file location
> 
>
> Key: NIFI-4957
> URL: https://issues.apache.org/jira/browse/NIFI-4957
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Ryan Hendrickson
>Priority: Minor
> Attachments: image-2018-03-09-23-56-43-912.png
>
>
> Add a property to allow the Jolt Spec to be read from a file on disk and/or 
> the classpath.
> !image-2018-03-09-23-56-43-912.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5652) LogMessage emits "transfer relationship not specified" error

2018-11-14 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687530#comment-16687530
 ] 

ASF subversion and git services commented on NIFI-5652:
---

Commit 13011ac6d61961ecd3f3524b1e0dfda382ed4dd6 in nifi's branch 
refs/heads/master from [~ca9mbu]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=13011ac ]

NIFI-5652: Fixed LogMessage when logging level is disabled

This closes #3170.

Signed-off-by: Koji Kawamura 
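
A minimal, hypothetical sketch of the behavior the commit title implies (this is not 
the actual patch; the relationship and the level check are assumptions): the incoming 
FlowFile has to be transferred even when the configured log level is disabled, 
otherwise the framework raises the FlowFileHandlingException quoted below.

{code:java}
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.logging.ComponentLog;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;

// Hypothetical fragment, not the real LogMessage implementation.
class LogMessageSketch {
    static final Relationship REL_SUCCESS = new Relationship.Builder().name("success").build();

    void onTrigger(final ProcessSession session, final ComponentLog logger) {
        final FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;
        }
        if (logger.isInfoEnabled()) {                // assumed check for the configured level
            logger.info("log message elided");
        }
        session.transfer(flowFile, REL_SUCCESS);     // transfer even when logging was skipped
    }
}
{code}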


> LogMessage emits "transfer relationship not specified" error
> 
>
> Key: NIFI-5652
> URL: https://issues.apache.org/jira/browse/NIFI-5652
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
> Environment: 3-node cluster of CentOS 7.4 servers; JRE 1.8
>Reporter: Takeshi Koya
>Assignee: Matt Burgess
>Priority: Minor
>
> Hi, all NiFi developers and users.
> I upgraded our staging cluster from 1.5 to 1.7.1 recently, 
>  but I am having trouble with LogMessage errors.
> Setting a bulletin level that is higher than the log level seems to produce the 
> following "transfer relationship not specified" error.
> {code:java}
> 2018-10-02 00:00:00,442 ERROR [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.standard.LogMessage LogMessage[id=...] LogMessage[id=...] 
> failed to process session due to 
> org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified; Processor Administratively Yielded for 
> 1 sec: org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified
> {code}
> The errors disappear if we make the bulletin level equal to or lower than the log 
> level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5652) LogMessage emits "transfer relationship not specified" error

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687531#comment-16687531
 ] 

ASF GitHub Bot commented on NIFI-5652:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3170


> LogMessage emits "transfer relationship not specified" error
> 
>
> Key: NIFI-5652
> URL: https://issues.apache.org/jira/browse/NIFI-5652
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
> Environment: 3-node cluster of CentOS 7.4 servers; JRE 1.8
>Reporter: Takeshi Koya
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.9.0
>
>
> Hi, all NiFi developers and users.
> I upgraded our staging cluster from 1.5 to 1.7.1 recently, 
>  but I am having trouble with LogMessage errors.
> Setting a bulletin level that is higher than the log level seems to produce the 
> following "transfer relationship not specified" error.
> {code:java}
> 2018-10-02 00:00:00,442 ERROR [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.standard.LogMessage LogMessage[id=...] LogMessage[id=...] 
> failed to process session due to 
> org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified; Processor Administratively Yielded for 
> 1 sec: org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified
> {code}
> The errors disappear if we make the bulletin level equal to or lower than the log 
> level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5652) LogMessage emits "transfer relationship not specified" error

2018-11-14 Thread Koji Kawamura (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-5652:

Fix Version/s: 1.9.0

> LogMessage emits "transfer relationship not specified" error
> 
>
> Key: NIFI-5652
> URL: https://issues.apache.org/jira/browse/NIFI-5652
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
> Environment: 3-node cluster of CentOS 7.4 servers; JRE 1.8
>Reporter: Takeshi Koya
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.9.0
>
>
> Hi, all NiFi developers and users.
> I upgraded our staging cluster from 1.5 to 1.7.1 recently, 
>  but I am having trouble with LogMessage errors.
> Setting a bulletin level that is higher than the log level seems to produce the 
> following "transfer relationship not specified" error.
> {code:java}
> 2018-10-02 00:00:00,442 ERROR [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.standard.LogMessage LogMessage[id=...] LogMessage[id=...] 
> failed to process session due to 
> org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified; Processor Administratively Yielded for 
> 1 sec: org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified
> {code}
> The errors disappear if we make the bulletin level equal to or lower than the log 
> level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5652) LogMessage emits "transfer relationship not specified" error

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687528#comment-16687528
 ] 

ASF GitHub Bot commented on NIFI-5652:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/3170
  
LGTM +1, merging. Thanks @mattyb149!


> LogMessage emits "transfer relationship not specified" error
> 
>
> Key: NIFI-5652
> URL: https://issues.apache.org/jira/browse/NIFI-5652
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
> Environment: 3-node cluster of CentOS 7.4 servers; JRE 1.8
>Reporter: Takeshi Koya
>Assignee: Matt Burgess
>Priority: Minor
>
> Hi, all NiFi developers and users.
> I upgraded our staging cluster from 1.5 to 1.7.1 recently, 
>  but I am having trouble with LogMessage errors.
> Setting a bulletin level that is higher than the log level seems to produce the 
> following "transfer relationship not specified" error.
> {code:java}
> 2018-10-02 00:00:00,442 ERROR [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.standard.LogMessage LogMessage[id=...] LogMessage[id=...] 
> failed to process session due to 
> org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified; Processor Administratively Yielded for 
> 1 sec: org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified
> {code}
> The errors disappear if we make the bulletin level equal to or lower than the log 
> level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5652) LogMessage emits "transfer relationship not specified" error

2018-11-14 Thread Koji Kawamura (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-5652:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> LogMessage emits "transfer relationship not specified" error
> 
>
> Key: NIFI-5652
> URL: https://issues.apache.org/jira/browse/NIFI-5652
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
> Environment: 3-node cluster of CentOS 7.4 servers; JRE 1.8
>Reporter: Takeshi Koya
>Assignee: Matt Burgess
>Priority: Minor
>
> Hi, all NiFi developers and users.
> I upgraded our staging cluster from 1.5 to 1.7.1 recently, 
>  but I am having trouble with LogMessage errors.
> Setting a bulletin level that is higher than the log level seems to produce the 
> following "transfer relationship not specified" error.
> {code:java}
> 2018-10-02 00:00:00,442 ERROR [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.standard.LogMessage LogMessage[id=...] LogMessage[id=...] 
> failed to process session due to 
> org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified; Processor Administratively Yielded for 
> 1 sec: org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified
> {code}
> The errors disappear if we make the bulletin level equal to or lower than the log 
> level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3170: NIFI-5652: Fixed LogMessage when logging level is d...

2018-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3170


---


[GitHub] nifi issue #3170: NIFI-5652: Fixed LogMessage when logging level is disabled

2018-11-14 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/3170
  
LGTM +1, merging. Thanks @mattyb149!


---


[jira] [Created] (NIFI-5821) ExecuteScript should say Python is really Jython running

2018-11-14 Thread Ryan Hendrickson (JIRA)
Ryan Hendrickson created NIFI-5821:
--

 Summary: ExecuteScript should say Python is really Jython running
 Key: NIFI-5821
 URL: https://issues.apache.org/jira/browse/NIFI-5821
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.7.1, 1.8.0, 1.7.0, 1.6.0, 1.5.0
Reporter: Ryan Hendrickson
 Attachments: image-2018-11-15-00-37-05-004.png

Code executed in the ExecuteScript processor when Python is selected is actually 
run by Jython. This should be made far clearer in the UI where the user selects the 
Script Language. The only place Jython is referenced is in the tags for the 
processor, which also reference Python.

Jython's datetime.datetime is not handled the same way as Python's 
datetime.datetime, because Jython maps datetime back to Java objects. This can 
cause plenty of issues and require Python code to be modified into 
Jython-supported code.

Additionally, there are probably benefits that can be taken advantage of if you 
know it is actually Jython rather than Python.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5238) Capital "F" in Jolt Transformation moves caret

2018-11-14 Thread Ryan Hendrickson (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687490#comment-16687490
 ] 

Ryan Hendrickson commented on NIFI-5238:


You can also press the Caps Lock key, after which "F" can be entered.

> Capital "F" in Jolt Transformation moves caret
> --
>
> Key: NIFI-5238
> URL: https://issues.apache.org/jira/browse/NIFI-5238
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Anna Vergeles
>Priority: Minor
> Attachments: nifiFbug.png
>
>
> When the Jolt Transformation component is in Advanced mode in the web UI, it is 
> impossible to enter the capital letter "F" into the "Jolt Specification" field – this 
> results in placing the cursor at the beginning of the text field.
> !nifiFbug.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4957) Enable JoltTransformJSON to pickup a Jolt Spec file from a file location

2018-11-14 Thread Ryan Hendrickson (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687487#comment-16687487
 ] 

Ryan Hendrickson commented on NIFI-4957:


I'm actively working on this now. I'm not sure of the procedure for 'claiming' a 
ticket, but I'd be happy to put in a pull request for this in a little bit. I 
grabbed a fork from GitHub.
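
Not to preempt the PR, here is a minimal sketch of one possible approach, assuming 
the Jolt library's Chainr/JsonUtils API; the file path and class name are made-up 
examples, and in the processor the path would come from a new property:

{code:java}
import com.bazaarvoice.jolt.Chainr;
import com.bazaarvoice.jolt.JsonUtils;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class JoltSpecFromFileSketch {
    public static void main(String[] args) throws Exception {
        // Read the Jolt spec from disk instead of the "Jolt Specification" text field.
        final String specJson = new String(
                Files.readAllBytes(Paths.get("/opt/nifi/specs/shift-spec.json")),
                StandardCharsets.UTF_8);
        // A Chainr spec is a JSON array of operations.
        final List<Object> spec = JsonUtils.jsonToList(specJson);
        final Chainr chainr = Chainr.fromSpec(spec);
        final Object output = chainr.transform(JsonUtils.jsonToMap("{\"a\": 1}"));
        System.out.println(JsonUtils.toJsonString(output));
    }
}
{code}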

> Enable JoltTransformJSON to pickup a Jolt Spec file from a file location
> 
>
> Key: NIFI-4957
> URL: https://issues.apache.org/jira/browse/NIFI-4957
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Ryan Hendrickson
>Priority: Minor
> Attachments: image-2018-03-09-23-56-43-912.png
>
>
> Add a property to allow the Jolt Spec to be read from a file on disk and/or 
> the classpath.
> !image-2018-03-09-23-56-43-912.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-675) Parsed headers should not be searched case insensitively

2018-11-14 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated MINIFICPP-675:
--
Fix Version/s: 0.6.0

> Parsed headers should not be searched case insensitively 
> -
>
> Key: MINIFICPP-675
> URL: https://issues.apache.org/jira/browse/MINIFICPP-675
> Project: NiFi MiNiFi C++
>  Issue Type: Documentation
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> Parsed headers should not be searched case insensitively 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-675) Parsed headers should not be searched case insensitively

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687144#comment-16687144
 ] 

ASF GitHub Bot commented on MINIFICPP-675:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/438


> Parsed headers should not be searched case insensitively 
> -
>
> Key: MINIFICPP-675
> URL: https://issues.apache.org/jira/browse/MINIFICPP-675
> Project: NiFi MiNiFi C++
>  Issue Type: Documentation
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> Parsed headers should not be searched case insensitively 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-675) Parsed headers should not be searched case insensitively

2018-11-14 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated MINIFICPP-675:
--
Issue Type: Bug  (was: Documentation)

> Parsed headers should not be searched case insensitively 
> -
>
> Key: MINIFICPP-675
> URL: https://issues.apache.org/jira/browse/MINIFICPP-675
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> Parsed headers should not be searched case insensitively 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-668) RPG does not correctly account for a non-port URI

2018-11-14 Thread Aldrin Piri (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687146#comment-16687146
 ] 

Aldrin Piri commented on MINIFICPP-668:
---

Resolved via 1257e529170bd5b8f857c1e911b6c77b031e0cb6


> RPG does not correctly account for a non-port URI
> -
>
> Key: MINIFICPP-668
> URL: https://issues.apache.org/jira/browse/MINIFICPP-668
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Critical
> Fix For: 0.6.0
>
>
> The utility to parse URLs allows there to be no port set; however, the RPG 
> code that does the peer lookup in the agent does not correctly deal with this 
> scenario and injects a port "0" into the URL, which is invalid. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-668) RPG does not correctly account for a non-port URI

2018-11-14 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved MINIFICPP-668.
---
   Resolution: Fixed
Fix Version/s: 0.6.0

> RPG does not correctly account for a non-port URI
> -
>
> Key: MINIFICPP-668
> URL: https://issues.apache.org/jira/browse/MINIFICPP-668
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Critical
> Fix For: 0.6.0
>
>
> The utility to parse URLs allows there to be no port set; however, the RPG 
> code that does the peer lookup in the agent does not correctly deal with this 
> scenario and injects a port "0" into the URL, which is invalid. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-675) Parsed headers should not be searched case insensitively

2018-11-14 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved MINIFICPP-675.
---
Resolution: Fixed

> Parsed headers should not be searched case insensitively 
> -
>
> Key: MINIFICPP-675
> URL: https://issues.apache.org/jira/browse/MINIFICPP-675
> Project: NiFi MiNiFi C++
>  Issue Type: Documentation
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> Parsed headers should not be searched case insensitively 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #438: MINIFICPP-675: Fix issue with hearder eva...

2018-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/438


---


[jira] [Commented] (MINIFICPP-675) Parsed headers should not be searched case insensitively

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687140#comment-16687140
 ] 

ASF GitHub Bot commented on MINIFICPP-675:
--

Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/438#discussion_r233611389
  
--- Diff: extensions/http-curl/client/HTTPClient.h ---
@@ -147,6 +146,24 @@ class HTTPClient : public BaseHTTPClient, public 
core::Connectable {
 return header_response_.header_mapping_;
   }
 
+  /**
+   * Locates the header value ignoring case. This is differente than 
returning a mapping
--- End diff --

differente -> different


> Parsed headers should not be searched case insensitively 
> -
>
> Key: MINIFICPP-675
> URL: https://issues.apache.org/jira/browse/MINIFICPP-675
> Project: NiFi MiNiFi C++
>  Issue Type: Documentation
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>
> Parsed headers should not be searched case insensitively 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-675) Parsed headers should not be searched case insensitively

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687141#comment-16687141
 ] 

ASF GitHub Bot commented on MINIFICPP-675:
--

Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/438#discussion_r233613094
  
--- Diff: extensions/http-curl/client/HTTPClient.h ---
@@ -147,6 +146,24 @@ class HTTPClient : public BaseHTTPClient, public 
core::Connectable {
 return header_response_.header_mapping_;
   }
 
+  /**
+   * Locates the header value ignoring case. This is differente than 
returning a mapping
+   * of all parsed headers.
+   * This function acknowledges that header entries should searched case 
insensitively.
--- End diff --

be searched


> Parsed headers should not be searched case insensitively 
> -
>
> Key: MINIFICPP-675
> URL: https://issues.apache.org/jira/browse/MINIFICPP-675
> Project: NiFi MiNiFi C++
>  Issue Type: Documentation
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>
> Parsed headers should not be searched case insensitively 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #438: MINIFICPP-675: Fix issue with hearder eva...

2018-11-14 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/438#discussion_r233613094
  
--- Diff: extensions/http-curl/client/HTTPClient.h ---
@@ -147,6 +146,24 @@ class HTTPClient : public BaseHTTPClient, public 
core::Connectable {
 return header_response_.header_mapping_;
   }
 
+  /**
+   * Locates the header value ignoring case. This is differente than 
returning a mapping
+   * of all parsed headers.
+   * This function acknowledges that header entries should searched case 
insensitively.
--- End diff --

be searched


---


[GitHub] nifi-minifi-cpp pull request #438: MINIFICPP-675: Fix issue with hearder eva...

2018-11-14 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/438#discussion_r233611389
  
--- Diff: extensions/http-curl/client/HTTPClient.h ---
@@ -147,6 +146,24 @@ class HTTPClient : public BaseHTTPClient, public 
core::Connectable {
 return header_response_.header_mapping_;
   }
 
+  /**
+   * Locates the header value ignoring case. This is differente than 
returning a mapping
--- End diff --

differente -> different


---


[jira] [Commented] (MINIFICPP-675) Parsed headers should not be searched case insensitively

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687135#comment-16687135
 ] 

ASF GitHub Bot commented on MINIFICPP-675:
--

Github user kevdoran commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/438
  
Thanks a lot for the quick turnaround on this @phrocker!


> Parsed headers should not be searched case insensitively 
> -
>
> Key: MINIFICPP-675
> URL: https://issues.apache.org/jira/browse/MINIFICPP-675
> Project: NiFi MiNiFi C++
>  Issue Type: Documentation
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>
> Parsed headers should not be searched case insensitively 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp issue #438: MINIFICPP-675: Fix issue with hearder evaluation...

2018-11-14 Thread kevdoran
Github user kevdoran commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/438
  
Thanks a lot for the quick turnaround on this @phrocker!


---


[jira] [Commented] (NIFI-3229) When a queue contains only Penalized FlowFile's the next processor Tasks/Time statistics becomes extremely large

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687117#comment-16687117
 ] 

ASF GitHub Bot commented on NIFI-3229:
--

Github user patricker commented on the issue:

https://github.com/apache/nifi/pull/3131
  
@markap14 I built unit tests, but I'm having trouble running them at scale. 
I temporarily checked back in the original method so I could run side-by-side 
speed comparisons on the same `Connectable`. But if I exceed about 100k tests 
my unit tests seem to hang, even if I increase the heap so they don't run out 
of memory.

These are checked in right now to run 1 million iterations, but that has 
not succeeded for me... This is true of the unmodified method if run by itself 
also (at least on my poor little computer).


> When a queue contains only Penalized FlowFile's the next processor Tasks/Time 
> statistics becomes extremely large
> 
>
> Key: NIFI-3229
> URL: https://issues.apache.org/jira/browse/NIFI-3229
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Dmitry Lukyanov
>Assignee: Peter Wicks
>Priority: Minor
> Attachments: flow.xml.gz, nifi-stats.png, nifi-stats2.png
>
>
> FetchFile on `not.found` produces a penalized flow file.
> In this case I'm expecting the next processor to do one task execution when the 
> flow file's penalize time is over,
> but according to the stats it executes approximately 1-6 times.
> I understand that it could be a feature, but the stats became really unclear...
> Maybe there should be two columns: 
> `All Task/Times` and `Committed Task/Times`



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3131: NIFI-3229 When a queue contains only Penalized FlowFile's ...

2018-11-14 Thread patricker
Github user patricker commented on the issue:

https://github.com/apache/nifi/pull/3131
  
@markap14 I built unit tests, but I'm having trouble running them at scale. 
I temporarily checked back in the original method so I could run side-by-side 
speed comparisons on the same `Connectable`. But if I exceed about 100k tests 
my unit tests seem to hang, even if I increase the heap so they don't run out 
of memory.

These are checked in right now to run 1 million iterations, but that has 
not succeeded for me... This is true of the unmodified method if run by itself 
also (at least on my poor little computer).


---


[jira] [Commented] (MINIFICPP-675) Parsed headers should not be searched case insensitively

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687089#comment-16687089
 ] 

ASF GitHub Bot commented on MINIFICPP-675:
--

Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/438
  
reviewing


> Parsed headers should not be searched case insensitively 
> -
>
> Key: MINIFICPP-675
> URL: https://issues.apache.org/jira/browse/MINIFICPP-675
> Project: NiFi MiNiFi C++
>  Issue Type: Documentation
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>
> Parsed headers should not be searched case insensitively 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp issue #438: MINIFICPP-675: Fix issue with hearder evaluation...

2018-11-14 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/438
  
reviewing


---


[jira] [Updated] (NIFI-5795) RedisDistributedMapCacheClientService put missing option

2018-11-14 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5795:
---
Affects Version/s: (was: 1.8.0)
   Status: Patch Available  (was: Open)

> RedisDistributedMapCacheClientService put missing option
> 
>
> Key: NIFI-5795
> URL: https://issues.apache.org/jira/browse/NIFI-5795
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Alex
>Priority: Major
>
> When you select *CACHE_UPDATE_STRATEGY = CACHE_UPDATE_REPLACE* on 
> *PutDistributedMapCache*, it executes "cache.put(cacheKey, cacheValue, 
> keySerializer, valueSerializer);" 
> [LINK|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDistributedMapCache.java#L202]
> With Redis as the backend service, this reaches 
> RedisDistributedMapCacheClientService.java -> 
> redisConnection.set(kv.getKey(), kv.getValue(), Expiration.seconds(ttl), 
> null); 
> [LINK|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-redis-bundle/nifi-redis-extensions/src/main/java/org/apache/nifi/redis/service/RedisDistributedMapCacheClientService.java#L191]
> This calls into the spring-data-redis library, but passing null as the Option 
> parameter is a bug that causes an "option cannot be null" error, because according 
> to the library: "{{option}} - must not be null." [Library 
> Link|https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/connection/RedisStringCommands.html#set-byte:A-byte:A-org.springframework.data.redis.core.types.Expiration-org.springframework.data.redis.connection.RedisStringCommands.SetOption-]
> For the replace update strategy we should use 
> [{{RedisStringCommands.SetOption.upsert()}}|https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/connection/RedisStringCommands.SetOption.html#upsert--]
>  
>  
>  
>  
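
For clarity, a hedged one-call sketch of the change the description implies 
(reusing the names from the snippet quoted above; not the actual patch):

{code:java}
// Pass an explicit SetOption instead of null; upsert() sets the key whether or not it
// already exists, which matches the replace update strategy.
redisConnection.set(kv.getKey(), kv.getValue(), Expiration.seconds(ttl),
        RedisStringCommands.SetOption.upsert());
{code}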



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5604) Allow GenerateTableFetch to send empty flow files when no rows would be fetched

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687049#comment-16687049
 ] 

ASF GitHub Bot commented on NIFI-5604:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3075#discussion_r233598457
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -450,7 +450,33 @@ public void onTrigger(final ProcessContext context, 
final ProcessSessionFactory
 
 // If there are no SQL statements to be generated, still 
output an empty flow file if specified by the user
 if (numberOfFetches == 0 && 
outputEmptyFlowFileOnZeroResults) {
-session.transfer((fileToProcess == null) ? 
session.create() : session.create(fileToProcess), REL_SUCCESS);
+FlowFile emptyFlowFile = (fileToProcess == null) ? 
session.create() : session.create(fileToProcess);
+Map attributesToAdd = new HashMap<>();
+
+attributesToAdd.put("generatetablefetch.tableName", 
tableName);
+if (columnNames != null) {
+
attributesToAdd.put("generatetablefetch.columnNames", columnNames);
+}
+whereClause = maxValueClauses.isEmpty() ? "1=1" : 
StringUtils.join(maxValueClauses, " AND ");
+if (StringUtils.isNotBlank(whereClause)) {
--- End diff --

Yeah that's probably left over from before we added the "1=1", I'll take a 
look and clean it up
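
For reference, a hedged sketch of what that cleanup could look like, using the 
names from the diff above (not necessarily the final change): once the ternary 
supplies "1=1" as the fallback, whereClause can never be blank, so the isNotBlank 
guard becomes redundant.

{code:java}
// Hypothetical simplification of the block shown in the diff:
whereClause = maxValueClauses.isEmpty() ? "1=1" : StringUtils.join(maxValueClauses, " AND ");
attributesToAdd.put("generatetablefetch.whereClause", whereClause);
{code}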


> Allow GenerateTableFetch to send empty flow files when no rows would be 
> fetched
> ---
>
> Key: NIFI-5604
> URL: https://issues.apache.org/jira/browse/NIFI-5604
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently, GenerateTableFetch will not output a flow file if there are no 
> rows to be fetched. However, it may be desired (especially with incoming flow 
> files) that a flow file be sent out even if GTF does not generate any SQL. 
> This capability, along with the fragment attributes from NIFI-5601, would 
> allow the user to handle this downstream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3075: NIFI-5604: Added property to allow empty FlowFile w...

2018-11-14 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3075#discussion_r233598457
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -450,7 +450,33 @@ public void onTrigger(final ProcessContext context, 
final ProcessSessionFactory
 
 // If there are no SQL statements to be generated, still 
output an empty flow file if specified by the user
 if (numberOfFetches == 0 && 
outputEmptyFlowFileOnZeroResults) {
-session.transfer((fileToProcess == null) ? 
session.create() : session.create(fileToProcess), REL_SUCCESS);
+FlowFile emptyFlowFile = (fileToProcess == null) ? 
session.create() : session.create(fileToProcess);
+Map attributesToAdd = new HashMap<>();
+
+attributesToAdd.put("generatetablefetch.tableName", 
tableName);
+if (columnNames != null) {
+
attributesToAdd.put("generatetablefetch.columnNames", columnNames);
+}
+whereClause = maxValueClauses.isEmpty() ? "1=1" : 
StringUtils.join(maxValueClauses, " AND ");
+if (StringUtils.isNotBlank(whereClause)) {
--- End diff --

Yeah that's probably left over from before we added the "1=1", I'll take a 
look and clean it up


---


[jira] [Commented] (MINIFICPP-675) Parsed headers should not be searched case insensitively

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687027#comment-16687027
 ] 

ASF GitHub Bot commented on MINIFICPP-675:
--

Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/438#discussion_r233590484
  
--- Diff: extensions/http-curl/tests/HTTPSiteToSiteTests.cpp ---
@@ -123,61 +123,66 @@ struct test_profile {
 void run_variance(std::string test_file_location, bool isSecure, 
std::string url, const struct test_profile ) {
   SiteToSiteTestHarness harness(isSecure);
 
-  SiteToSiteLocationResponder responder(isSecure);
+  SiteToSiteLocationResponder *responder = new 
SiteToSiteLocationResponder(isSecure);
--- End diff --

This is a short-lived test, so we don't care about memory leaks here. We don't 
control stoppage of the web server, so we can avoid issues entirely by simply 
allocating this on the heap and not concerning ourselves with scope.


> Parsed headers should not be searched case insensitively 
> -
>
> Key: MINIFICPP-675
> URL: https://issues.apache.org/jira/browse/MINIFICPP-675
> Project: NiFi MiNiFi C++
>  Issue Type: Documentation
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>
> Parsed headers should not be searched case insensitively 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #438: MINIFICPP-675: Fix issue with hearder eva...

2018-11-14 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/438#discussion_r233590484
  
--- Diff: extensions/http-curl/tests/HTTPSiteToSiteTests.cpp ---
@@ -123,61 +123,66 @@ struct test_profile {
 void run_variance(std::string test_file_location, bool isSecure, 
std::string url, const struct test_profile ) {
   SiteToSiteTestHarness harness(isSecure);
 
-  SiteToSiteLocationResponder responder(isSecure);
+  SiteToSiteLocationResponder *responder = new 
SiteToSiteLocationResponder(isSecure);
--- End diff --

This is a short-lived test, so we don't care about memory leaks here. We don't 
control stoppage of the web server, so we can avoid issues entirely by simply 
allocating this on the heap and not concerning ourselves with scope.


---


[jira] [Commented] (MINIFICPP-675) Parsed headers should not be searched case insensitively

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687022#comment-16687022
 ] 

ASF GitHub Bot commented on MINIFICPP-675:
--

GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/438

MINIFICPP-675: Fix issue with hearder evaluation and re-enable test

MINIFICPP-668: don't append port if it is not valid

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-675

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/438.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #438


commit ede5ccebee81ef8af51e38828c7afaf7947cf992
Author: Marc Parisi 
Date:   2018-11-14T19:22:28Z

MINIFICPP-675: Fix issue with hearder evaluation and re-enable test

MINIFICPP-668: don't append port if it is not valid




> Parsed headers should not be searched case insensitively 
> -
>
> Key: MINIFICPP-675
> URL: https://issues.apache.org/jira/browse/MINIFICPP-675
> Project: NiFi MiNiFi C++
>  Issue Type: Documentation
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>
> Parsed headers should not be searched case insensitively 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #438: MINIFICPP-675: Fix issue with hearder eva...

2018-11-14 Thread phrocker
GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/438

MINIFICPP-675: Fix issue with hearder evaluation and re-enable test

MINIFICPP-668: don't append port if it is not valid

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-675

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/438.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #438


commit ede5ccebee81ef8af51e38828c7afaf7947cf992
Author: Marc Parisi 
Date:   2018-11-14T19:22:28Z

MINIFICPP-675: Fix issue with hearder evaluation and re-enable test

MINIFICPP-668: don't append port if it is not valid




---


[jira] [Updated] (NIFI-5820) NiFi built with Java 1.8 needs to run on Java 11

2018-11-14 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5820:
--
Summary: NiFi built with Java 1.8 needs to run on Java 11  (was: NiFi build 
with Java 1.8 needs to run on Java 11)

> NiFi built with Java 1.8 needs to run on Java 11
> 
>
> Key: NIFI-5820
> URL: https://issues.apache.org/jira/browse/NIFI-5820
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5064) Fixes and improvements to PutKudu processor

2018-11-14 Thread Boris Tyukin (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686955#comment-16686955
 ] 

Boris Tyukin commented on NIFI-5064:


[~cammach] I think you created this processor so wanted to see if you can 
clarify my question above

> Fixes and improvements to PutKudu processor
> ---
>
> Key: NIFI-5064
> URL: https://issues.apache.org/jira/browse/NIFI-5064
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: Junegunn Choi
>Priority: Major
> Fix For: 1.7.0
>
>
> 1. Currently, PutKudu fails with NPE on null or missing values.
> 2. {{IllegalArgumentException}} on 16-bit integer columns because of [a 
> missing {{break}} in case clause for INT16 
> columns|https://github.com/apache/nifi/blob/rel/nifi-1.6.0/nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/PutKudu.java#L112-L115].
> 3. Also, {{IllegalArgumentException}} on 8-bit integer columns. We need a 
> separate case clause for INT8 columns where {{PartialRow#addByte}} instead of 
> {{PartialRow#addShort}} should be used.
> 4. NIFI-4384 added a batch size parameter; however, it only applies to 
> FlowFiles with multiple records. {{KuduSession}} is created and closed for 
> each FlowFile, so if a FlowFile contains only a single record, no batching 
> takes place. A workaround would be to use a preprocessor to concatenate 
> multiple FlowFiles, but since {{PutHBase}} and {{PutSQL}} use 
> {{session.get(batchSize)}} to handle multiple FlowFiles at once, I think we 
> can take the same approach here with PutKudu as it simplifies the data flow.
> 5. {{PutKudu}} depends on kudu-client 1.3.0. But we can safely update to 
> 1.7.0.
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/release_notes.adoc]
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/prior_release_notes.adoc]
> A notable change in Kudu 1.7.0 is the addition of Decimal type.
> 6. {{PutKudu}} has {{Skip head line}} property for ignoring the first record 
> in a FlowFile. I suppose this was added to handle header lines in CSV files, 
> but I really don't think it's something {{PutKudu}} should handle. 
> {{CSVReader}} already has {{Treat First Line as Header}} option, so we should 
> tell the users to use it instead as we don't want to have the same option 
> here and there. Also, the default value of {{Skip head line}} is {{true}}, 
> and I found it very confusing as my use case was to stream-process 
> single-record FlowFiles. We can keep this property for backward 
> compatibility, but we should at least deprecate it and change the default 
> value to {{false}}.
> 7. Server-side errors such as uniqueness constraint violation are not checked 
> and simply ignored. When flush mode is set to {{AUTO_FLUSH_SYNC}}, we should 
> check the return value of {{KuduSession#apply}} to see it has {{RowError}}, 
> but PutKudu currently ignores it. For example, on uniqueness constraint 
> violation, we get a {{RowError}} saying "_Already present: key already 
> present (error 0)_".
> On the other hand, when flush mode is set to {{AUTO_FLUSH_BACKGROUND}}, 
> {{KuduSession#apply}}, understandably, returns null, and we should check the 
> return value of {{KuduSession#getPendingErrors()}}. And when the mode is 
> {{MANUAL_FLUSH}}, we should examine the return value of 
> {{KuduSession#flush()}} or {{KuduSession#close()}}. In this case, we also 
> have to make sure that we don't overflow the mutation buffer of 
> {{KuduSession}} by calling {{flush()}} before it is too late.
> 
> I'll create a pull request on GitHub. Since there are multiple issues to be 
> addressed, I made separate commits for each issue mentioned above so that 
> it's easier to review. You might want to squash them into one, or cherry-pick 
> a subset of commits if you don't agree with some decisions I made.
> Please let me know what you think. We deployed the code to a production 
> server last week and it's been running since without any issues steadily 
> processing 20K records/second.
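
As an illustration only (not the actual pull request), the kind of case handling 
items 2 and 3 describe, assuming the Kudu client's PartialRow API; the helper name 
and parameters are made up:

{code:java}
import org.apache.kudu.Type;
import org.apache.kudu.client.PartialRow;

class PutKuduCaseSketch {
    // INT8 gets its own case using addByte, INT16 uses addShort, and each case ends with a break.
    static void addSmallInt(PartialRow row, int columnIndex, Type type, String value) {
        switch (type) {
            case INT8:
                row.addByte(columnIndex, Byte.parseByte(value));
                break;
            case INT16:
                row.addShort(columnIndex, Short.parseShort(value));
                break;   // the missing break described in item 2
            default:
                row.addInt(columnIndex, Integer.parseInt(value));
                break;
        }
    }
}
{code}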



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5064) Fixes and improvements to PutKudu processor

2018-11-14 Thread Boris Tyukin (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686951#comment-16686951
 ] 

Boris Tyukin commented on NIFI-5064:


 [~junegunn] awesome and much-needed changes, thanks a bunch! 

Is there any strong reason why the processor cannot support a dynamic table name? 
I do see that the client and table name are initialized in the onScheduled method, 
but technically we could move the table initialization to onTrigger and make it use 
Expression Language.
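
For what it's worth, a hedged sketch of what that could look like (the property 
name and scope are assumptions, not the actual PutKudu code):

{code:java}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.expression.ExpressionLanguageScope;
import org.apache.nifi.processor.util.StandardValidators;

class DynamicTableNameSketch {
    // Hypothetical property that supports Expression Language against FlowFile attributes.
    static final PropertyDescriptor TABLE_NAME = new PropertyDescriptor.Builder()
            .name("Table Name")
            .required(true)
            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();

    // In onTrigger the value would then be resolved per FlowFile, e.g.:
    //   String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
}
{code}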

> Fixes and improvements to PutKudu processor
> ---
>
> Key: NIFI-5064
> URL: https://issues.apache.org/jira/browse/NIFI-5064
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: Junegunn Choi
>Priority: Major
> Fix For: 1.7.0
>
>
> 1. Currently, PutKudu fails with NPE on null or missing values.
> 2. {{IllegalArgumentException}} on 16-bit integer columns because of [a 
> missing {{break}} in case clause for INT16 
> columns|https://github.com/apache/nifi/blob/rel/nifi-1.6.0/nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/PutKudu.java#L112-L115].
> 3. Also, {{IllegalArgumentException}} on 8-bit integer columns. We need a 
> separate case clause for INT8 columns where {{PartialRow#addByte}} instead of 
> {{PartialRow#addShort}} should be used.
> 4. NIFI-4384 added a batch size parameter; however, it only applies to 
> FlowFiles with multiple records. {{KuduSession}} is created and closed for 
> each FlowFile, so if a FlowFile contains only a single record, no batching 
> takes place. A workaround would be to use a preprocessor to concatenate 
> multiple FlowFiles, but since {{PutHBase}} and {{PutSQL}} use 
> {{session.get(batchSize)}} to handle multiple FlowFiles at once, I think we 
> can take the same approach here with PutKudu as it simplifies the data flow.
> 5. {{PutKudu}} depends on kudu-client 1.3.0. But we can safely update to 
> 1.7.0.
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/release_notes.adoc]
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/prior_release_notes.adoc]
> A notable change in Kudu 1.7.0 is the addition of Decimal type.
> 6. {{PutKudu}} has {{Skip head line}} property for ignoring the first record 
> in a FlowFile. I suppose this was added to handle header lines in CSV files, 
> but I really don't think it's something {{PutKudu}} should handle. 
> {{CSVReader}} already has {{Treat First Line as Header}} option, so we should 
> tell the users to use it instead as we don't want to have the same option 
> here and there. Also, the default value of {{Skip head line}} is {{true}}, 
> and I found it very confusing as my use case was to stream-process 
> single-record FlowFiles. We can keep this property for backward 
> compatibility, but we should at least deprecate it and change the default 
> value to {{false}}.
> 7. Server-side errors such as uniqueness constraint violation are not checked 
> and simply ignored. When flush mode is set to {{AUTO_FLUSH_SYNC}}, we should 
> check the return value of {{KuduSession#apply}} to see it has {{RowError}}, 
> but PutKudu currently ignores it. For example, on uniqueness constraint 
> violation, we get a {{RowError}} saying "_Already present: key already 
> present (error 0)_".
> On the other hand, when flush mode is set to {{AUTO_FLUSH_BACKGROUND}}, 
> {{KuduSession#apply}}, understandably, returns null, and we should check the 
> return value of {{KuduSession#getPendingErrors()}}. And when the mode is 
> {{MANUAL_FLUSH}}, we should examine the return value of 
> {{KuduSession#flush()}} or {{KuduSession#close()}}. In this case, we also 
> have to make sure that we don't overflow the mutation buffer of 
> {{KuduSession}} by calling {{flush()}} before it is too late.
> 
> I'll create a pull request on GitHub. Since there are multiple issues to be 
> addressed, I made separate commits for each issue mentioned above so that 
> it's easier to review. You might want to squash them into one, or cherry-pick 
> a subset of commits if you don't agree with some decisions I made.
> Please let me know what you think. We deployed the code to a production 
> server last week and it's been running since without any issues steadily 
> processing 20K records/second.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5820) NiFi build with Java 1.8 needs to run on Java 11

2018-11-14 Thread Jeff Storck (JIRA)
Jeff Storck created NIFI-5820:
-

 Summary: NiFi build with Java 1.8 needs to run on Java 11
 Key: NIFI-5820
 URL: https://issues.apache.org/jira/browse/NIFI-5820
 Project: Apache NiFi
  Issue Type: Sub-task
Reporter: Jeff Storck
Assignee: Jeff Storck






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (MINIFICPP-675) Parsed headers should not be searched case insensitively

2018-11-14 Thread Mr TheSegfault (JIRA)
Mr TheSegfault created MINIFICPP-675:


 Summary: Parsed headers should not be searched case insensitively 
 Key: MINIFICPP-675
 URL: https://issues.apache.org/jira/browse/MINIFICPP-675
 Project: NiFi MiNiFi C++
  Issue Type: Documentation
Reporter: Mr TheSegfault
Assignee: Mr TheSegfault


Parsed headers should not be searched case insensitively 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-665) Fix basestream references

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686897#comment-16686897
 ] 

ASF GitHub Bot commented on MINIFICPP-665:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/435


> Fix basestream references
> -
>
> Key: MINIFICPP-665
> URL: https://issues.apache.org/jira/browse/MINIFICPP-665
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>
> At some point BaseStream was changed to be a composable pointer (hence the 
> name "base stream") as a wrapper for a variety of streams. This caused a bug 
> where a self-reference could occur. We overload this stream to be a 
> self-reference since it extends DataStream. This is a very likely usage, so we 
> can avoid the bugs by simply checking whether the composition is a reference to 
> self. This reference to self won't be removed, in the interest of time and of 
> re-using Serializable's interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #435: MINIFICPP-665: Add reference checks for s...

2018-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/435


---


[jira] [Assigned] (MINIFICPP-645) Move from new to malloc in CAPI to facilitate eventual change from C++ to C

2018-11-14 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault reassigned MINIFICPP-645:


Assignee: (was: Mr TheSegfault)

> Move from new to malloc in CAPI to facilitate eventual change from C++ to C
> ---
>
> Key: MINIFICPP-645
> URL: https://issues.apache.org/jira/browse/MINIFICPP-645
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Priority: Blocker
>  Labels: CAPI, nanofi
> Fix For: 0.6.0
>
>
> As we gradually move to C, we should move out of libminifi and remove the linter. 
> Nothing that is returned via the API that is not an opaque pointer should use 
> new.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3075: NIFI-5604: Added property to allow empty FlowFile w...

2018-11-14 Thread patricker
Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3075#discussion_r233524977
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -450,7 +450,33 @@ public void onTrigger(final ProcessContext context, 
final ProcessSessionFactory
 
 // If there are no SQL statements to be generated, still 
output an empty flow file if specified by the user
 if (numberOfFetches == 0 && 
outputEmptyFlowFileOnZeroResults) {
-session.transfer((fileToProcess == null) ? 
session.create() : session.create(fileToProcess), REL_SUCCESS);
+FlowFile emptyFlowFile = (fileToProcess == null) ? 
session.create() : session.create(fileToProcess);
+Map attributesToAdd = new HashMap<>();
+
+attributesToAdd.put("generatetablefetch.tableName", 
tableName);
+if (columnNames != null) {
+
attributesToAdd.put("generatetablefetch.columnNames", columnNames);
+}
+whereClause = maxValueClauses.isEmpty() ? "1=1" : 
StringUtils.join(maxValueClauses, " AND ");
+if (StringUtils.isNotBlank(whereClause)) {
+
attributesToAdd.put("generatetablefetch.whereClause", whereClause);
+}
+final String maxColumnNames = 
StringUtils.join(maxValueColumnNameList, ", ");
+if (StringUtils.isNotBlank(maxColumnNames)) {
+
attributesToAdd.put("generatetablefetch.maxColumnNames", maxColumnNames);
+}
+attributesToAdd.put("generatetablefetch.limit", null);
--- End diff --

There is no existing test in `master`, but I think in normal circumstances 
`limit` actually gets written as `"null"`.

```
if ((i == numberOfFetches - 1) && useColumnValsForPaging && 
(maxValueClauses.isEmpty() || customWhereClause != null)) {
maxValueClauses.add(columnForPartitioning + " 
<= " + maxValueForPartitioning);
limit = null;
}
```

This value is then written using `String.valueOf(limit)`, which is going to 
translate the `null` to `"null"`.
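
For anyone skimming the thread, a standalone sketch of the behaviour described 
above (illustrative only, not the processor code): `String.valueOf` on a null 
reference yields the literal string "null", so the attribute ends up containing 
the text "null" rather than being omitted.

```java
public class ValueOfNullDemo {
    public static void main(String[] args) {
        Integer limit = null;
        // String.valueOf(Object) returns the literal string "null" for a null reference
        String attributeValue = String.valueOf(limit);
        System.out.println(attributeValue);            // prints: null
        System.out.println(attributeValue.length());   // prints: 4
        // A guard like this would be needed to omit the attribute instead:
        if (limit != null) {
            System.out.println("would write generatetablefetch.limit=" + limit);
        }
    }
}
```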


---


[jira] [Commented] (NIFI-5604) Allow GenerateTableFetch to send empty flow files when no rows would be fetched

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686841#comment-16686841
 ] 

ASF GitHub Bot commented on NIFI-5604:
--

Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3075#discussion_r233524164
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -450,7 +450,33 @@ public void onTrigger(final ProcessContext context, 
final ProcessSessionFactory
 
 // If there are no SQL statements to be generated, still 
output an empty flow file if specified by the user
 if (numberOfFetches == 0 && 
outputEmptyFlowFileOnZeroResults) {
-session.transfer((fileToProcess == null) ? 
session.create() : session.create(fileToProcess), REL_SUCCESS);
+FlowFile emptyFlowFile = (fileToProcess == null) ? 
session.create() : session.create(fileToProcess);
+Map attributesToAdd = new HashMap<>();
+
+attributesToAdd.put("generatetablefetch.tableName", 
tableName);
+if (columnNames != null) {
+
attributesToAdd.put("generatetablefetch.columnNames", columnNames);
+}
+whereClause = maxValueClauses.isEmpty() ? "1=1" : 
StringUtils.join(maxValueClauses, " AND ");
+if (StringUtils.isNotBlank(whereClause)) {
--- End diff --

I know it was already like this, but I'm not sure it's even possible for 
`whereClause` to be blank based on the logic here. Same for existing code down 
below for a normal run.
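
To spell the observation out, a minimal standalone sketch (using `String.join` 
in place of Commons Lang `StringUtils.join`, purely for self-containment): the 
ternary falls back to "1=1" when there are no max-value clauses, so the 
resulting string can never be blank and the `isNotBlank` guard is effectively 
always true.

```java
import java.util.ArrayList;
import java.util.List;

public class WhereClauseDemo {
    public static void main(String[] args) {
        List<String> maxValueClauses = new ArrayList<>();
        // Mirrors the pattern under review: an empty list falls back to "1=1"
        String whereClause = maxValueClauses.isEmpty() ? "1=1" : String.join(" AND ", maxValueClauses);
        System.out.println(whereClause);   // prints: 1=1 (never blank)

        maxValueClauses.add("id <= 100");
        whereClause = maxValueClauses.isEmpty() ? "1=1" : String.join(" AND ", maxValueClauses);
        System.out.println(whereClause);   // prints: id <= 100 (still never blank)
    }
}
```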


> Allow GenerateTableFetch to send empty flow files when no rows would be 
> fetched
> ---
>
> Key: NIFI-5604
> URL: https://issues.apache.org/jira/browse/NIFI-5604
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently, GenerateTableFetch will not output a flow file if there are no 
> rows to be fetched. However, it may be desired (especially with incoming flow 
> files) that a flow file be sent out even if GTF does not generate any SQL. 
> This capability, along with the fragment attributes from NIFI-5601, would 
> allow the user to handle this downstream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5604) Allow GenerateTableFetch to send empty flow files when no rows would be fetched

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686842#comment-16686842
 ] 

ASF GitHub Bot commented on NIFI-5604:
--

Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3075#discussion_r233524977
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -450,7 +450,33 @@ public void onTrigger(final ProcessContext context, 
final ProcessSessionFactory
 
 // If there are no SQL statements to be generated, still 
output an empty flow file if specified by the user
 if (numberOfFetches == 0 && 
outputEmptyFlowFileOnZeroResults) {
-session.transfer((fileToProcess == null) ? 
session.create() : session.create(fileToProcess), REL_SUCCESS);
+FlowFile emptyFlowFile = (fileToProcess == null) ? 
session.create() : session.create(fileToProcess);
+Map attributesToAdd = new HashMap<>();
+
+attributesToAdd.put("generatetablefetch.tableName", 
tableName);
+if (columnNames != null) {
+
attributesToAdd.put("generatetablefetch.columnNames", columnNames);
+}
+whereClause = maxValueClauses.isEmpty() ? "1=1" : 
StringUtils.join(maxValueClauses, " AND ");
+if (StringUtils.isNotBlank(whereClause)) {
+
attributesToAdd.put("generatetablefetch.whereClause", whereClause);
+}
+final String maxColumnNames = 
StringUtils.join(maxValueColumnNameList, ", ");
+if (StringUtils.isNotBlank(maxColumnNames)) {
+
attributesToAdd.put("generatetablefetch.maxColumnNames", maxColumnNames);
+}
+attributesToAdd.put("generatetablefetch.limit", null);
--- End diff --

There is no existing test in `master`, but I think in normal circumstances 
`limit` actually gets written as `"null"`.

```
if ((i == numberOfFetches - 1) && useColumnValsForPaging && 
(maxValueClauses.isEmpty() || customWhereClause != null)) {
maxValueClauses.add(columnForPartitioning + " 
<= " + maxValueForPartitioning);
limit = null;
}
```

This value is then written using `String.valueOf(limit)`, which is going to 
translate the `null` to `"null"`.


> Allow GenerateTableFetch to send empty flow files when no rows would be 
> fetched
> ---
>
> Key: NIFI-5604
> URL: https://issues.apache.org/jira/browse/NIFI-5604
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently, GenerateTableFetch will not output a flow file if there are no 
> rows to be fetched. However, it may be desired (especially with incoming flow 
> files) that a flow file be sent out even if GTF does not generate any SQL. 
> This capability, along with the fragment attributes from NIFI-5601, would 
> allow the user to handle this downstream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3075: NIFI-5604: Added property to allow empty FlowFile w...

2018-11-14 Thread patricker
Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3075#discussion_r233524164
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -450,7 +450,33 @@ public void onTrigger(final ProcessContext context, 
final ProcessSessionFactory
 
 // If there are no SQL statements to be generated, still 
output an empty flow file if specified by the user
 if (numberOfFetches == 0 && 
outputEmptyFlowFileOnZeroResults) {
-session.transfer((fileToProcess == null) ? 
session.create() : session.create(fileToProcess), REL_SUCCESS);
+FlowFile emptyFlowFile = (fileToProcess == null) ? 
session.create() : session.create(fileToProcess);
+Map attributesToAdd = new HashMap<>();
+
+attributesToAdd.put("generatetablefetch.tableName", 
tableName);
+if (columnNames != null) {
+
attributesToAdd.put("generatetablefetch.columnNames", columnNames);
+}
+whereClause = maxValueClauses.isEmpty() ? "1=1" : 
StringUtils.join(maxValueClauses, " AND ");
+if (StringUtils.isNotBlank(whereClause)) {
--- End diff --

I know it was already like this, but I'm not sure it's even possible for 
`whereClause` to be blank based on the logic here. Same for existing code down 
below for a normal run.


---


[jira] [Commented] (NIFI-1101) Processor support Openstack Swift

2018-11-14 Thread Laurenceau Julien (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686824#comment-16686824
 ] 

Laurenceau Julien commented on NIFI-1101:
-

I am confused because there is also a ticket on swift Jira that says that it 
should be compatible...

[https://review.openstack.org/#/c/571561/]

I'll try harder and keep you posted.

> Processor support Openstack Swift
> -
>
> Key: NIFI-1101
> URL: https://issues.apache.org/jira/browse/NIFI-1101
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: robinlin
>Priority: Minor
>
> Hi
> I think the current S3 processors are not available for OpenStack Swift.
> So in order to support OpenStack Swift functionality, the following are 
> some processors that might have to be implemented:
> 1. GetContainer
> 2. PutContainer
> 3. DeleteContainer
> 4. GetSwiftObject
> 5. PutSwiftObject
> 6. DeleteSwiftObject



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-1101) Processor support Openstack Swift

2018-11-14 Thread Laurenceau Julien (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686824#comment-16686824
 ] 

Laurenceau Julien edited comment on NIFI-1101 at 11/14/18 4:31 PM:
---

I am confused because there is also a recent ticket on the Swift JIRA that says 
it should be compatible (merged in June 2018).

[https://review.openstack.org/#/c/571561/]

https://docs.openstack.org/swift/latest/s3_compat.html

I'll try harder and keep you posted.


was (Author: julienlau):
I am confused because there is also a ticket on swift Jira that says that it 
should be compatible...

[https://review.openstack.org/#/c/571561/]

I'll try harder and keep you posted.

> Processor support Openstack Swift
> -
>
> Key: NIFI-1101
> URL: https://issues.apache.org/jira/browse/NIFI-1101
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: robinlin
>Priority: Minor
>
> Hi
> I think the current S3 processors are not available for OpenStack Swift.
> So in order to support OpenStack Swift functionality, the following are 
> some processors that might have to be implemented:
> 1. GetContainer
> 2. PutContainer
> 3. DeleteContainer
> 4. GetSwiftObject
> 5. PutSwiftObject
> 6. DeleteSwiftObject



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (MINIFICPP-645) Move from new to malloc in CAPI to facilitate eventual change from C++ to C

2018-11-14 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault reassigned MINIFICPP-645:


Assignee: Mr TheSegfault  (was: Arpad Boda)

> Move from new to malloc in CAPI to facilitate eventual change from C++ to C
> ---
>
> Key: MINIFICPP-645
> URL: https://issues.apache.org/jira/browse/MINIFICPP-645
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Blocker
>  Labels: CAPI, nanofi
> Fix For: 0.6.0
>
>
> As we gradually move to C, we should move out of libminifi and remove the 
> linter. Nothing that is returned via the API and is not an opaque pointer 
> should be allocated with new.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (MINIFICPP-645) Move from new to malloc in CAPI to facilitate eventual change from C++ to C

2018-11-14 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault reassigned MINIFICPP-645:


Assignee: Arpad Boda

> Move from new to malloc in CAPI to facilitate eventual change from C++ to C
> ---
>
> Key: MINIFICPP-645
> URL: https://issues.apache.org/jira/browse/MINIFICPP-645
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Arpad Boda
>Priority: Blocker
>  Labels: CAPI, nanofi
> Fix For: 0.6.0
>
>
> As we gradually move to C, we should move out of libminifi and remove the 
> linter. Nothing that is returned via the API and is not an opaque pointer 
> should be allocated with new.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-1101) Processor support Openstack Swift

2018-11-14 Thread Laurenceau Julien (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686784#comment-16686784
 ] 

Laurenceau Julien commented on NIFI-1101:
-

Hi,

I think this ticket should be re-opened.

The Swift/S3 compatibility chart is obsolete.

I did not try to hack the S3 processor, but the doc now says that a proxy must 
be installed if one wants to use the S3 API with Swift storage, and that is not 
fine.

https://docs.openstack.org/mitaka/config-reference/object-storage/configure-s3.html

Thanks and regards

PS: this is said to be a duplicate ticket, but I cannot find any reference to 
another ticket on this topic.

> Processor support Openstack Swift
> -
>
> Key: NIFI-1101
> URL: https://issues.apache.org/jira/browse/NIFI-1101
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 0.3.0
>Reporter: robinlin
>Priority: Minor
>
> Hi
> I think the current S3 processors are not available for OpenStack Swift.
> So in order to support OpenStack Swift functionality, the following are 
> some processors that might have to be implemented:
> 1. GetContainer
> 2. PutContainer
> 3. DeleteContainer
> 4. GetSwiftObject
> 5. PutSwiftObject
> 6. DeleteSwiftObject



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-1561) FetchS3Object should support non-AWS S3 locations

2018-11-14 Thread Laurenceau Julien (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686776#comment-16686776
 ] 

Laurenceau Julien commented on NIFI-1561:
-

Hi,

I am currently searching for a proper way to use OpenStack Swift within NiFi.

It seems that in the past the Swift API was compatible with the S3 API, but from 
what I read this is not the case anymore. Would it therefore be more appropriate 
to create a new processor for OpenStack Swift?

Regards

Julien

> FetchS3Object should support non-AWS S3 locations
> -
>
> Key: NIFI-1561
> URL: https://issues.apache.org/jira/browse/NIFI-1561
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Andre F de Miranda
>Priority: Major
>
> FetchS3Object currently only supports AWS as a source, however, some NiFI 
> enterprise user have the interest to fetch files from S3 compatible APIs 
> (e.g. RIAK, OpenStack SWIFT)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-558) Move PayloadSerializer in preparation for Coap

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686769#comment-16686769
 ] 

ASF GitHub Bot commented on MINIFICPP-558:
--

Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233507600
  
--- Diff: extensions/coap/protocols/CoapC2Protocol.cpp ---
@@ -0,0 +1,353 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "CoapC2Protocol.h"
+#include "c2/PayloadSerializer.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace c2 {
+
+#include "coap_functions.h"
+#include "io/BaseStream.h"
+
+CoapProtocol::CoapProtocol(std::string name, utils::Identifier uuid)
+: RESTSender(name, uuid),
+  require_registration_(false),
+  logger_(logging::LoggerFactory::getLogger()) {
+}
+
+CoapProtocol::~CoapProtocol() {
+}
+
+void CoapProtocol::initialize(const 
std::shared_ptr , const 
std::shared_ptr ) {
+  RESTSender::initialize(controller, configure);
+  if (configure->get("nifi.c2.coap.connector.service", 
controller_service_name_)) {
+auto service = 
controller->getControllerService(controller_service_name_);
+coap_service_ = 
std::static_pointer_cast(service);
+  } else {
+logger_->log_info("No CoAP connector configured, so using default 
service");
+coap_service_ = 
std::make_shared("cs", configure);
+coap_service_->onEnable();
+  }
+}
+
+C2Payload CoapProtocol::consumePayload(const std::string , const 
C2Payload , Direction direction, bool async) {
+  return RESTSender::consumePayload(url, payload, direction, false);
+}
+
+int CoapProtocol::writeAcknowledgement(io::BaseStream *stream, const 
C2Payload ) {
+  auto ident = payload.getIdentifier();
+  auto state = payload.getStatus().getState();
+  stream->writeUTF(ident);
+  uint8_t payloadState = 0;
+  switch (state) {
+case state::UpdateState::NESTED:
+case state::UpdateState::INITIATE:
+case state::UpdateState::FULLY_APPLIED:
+case state::UpdateState::READ_COMPLETE:
+  payloadState = 0;
+  break;
+case state::UpdateState::NOT_APPLIED:
+case state::UpdateState::PARTIALLY_APPLIED:
+  payloadState = 1;
+  break;
+case state::UpdateState::READ_ERROR:
+  payloadState = 2;
+  break;
+case state::UpdateState::SET_ERROR:
+  payloadState = 3;
+  break;
+  }
+  stream->write(, 1);
+  return 0;
+}
+
+int CoapProtocol::writeHeartbeat(io::BaseStream *stream, const C2Payload 
) {
+  bool byte;
+  uint16_t size = 0;
+
+  std::string deviceIdent;
+  // device identifier
+  auto deviceInfo = getPayload("deviceInfo", payload);
+  if (deviceInfo) {
+for (const auto  : deviceInfo->getContent()) {
+  if (!getString(, "identifier", )) {
+break;
+  }
+}
+  }
+  stream->writeUTF(deviceIdent, false);
+  std::string agentIdent;
+  // agent identifier
+  auto agentInfo = getPayload("agentInfo", payload);
+  if (agentInfo) {
+for (const auto  : agentInfo->getContent()) {
+  if (!getString(, "identifier", )) {
+break;
+  }
+}
+  }
+  if (agentIdent.empty()) {
+return -1;
+  }
+  stream->writeUTF(agentIdent, false);
+
+  auto flowInfo = getPayload("flowInfo", payload);
+
+  if (flowInfo != nullptr) {
+
+auto components = getPayload("components", flowInfo);
+
+auto queues = getPayload("queues", flowInfo);
+
+auto versionedFlowSnapshotURI = getPayload("versionedFlowSnapshotURI", 
flowInfo);
+
+if (components && queues && versionedFlowSnapshotURI) {
+  byte = true;
+  stream->write(byte);
+  size = 

[GitHub] nifi-minifi-cpp pull request #437: MINIFICPP-558: initial provisioning for C...

2018-11-14 Thread arpadboda
Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233507600
  
--- Diff: extensions/coap/protocols/CoapC2Protocol.cpp ---
@@ -0,0 +1,353 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "CoapC2Protocol.h"
+#include "c2/PayloadSerializer.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace c2 {
+
+#include "coap_functions.h"
+#include "io/BaseStream.h"
+
+CoapProtocol::CoapProtocol(std::string name, utils::Identifier uuid)
+: RESTSender(name, uuid),
+  require_registration_(false),
+  logger_(logging::LoggerFactory::getLogger()) {
+}
+
+CoapProtocol::~CoapProtocol() {
+}
+
+void CoapProtocol::initialize(const 
std::shared_ptr , const 
std::shared_ptr ) {
+  RESTSender::initialize(controller, configure);
+  if (configure->get("nifi.c2.coap.connector.service", 
controller_service_name_)) {
+auto service = 
controller->getControllerService(controller_service_name_);
+coap_service_ = 
std::static_pointer_cast(service);
+  } else {
+logger_->log_info("No CoAP connector configured, so using default 
service");
+coap_service_ = 
std::make_shared("cs", configure);
+coap_service_->onEnable();
+  }
+}
+
+C2Payload CoapProtocol::consumePayload(const std::string , const 
C2Payload , Direction direction, bool async) {
+  return RESTSender::consumePayload(url, payload, direction, false);
+}
+
+int CoapProtocol::writeAcknowledgement(io::BaseStream *stream, const 
C2Payload ) {
+  auto ident = payload.getIdentifier();
+  auto state = payload.getStatus().getState();
+  stream->writeUTF(ident);
+  uint8_t payloadState = 0;
+  switch (state) {
+case state::UpdateState::NESTED:
+case state::UpdateState::INITIATE:
+case state::UpdateState::FULLY_APPLIED:
+case state::UpdateState::READ_COMPLETE:
+  payloadState = 0;
+  break;
+case state::UpdateState::NOT_APPLIED:
+case state::UpdateState::PARTIALLY_APPLIED:
+  payloadState = 1;
+  break;
+case state::UpdateState::READ_ERROR:
+  payloadState = 2;
+  break;
+case state::UpdateState::SET_ERROR:
+  payloadState = 3;
+  break;
+  }
+  stream->write(, 1);
+  return 0;
+}
+
+int CoapProtocol::writeHeartbeat(io::BaseStream *stream, const C2Payload 
) {
+  bool byte;
+  uint16_t size = 0;
+
+  std::string deviceIdent;
+  // device identifier
+  auto deviceInfo = getPayload("deviceInfo", payload);
+  if (deviceInfo) {
+for (const auto  : deviceInfo->getContent()) {
+  if (!getString(, "identifier", )) {
+break;
+  }
+}
+  }
+  stream->writeUTF(deviceIdent, false);
+  std::string agentIdent;
+  // agent identifier
+  auto agentInfo = getPayload("agentInfo", payload);
+  if (agentInfo) {
+for (const auto  : agentInfo->getContent()) {
+  if (!getString(, "identifier", )) {
+break;
+  }
+}
+  }
+  if (agentIdent.empty()) {
+return -1;
+  }
+  stream->writeUTF(agentIdent, false);
+
+  auto flowInfo = getPayload("flowInfo", payload);
+
+  if (flowInfo != nullptr) {
+
+auto components = getPayload("components", flowInfo);
+
+auto queues = getPayload("queues", flowInfo);
+
+auto versionedFlowSnapshotURI = getPayload("versionedFlowSnapshotURI", 
flowInfo);
+
+if (components && queues && versionedFlowSnapshotURI) {
+  byte = true;
+  stream->write(byte);
+  size = components->getNestedPayloads().size();
+  stream->write(size);
+  // write statuses
+  for (const auto  : components->getNestedPayloads()) {
+stream->writeUTF(component.getLabel(), false);
+for (const auto  : 

[jira] [Commented] (MINIFICPP-558) Move PayloadSerializer in preparation for Coap

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686758#comment-16686758
 ] 

ASF GitHub Bot commented on MINIFICPP-558:
--

Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233504886
  
--- Diff: extensions/coap/protocols/CoapC2Protocol.cpp ---
@@ -0,0 +1,353 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "CoapC2Protocol.h"
+#include "c2/PayloadSerializer.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace c2 {
+
+#include "coap_functions.h"
+#include "io/BaseStream.h"
+
+CoapProtocol::CoapProtocol(std::string name, utils::Identifier uuid)
+: RESTSender(name, uuid),
+  require_registration_(false),
+  logger_(logging::LoggerFactory::getLogger()) {
+}
+
+CoapProtocol::~CoapProtocol() {
+}
+
+void CoapProtocol::initialize(const 
std::shared_ptr , const 
std::shared_ptr ) {
+  RESTSender::initialize(controller, configure);
+  if (configure->get("nifi.c2.coap.connector.service", 
controller_service_name_)) {
+auto service = 
controller->getControllerService(controller_service_name_);
+coap_service_ = 
std::static_pointer_cast(service);
+  } else {
+logger_->log_info("No CoAP connector configured, so using default 
service");
+coap_service_ = 
std::make_shared("cs", configure);
+coap_service_->onEnable();
+  }
+}
+
+C2Payload CoapProtocol::consumePayload(const std::string , const 
C2Payload , Direction direction, bool async) {
+  return RESTSender::consumePayload(url, payload, direction, false);
+}
+
+int CoapProtocol::writeAcknowledgement(io::BaseStream *stream, const 
C2Payload ) {
+  auto ident = payload.getIdentifier();
+  auto state = payload.getStatus().getState();
+  stream->writeUTF(ident);
+  uint8_t payloadState = 0;
+  switch (state) {
+case state::UpdateState::NESTED:
+case state::UpdateState::INITIATE:
+case state::UpdateState::FULLY_APPLIED:
+case state::UpdateState::READ_COMPLETE:
+  payloadState = 0;
+  break;
+case state::UpdateState::NOT_APPLIED:
+case state::UpdateState::PARTIALLY_APPLIED:
+  payloadState = 1;
+  break;
+case state::UpdateState::READ_ERROR:
+  payloadState = 2;
+  break;
+case state::UpdateState::SET_ERROR:
+  payloadState = 3;
+  break;
+  }
+  stream->write(, 1);
+  return 0;
+}
+
+int CoapProtocol::writeHeartbeat(io::BaseStream *stream, const C2Payload 
) {
+  bool byte;
+  uint16_t size = 0;
+
+  std::string deviceIdent;
+  // device identifier
+  auto deviceInfo = getPayload("deviceInfo", payload);
+  if (deviceInfo) {
+for (const auto  : deviceInfo->getContent()) {
+  if (!getString(, "identifier", )) {
+break;
+  }
+}
+  }
+  stream->writeUTF(deviceIdent, false);
+  std::string agentIdent;
+  // agent identifier
+  auto agentInfo = getPayload("agentInfo", payload);
+  if (agentInfo) {
+for (const auto  : agentInfo->getContent()) {
+  if (!getString(, "identifier", )) {
+break;
+  }
+}
+  }
+  if (agentIdent.empty()) {
+return -1;
+  }
+  stream->writeUTF(agentIdent, false);
+
+  auto flowInfo = getPayload("flowInfo", payload);
+
+  if (flowInfo != nullptr) {
+
+auto components = getPayload("components", flowInfo);
+
+auto queues = getPayload("queues", flowInfo);
+
+auto versionedFlowSnapshotURI = getPayload("versionedFlowSnapshotURI", 
flowInfo);
+
+if (components && queues && versionedFlowSnapshotURI) {
+  byte = true;
+  stream->write(byte);
+  size = 

[jira] [Commented] (NIFI-5652) LogMessage emits "transfer relationship not specified" error

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686757#comment-16686757
 ] 

ASF GitHub Bot commented on NIFI-5652:
--

GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/3170

NIFI-5652: Fixed LogMessage when logging level is disabled

I couldn't write a unit test as I can't change the log level in between 
unit tests. I reproduced the issue and verified it is no longer present with a 
live NiFi instance running with this patch.

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-5652

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3170.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3170


commit 6c08aea9a7394237d5e575086fe965001e3700d7
Author: Matthew Burgess 
Date:   2018-11-14T15:50:37Z

NIFI-5652: Fixed LogMessage when logging level is disabled




> LogMessage emits "transfer relationship not specified" error
> 
>
> Key: NIFI-5652
> URL: https://issues.apache.org/jira/browse/NIFI-5652
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
> Environment: 3 nodes cluster of CentOS 7.4 servers; JRE 1.8
>Reporter: Takeshi Koya
>Assignee: Matt Burgess
>Priority: Minor
>
> Hi, all NiFi developers and users.
> I upgraded our staging cluster from 1.5 to 1.7.1 recently, 
>  but I am in trouble with LogMessage errors.
> The combination of a log level and a bulletin level that is higher than the 
> log level seems to bring about the following "transfer relationship not 
> specified" error.
> {code:java}
> 2018-10-02 00:00:00,442 ERROR [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.standard.LogMessage LogMessage[id=...] LogMessage[id=...] 
> failed to process session due to 
> org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified; Processor Administratively Yielded for 
> 1 sec: org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified
> {code}
> Errors disappear if we make a bulletin level equal to or lower than a log 
> level.
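
A simplified, hypothetical sketch of the failure mode (not the actual LogMessage 
source; all names below are illustrative): if the processor returns without 
transferring the FlowFile when the selected log level is disabled, the framework 
raises exactly this "transfer relationship not specified" error, so the fix is 
to route the FlowFile to success whether or not the message is actually logged.

{code:java}
// Hypothetical, simplified processor sketch illustrating the bug and the fix.
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

public class LogMessageSketch extends AbstractProcessor {

    static final Relationship REL_SUCCESS = new Relationship.Builder()
            .name("success")
            .build();

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;
        }
        // Stand-in for checking the configured log level in the real processor
        boolean levelEnabled = getLogger().isInfoEnabled();
        if (levelEnabled) {
            getLogger().info("logging the configured message here");
        }
        // Returning before this line when the level is disabled reproduces the
        // "transfer relationship not specified" error; always transferring fixes it.
        session.transfer(flowFile, REL_SUCCESS);
    }
}
{code}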



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #437: MINIFICPP-558: initial provisioning for C...

2018-11-14 Thread arpadboda
Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233504886
  
--- Diff: extensions/coap/protocols/CoapC2Protocol.cpp ---
@@ -0,0 +1,353 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "CoapC2Protocol.h"
+#include "c2/PayloadSerializer.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace c2 {
+
+#include "coap_functions.h"
+#include "io/BaseStream.h"
+
+CoapProtocol::CoapProtocol(std::string name, utils::Identifier uuid)
+: RESTSender(name, uuid),
+  require_registration_(false),
+  logger_(logging::LoggerFactory::getLogger()) {
+}
+
+CoapProtocol::~CoapProtocol() {
+}
+
+void CoapProtocol::initialize(const 
std::shared_ptr , const 
std::shared_ptr ) {
+  RESTSender::initialize(controller, configure);
+  if (configure->get("nifi.c2.coap.connector.service", 
controller_service_name_)) {
+auto service = 
controller->getControllerService(controller_service_name_);
+coap_service_ = 
std::static_pointer_cast(service);
+  } else {
+logger_->log_info("No CoAP connector configured, so using default 
service");
+coap_service_ = 
std::make_shared("cs", configure);
+coap_service_->onEnable();
+  }
+}
+
+C2Payload CoapProtocol::consumePayload(const std::string , const 
C2Payload , Direction direction, bool async) {
+  return RESTSender::consumePayload(url, payload, direction, false);
+}
+
+int CoapProtocol::writeAcknowledgement(io::BaseStream *stream, const 
C2Payload ) {
+  auto ident = payload.getIdentifier();
+  auto state = payload.getStatus().getState();
+  stream->writeUTF(ident);
+  uint8_t payloadState = 0;
+  switch (state) {
+case state::UpdateState::NESTED:
+case state::UpdateState::INITIATE:
+case state::UpdateState::FULLY_APPLIED:
+case state::UpdateState::READ_COMPLETE:
+  payloadState = 0;
+  break;
+case state::UpdateState::NOT_APPLIED:
+case state::UpdateState::PARTIALLY_APPLIED:
+  payloadState = 1;
+  break;
+case state::UpdateState::READ_ERROR:
+  payloadState = 2;
+  break;
+case state::UpdateState::SET_ERROR:
+  payloadState = 3;
+  break;
+  }
+  stream->write(, 1);
+  return 0;
+}
+
+int CoapProtocol::writeHeartbeat(io::BaseStream *stream, const C2Payload 
) {
+  bool byte;
+  uint16_t size = 0;
+
+  std::string deviceIdent;
+  // device identifier
+  auto deviceInfo = getPayload("deviceInfo", payload);
+  if (deviceInfo) {
+for (const auto  : deviceInfo->getContent()) {
+  if (!getString(, "identifier", )) {
+break;
+  }
+}
+  }
+  stream->writeUTF(deviceIdent, false);
+  std::string agentIdent;
+  // agent identifier
+  auto agentInfo = getPayload("agentInfo", payload);
+  if (agentInfo) {
+for (const auto  : agentInfo->getContent()) {
+  if (!getString(, "identifier", )) {
+break;
+  }
+}
+  }
+  if (agentIdent.empty()) {
+return -1;
+  }
+  stream->writeUTF(agentIdent, false);
+
+  auto flowInfo = getPayload("flowInfo", payload);
+
+  if (flowInfo != nullptr) {
+
+auto components = getPayload("components", flowInfo);
+
+auto queues = getPayload("queues", flowInfo);
+
+auto versionedFlowSnapshotURI = getPayload("versionedFlowSnapshotURI", 
flowInfo);
+
+if (components && queues && versionedFlowSnapshotURI) {
+  byte = true;
+  stream->write(byte);
+  size = components->getNestedPayloads().size();
+  stream->write(size);
+  // write statuses
+  for (const auto  : components->getNestedPayloads()) {
+stream->writeUTF(component.getLabel(), false);
+for (const auto  : 

[jira] [Updated] (NIFI-5652) LogMessage emits "transfer relationship not specified" error

2018-11-14 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5652:
---
Affects Version/s: (was: 1.7.1)
   Status: Patch Available  (was: In Progress)

> LogMessage emits "transfer relationship not specified" error
> 
>
> Key: NIFI-5652
> URL: https://issues.apache.org/jira/browse/NIFI-5652
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
> Environment: 3 nodes cluster of CentOS 7.4 servers; JRE 1.8
>Reporter: Takeshi Koya
>Assignee: Matt Burgess
>Priority: Minor
>
> Hi, all NiFi developers and users.
> I upgraded our staging cluster from 1.5 to 1.7.1 recently, 
>  but I am in trouble with LogMessage errors.
> The combination of a log level and a bulletin level that is higher than the 
> log level seems to bring about the following "transfer relationship not 
> specified" error.
> {code:java}
> 2018-10-02 00:00:00,442 ERROR [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.standard.LogMessage LogMessage[id=...] LogMessage[id=...] 
> failed to process session due to 
> org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified; Processor Administratively Yielded for 
> 1 sec: org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified
> {code}
> Errors disappear if we make a bulletin level equal to or lower than a log 
> level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3170: NIFI-5652: Fixed LogMessage when logging level is d...

2018-11-14 Thread mattyb149
GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/3170

NIFI-5652: Fixed LogMessage when logging level is disabled

I couldn't write a unit test as I can't change the log level in between 
unit tests. I reproduced the issue and verified it is no longer present with a 
live NiFi instance running with this patch.

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-5652

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3170.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3170


commit 6c08aea9a7394237d5e575086fe965001e3700d7
Author: Matthew Burgess 
Date:   2018-11-14T15:50:37Z

NIFI-5652: Fixed LogMessage when logging level is disabled




---


[jira] [Assigned] (NIFI-5652) LogMessage emits "transfer relationship not specified" error

2018-11-14 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-5652:
--

Assignee: Matt Burgess

> LogMessage emits "transfer relationship not specified" error
> 
>
> Key: NIFI-5652
> URL: https://issues.apache.org/jira/browse/NIFI-5652
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.7.1
> Environment: 3 nodes cluster of CentOS 7.4 servers; JRE 1.8
>Reporter: Takeshi Koya
>Assignee: Matt Burgess
>Priority: Minor
>
> Hi, all NiFi developers and users.
> I upgraded our staging cluster from 1.5 to 1.7.1 recently, 
>  but I am in trouble with LogMessage errors.
> The combination of a log level and a bulletin level that is higher than the 
> log level seems to bring about the following "transfer relationship not 
> specified" error.
> {code:java}
> 2018-10-02 00:00:00,442 ERROR [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.standard.LogMessage LogMessage[id=...] LogMessage[id=...] 
> failed to process session due to 
> org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified; Processor Administratively Yielded for 
> 1 sec: org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=...,claim=StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=1538373600261-460, container=default, 
> section=460], offset=7822560, 
> length=8434],offset=0,name=potaufeu_topics_static_ip-172-19-25-78_20181001162829878.tar.gz,size=8434]
>  transfer relationship not specified
> {code}
> Errors disappear if we make a bulletin level equal to or lower than a log 
> level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5818) JDBCommon does not convert SQL result sets to avro when columns have no name

2018-11-14 Thread Charlie Meyer (JIRA)
Charlie Meyer created NIFI-5818:
---

 Summary: JDBCommon does not convert SQL result sets to avro when 
columns have no name
 Key: NIFI-5818
 URL: https://issues.apache.org/jira/browse/NIFI-5818
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.8.0
Reporter: Charlie Meyer


I'm using ExecuteSQLRecord, but this seems to happen on any processor that 
relies on JdbcCommon for converting to Avro under the covers.

If I run a query like {{SELECT 1}} I get a stack trace:
{code:java}
{ "cause": { "cause": null, "stackTrace": [ { "methodName": "validateName", 
"fileName": "Schema.java", "lineNumber": 1144, "className": 
"org.apache.avro.Schema", "nativeMethod": false }, { "methodName": 
"access$200", "fileName": "Schema.java", "lineNumber": 81, "className": 
"org.apache.avro.Schema", "nativeMethod": false }, { "methodName": "", 
"fileName": "Schema.java", "lineNumber": 403, "className": 
"org.apache.avro.Schema$Field", "nativeMethod": false }, { "methodName": 
"completeField", "fileName": "SchemaBuilder.java", "lineNumber": 2124, 
"className": "org.apache.avro.SchemaBuilder$FieldBuilder", "nativeMethod": 
false }, { "methodName": "completeField", "fileName": "SchemaBuilder.java", 
"lineNumber": 2120, "className": "org.apache.avro.SchemaBuilder$FieldBuilder", 
"nativeMethod": false }, { "methodName": "access$5200", "fileName": 
"SchemaBuilder.java", "lineNumber": 2034, "className": 
"org.apache.avro.SchemaBuilder$FieldBuilder", "nativeMethod": false }, { 
"methodName": "noDefault", "fileName": "SchemaBuilder.java", "lineNumber": 
2146, "className": "org.apache.avro.SchemaBuilder$FieldDefault", 
"nativeMethod": false }, { "methodName": "createSchema", "fileName": 
"JdbcCommon.java", "lineNumber": 577, "className": 
"org.apache.nifi.processors.standard.util.JdbcCommon", "nativeMethod": false }, 
{ "methodName": "writeResultSet", "fileName": "RecordSqlWriter.java", 
"lineNumber": 68, "className": 
"org.apache.nifi.processors.standard.sql.RecordSqlWriter", "nativeMethod": 
false }, { "methodName": "lambda$onTrigger$1", "fileName": 
"ExecuteSqlRecordWithoutSwallowingErrors.java", "lineNumber": 362, "className": 
"com.civitaslearning.collect.nifi.processor.ExecuteSqlRecordWithoutSwallowingErrors",
 "nativeMethod": false }, { "methodName": "write", "fileName": 
"StandardProcessSession.java", "lineNumber": 2648, "className": 
"org.apache.nifi.controller.repository.StandardProcessSession", "nativeMethod": 
false }, { "methodName": "onTrigger", "fileName": 
"ExecuteSqlRecordWithoutSwallowingErrors.java", "lineNumber": 360, "className": 
"com.civitaslearning.collect.nifi.processor.ExecuteSqlRecordWithoutSwallowingErrors",
 "nativeMethod": false }, { "methodName": "onTrigger", "fileName": 
"AbstractProcessor.java", "lineNumber": 27, "className": 
"org.apache.nifi.processor.AbstractProcessor", "nativeMethod": false }, { 
"methodName": "onTrigger", "fileName": "StandardProcessorNode.java", 
"lineNumber": 1165, "className": 
"org.apache.nifi.controller.StandardProcessorNode", "nativeMethod": false }, { 
"methodName": "invoke", "fileName": "ConnectableTask.java", "lineNumber": 203, 
"className": "org.apache.nifi.controller.tasks.ConnectableTask", 
"nativeMethod": false }, { "methodName": "run", "fileName": 
"TimerDrivenSchedulingAgent.java", "lineNumber": 117, "className": 
"org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1", 
"nativeMethod": false }, { "methodName": "call", "fileName": "Executors.java", 
"lineNumber": 511, "className": 
"java.util.concurrent.Executors$RunnableAdapter", "nativeMethod": false }, { 
"methodName": "runAndReset", "fileName": "FutureTask.java", "lineNumber": 308, 
"className": "java.util.concurrent.FutureTask", "nativeMethod": false }, { 
"methodName": "access$301", "fileName": "ScheduledThreadPoolExecutor.java", 
"lineNumber": 180, "className": 
"java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask", 
"nativeMethod": false }, { "methodName": "run", "fileName": 
"ScheduledThreadPoolExecutor.java", "lineNumber": 294, "className": 
"java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask", 
"nativeMethod": false }, { "methodName": "runWorker", "fileName": 
"ThreadPoolExecutor.java", "lineNumber": 1149, "className": 
"java.util.concurrent.ThreadPoolExecutor", "nativeMethod": false }, { 
"methodName": "run", "fileName": "ThreadPoolExecutor.java", "lineNumber": 624, 
"className": "java.util.concurrent.ThreadPoolExecutor$Worker", "nativeMethod": 
false }, { "methodName": "run", "fileName": "Thread.java", "lineNumber": 748, 
"className": "java.lang.Thread", "nativeMethod": false } ], "message": "Empty 
name", "localizedMessage": "Empty name", "suppressed": [] }, "stackTrace": [ { 
"methodName": "lambda$onTrigger$1", "fileName": 
"ExecuteSqlRecordWithoutSwallowingErrors.java", "lineNumber": 364, "className": 

[jira] [Commented] (MINIFICPP-558) Move PayloadSerializer in preparation for Coap

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686743#comment-16686743
 ] 

ASF GitHub Bot commented on MINIFICPP-558:
--

Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233501997
  
--- Diff: extensions/coap/nanofi/coap_functions.c ---
@@ -0,0 +1,175 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "coap_functions.h"
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Initialize the API access. Not thread safe.
+ */
+void init_coap_api(void *rcvr, callback_pointers *ptrs) {
+  global_ptrs.data_received = ptrs->data_received;
+  global_ptrs.received_error = ptrs->received_error;
+  receiver = rcvr;
+}
+
+
+int create_session(coap_context_t **ctx, coap_session_t **session, const 
char *node, const char *port, coap_address_t *dst_addr) {
+  int s;
+  struct addrinfo hints;
+  coap_proto_t proto = COAP_PROTO_UDP;
+  struct addrinfo *result, *rp;
+
+  memset(, 0, sizeof(struct addrinfo));
+  hints.ai_family = AF_UNSPEC; // ipv4 or ipv6
+  hints.ai_socktype = COAP_PROTO_RELIABLE(proto) ? SOCK_STREAM : 
SOCK_DGRAM;
+  hints.ai_flags = AI_PASSIVE | AI_NUMERICHOST | AI_NUMERICSERV | AI_ALL;
+
+  s = getaddrinfo(node, port, , );
+  if (s != 0) {
+fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
+return -1;
+  }
+
+  for (rp = result; rp != NULL; rp = rp->ai_next) {
+coap_address_t addr;
+
+if (rp->ai_addrlen <= sizeof(addr.addr)) {
+  coap_address_init();
+  addr.size = rp->ai_addrlen;
+  memcpy(, rp->ai_addr, rp->ai_addrlen);
+
+  *ctx = coap_new_context(0x00);
+
+  *session = coap_new_client_session(*ctx, , dst_addr, proto);
+  if (*ctx && *session) {
+freeaddrinfo(result);
+return 0;
+  }
+}
+  }
+
+  fprintf(stderr, "no context available for interface '%s'\n", node);
+
+  freeaddrinfo(result);
+  return -1;
+}
+
+struct coap_pdu_t *create_request(struct coap_context_t *ctx,struct 
coap_session_t *session,coap_optlist_t **optlist, unsigned char code, 
coap_str_const_t *ptr) {
+  coap_pdu_t *pdu;
+
+  if (!(pdu = coap_new_pdu(session)))
+return NULL;
+
+  pdu->type = COAP_MESSAGE_CON;
+  pdu->tid = coap_new_message_id(session);
+  pdu->code = code;
+
+  if (optlist){
+coap_add_optlist_pdu(pdu, optlist);
+  }
+
+  int flags = 0;
+  coap_add_data(pdu, ptr->length, ptr->s);
+  return pdu;
+}
+
+int coap_event(struct coap_context_t *ctx, coap_event_t event, struct 
coap_session_t *session){
+if (event == COAP_EVENT_SESSION_FAILED && global_ptrs.received_error){
+global_ptrs.received_error(receiver, ctx, -1);
+}
+return 0;
+}
+
+void no_acknowledgement(struct coap_context_t *ctx, coap_session_t 
*session, coap_pdu_t *sent, coap_nack_reason_t reason, const coap_tid_t id){
+  if (global_ptrs.received_error){
+  global_ptrs.received_error(receiver, ctx, -1);
+  }
+}
+
+void response_handler(struct coap_context_t *ctx, struct coap_session_t 
*session, coap_pdu_t *sent, coap_pdu_t *received, const coap_tid_t id) {
+  unsigned char* data;
+  size_t data_len;
+  coap_opt_iterator_t opt_iter;
+  coap_opt_t * block_opt = coap_check_option(received, COAP_OPTION_BLOCK1, 
_iter);
+  if (block_opt) {
+  } else {
+if (!global_ptrs.data_received){
+  return;
+}
+
+if (COAP_RESPONSE_CLASS(received->code) == 2 || received->code == 
COAP_RESPONSE_400) {
+  if (coap_get_data(received, _len, )) {
+if (global_ptrs.data_received)
+global_ptrs.data_received(receiver, ctx, received->code, data, 
_len);
+  }
+}
   

[GitHub] nifi-minifi-cpp pull request #437: MINIFICPP-558: initial provisioning for C...

2018-11-14 Thread arpadboda
Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233501997
  
--- Diff: extensions/coap/nanofi/coap_functions.c ---
@@ -0,0 +1,175 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "coap_functions.h"
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Initialize the API access. Not thread safe.
+ */
+void init_coap_api(void *rcvr, callback_pointers *ptrs) {
+  global_ptrs.data_received = ptrs->data_received;
+  global_ptrs.received_error = ptrs->received_error;
+  receiver = rcvr;
+}
+
+
+int create_session(coap_context_t **ctx, coap_session_t **session, const 
char *node, const char *port, coap_address_t *dst_addr) {
+  int s;
+  struct addrinfo hints;
+  coap_proto_t proto = COAP_PROTO_UDP;
+  struct addrinfo *result, *rp;
+
+  memset(, 0, sizeof(struct addrinfo));
+  hints.ai_family = AF_UNSPEC; // ipv4 or ipv6
+  hints.ai_socktype = COAP_PROTO_RELIABLE(proto) ? SOCK_STREAM : 
SOCK_DGRAM;
+  hints.ai_flags = AI_PASSIVE | AI_NUMERICHOST | AI_NUMERICSERV | AI_ALL;
+
+  s = getaddrinfo(node, port, , );
+  if (s != 0) {
+fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
+return -1;
+  }
+
+  for (rp = result; rp != NULL; rp = rp->ai_next) {
+coap_address_t addr;
+
+if (rp->ai_addrlen <= sizeof(addr.addr)) {
+  coap_address_init();
+  addr.size = rp->ai_addrlen;
+  memcpy(, rp->ai_addr, rp->ai_addrlen);
+
+  *ctx = coap_new_context(0x00);
+
+  *session = coap_new_client_session(*ctx, , dst_addr, proto);
+  if (*ctx && *session) {
+freeaddrinfo(result);
+return 0;
+  }
+}
+  }
+
+  fprintf(stderr, "no context available for interface '%s'\n", node);
+
+  freeaddrinfo(result);
+  return -1;
+}
+
+struct coap_pdu_t *create_request(struct coap_context_t *ctx,struct 
coap_session_t *session,coap_optlist_t **optlist, unsigned char code, 
coap_str_const_t *ptr) {
+  coap_pdu_t *pdu;
+
+  if (!(pdu = coap_new_pdu(session)))
+return NULL;
+
+  pdu->type = COAP_MESSAGE_CON;
+  pdu->tid = coap_new_message_id(session);
+  pdu->code = code;
+
+  if (optlist){
+coap_add_optlist_pdu(pdu, optlist);
+  }
+
+  int flags = 0;
+  coap_add_data(pdu, ptr->length, ptr->s);
+  return pdu;
+}
+
+int coap_event(struct coap_context_t *ctx, coap_event_t event, struct 
coap_session_t *session){
+if (event == COAP_EVENT_SESSION_FAILED && global_ptrs.received_error){
+global_ptrs.received_error(receiver, ctx, -1);
+}
+return 0;
+}
+
+void no_acknowledgement(struct coap_context_t *ctx, coap_session_t 
*session, coap_pdu_t *sent, coap_nack_reason_t reason, const coap_tid_t id){
+  if (global_ptrs.received_error){
+  global_ptrs.received_error(receiver, ctx, -1);
+  }
+}
+
+void response_handler(struct coap_context_t *ctx, struct coap_session_t 
*session, coap_pdu_t *sent, coap_pdu_t *received, const coap_tid_t id) {
+  unsigned char* data;
+  size_t data_len;
+  coap_opt_iterator_t opt_iter;
+  coap_opt_t * block_opt = coap_check_option(received, COAP_OPTION_BLOCK1, &opt_iter);
+  if (block_opt) {
+  } else {
+if (!global_ptrs.data_received){
+  return;
+}
+
+if (COAP_RESPONSE_CLASS(received->code) == 2 || received->code == COAP_RESPONSE_400) {
+  if (coap_get_data(received, &data_len, &data)) {
+if (global_ptrs.data_received)
+global_ptrs.data_received(receiver, ctx, received->code, data, data_len);
+  }
+}
+else{
+  if (global_ptrs.received_error)
+global_ptrs.received_error(receiver, ctx, received->code);
+}
+  }
+
+}
+
+int resolve_address(const struct coap_str_const_t *server, struct sockaddr 

[jira] [Commented] (MINIFICPP-558) Move PayloadSerializer in preparation for Coap

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686739#comment-16686739
 ] 

ASF GitHub Bot commented on MINIFICPP-558:
--

Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233500950
  
--- Diff: extensions/coap/nanofi/coap_functions.c ---
@@ -0,0 +1,175 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "coap_functions.h"
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Initialize the API access. Not thread safe.
+ */
+void init_coap_api(void *rcvr, callback_pointers *ptrs) {
+  global_ptrs.data_received = ptrs->data_received;
+  global_ptrs.received_error = ptrs->received_error;
+  receiver = rcvr;
+}
+
+
+int create_session(coap_context_t **ctx, coap_session_t **session, const 
char *node, const char *port, coap_address_t *dst_addr) {
+  int s;
+  struct addrinfo hints;
+  coap_proto_t proto = COAP_PROTO_UDP;
+  struct addrinfo *result, *rp;
+
+  memset(&hints, 0, sizeof(struct addrinfo));
+  hints.ai_family = AF_UNSPEC; // ipv4 or ipv6
+  hints.ai_socktype = COAP_PROTO_RELIABLE(proto) ? SOCK_STREAM : SOCK_DGRAM;
+  hints.ai_flags = AI_PASSIVE | AI_NUMERICHOST | AI_NUMERICSERV | AI_ALL;
+
+  s = getaddrinfo(node, port, &hints, &result);
+  if (s != 0) {
+fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
+return -1;
+  }
+
+  for (rp = result; rp != NULL; rp = rp->ai_next) {
+coap_address_t addr;
+
+if (rp->ai_addrlen <= sizeof(addr.addr)) {
+  coap_address_init(&addr);
+  addr.size = rp->ai_addrlen;
+  memcpy(&addr.addr, rp->ai_addr, rp->ai_addrlen);
+
+  *ctx = coap_new_context(0x00);
+
+  *session = coap_new_client_session(*ctx, &addr, dst_addr, proto);
+  if (*ctx && *session) {
+freeaddrinfo(result);
+return 0;
+  }
+}
+  }
+
+  fprintf(stderr, "no context available for interface '%s'\n", node);
+
+  freeaddrinfo(result);
+  return -1;
+}
+
+struct coap_pdu_t *create_request(struct coap_context_t *ctx,struct 
coap_session_t *session,coap_optlist_t **optlist, unsigned char code, 
coap_str_const_t *ptr) {
+  coap_pdu_t *pdu;
+
+  if (!(pdu = coap_new_pdu(session)))
+return NULL;
+
+  pdu->type = COAP_MESSAGE_CON;
+  pdu->tid = coap_new_message_id(session);
+  pdu->code = code;
+
+  if (optlist){
+coap_add_optlist_pdu(pdu, optlist);
+  }
+
+  int flags = 0;
+  coap_add_data(pdu, ptr->length, ptr->s);
+  return pdu;
+}
+
+int coap_event(struct coap_context_t *ctx, coap_event_t event, struct 
coap_session_t *session){
+if (event == COAP_EVENT_SESSION_FAILED && global_ptrs.received_error){
+global_ptrs.received_error(receiver, ctx, -1);
+}
+return 0;
+}
+
+void no_acknowledgement(struct coap_context_t *ctx, coap_session_t 
*session, coap_pdu_t *sent, coap_nack_reason_t reason, const coap_tid_t id){
+  if (global_ptrs.received_error){
+  global_ptrs.received_error(receiver, ctx, -1);
+  }
+}
+
+void response_handler(struct coap_context_t *ctx, struct coap_session_t 
*session, coap_pdu_t *sent, coap_pdu_t *received, const coap_tid_t id) {
+  unsigned char* data;
+  size_t data_len;
+  coap_opt_iterator_t opt_iter;
+  coap_opt_t * block_opt = coap_check_option(received, COAP_OPTION_BLOCK1, &opt_iter);
+  if (block_opt) {
--- End diff --

Why not
```
if(!block_opt)
```
?
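
For illustration only, a minimal sketch (not the committed code) of the early-return shape this question points at, using just the names visible in the quoted diff (`global_ptrs`, `receiver`, the libcoap option/PDU helpers) and assuming they are declared in `coap_functions.h`:

```
// Sketch: response_handler with the empty "if (block_opt)" branch inverted
// into an early return, as the review comment suggests.
#include "coap_functions.h"  // assumed to declare global_ptrs and receiver

void response_handler(struct coap_context_t *ctx, struct coap_session_t *session,
                      coap_pdu_t *sent, coap_pdu_t *received, const coap_tid_t id) {
  unsigned char *data;
  size_t data_len;
  coap_opt_iterator_t opt_iter;

  // Blockwise (BLOCK1) responses are not handled here, so bail out early
  // instead of keeping an empty branch.
  if (coap_check_option(received, COAP_OPTION_BLOCK1, &opt_iter)) {
    return;
  }
  if (!global_ptrs.data_received) {
    return;
  }
  if (COAP_RESPONSE_CLASS(received->code) == 2 || received->code == COAP_RESPONSE_400) {
    if (coap_get_data(received, &data_len, &data)) {
      global_ptrs.data_received(receiver, ctx, received->code, data, data_len);
    }
  } else if (global_ptrs.received_error) {
    global_ptrs.received_error(receiver, ctx, received->code);
  }
}
```

Functionally this matches the quoted handler; only the control flow changes.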


> Move PayloadSerializer in preparation for Coap
> --
>
> Key: MINIFICPP-558
> URL: https://issues.apache.org/jira/browse/MINIFICPP-558
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault

[GitHub] nifi-minifi-cpp pull request #437: MINIFICPP-558: initial provisioning for C...

2018-11-14 Thread arpadboda
Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233500950
  
--- Diff: extensions/coap/nanofi/coap_functions.c ---
@@ -0,0 +1,175 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "coap_functions.h"
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Initialize the API access. Not thread safe.
+ */
+void init_coap_api(void *rcvr, callback_pointers *ptrs) {
+  global_ptrs.data_received = ptrs->data_received;
+  global_ptrs.received_error = ptrs->received_error;
+  receiver = rcvr;
+}
+
+
+int create_session(coap_context_t **ctx, coap_session_t **session, const 
char *node, const char *port, coap_address_t *dst_addr) {
+  int s;
+  struct addrinfo hints;
+  coap_proto_t proto = COAP_PROTO_UDP;
+  struct addrinfo *result, *rp;
+
+  memset(&hints, 0, sizeof(struct addrinfo));
+  hints.ai_family = AF_UNSPEC; // ipv4 or ipv6
+  hints.ai_socktype = COAP_PROTO_RELIABLE(proto) ? SOCK_STREAM : SOCK_DGRAM;
+  hints.ai_flags = AI_PASSIVE | AI_NUMERICHOST | AI_NUMERICSERV | AI_ALL;
+
+  s = getaddrinfo(node, port, &hints, &result);
+  if (s != 0) {
+fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
+return -1;
+  }
+
+  for (rp = result; rp != NULL; rp = rp->ai_next) {
+coap_address_t addr;
+
+if (rp->ai_addrlen <= sizeof(addr.addr)) {
+  coap_address_init(&addr);
+  addr.size = rp->ai_addrlen;
+  memcpy(&addr.addr, rp->ai_addr, rp->ai_addrlen);
+
+  *ctx = coap_new_context(0x00);
+
+  *session = coap_new_client_session(*ctx, &addr, dst_addr, proto);
+  if (*ctx && *session) {
+freeaddrinfo(result);
+return 0;
+  }
+}
+  }
+
+  fprintf(stderr, "no context available for interface '%s'\n", node);
+
+  freeaddrinfo(result);
+  return -1;
+}
+
+struct coap_pdu_t *create_request(struct coap_context_t *ctx,struct 
coap_session_t *session,coap_optlist_t **optlist, unsigned char code, 
coap_str_const_t *ptr) {
+  coap_pdu_t *pdu;
+
+  if (!(pdu = coap_new_pdu(session)))
+return NULL;
+
+  pdu->type = COAP_MESSAGE_CON;
+  pdu->tid = coap_new_message_id(session);
+  pdu->code = code;
+
+  if (optlist){
+coap_add_optlist_pdu(pdu, optlist);
+  }
+
+  int flags = 0;
+  coap_add_data(pdu, ptr->length, ptr->s);
+  return pdu;
+}
+
+int coap_event(struct coap_context_t *ctx, coap_event_t event, struct 
coap_session_t *session){
+if (event == COAP_EVENT_SESSION_FAILED && global_ptrs.received_error){
+global_ptrs.received_error(receiver, ctx, -1);
+}
+return 0;
+}
+
+void no_acknowledgement(struct coap_context_t *ctx, coap_session_t 
*session, coap_pdu_t *sent, coap_nack_reason_t reason, const coap_tid_t id){
+  if (global_ptrs.received_error){
+  global_ptrs.received_error(receiver, ctx, -1);
+  }
+}
+
+void response_handler(struct coap_context_t *ctx, struct coap_session_t 
*session, coap_pdu_t *sent, coap_pdu_t *received, const coap_tid_t id) {
+  unsigned char* data;
+  size_t data_len;
+  coap_opt_iterator_t opt_iter;
+  coap_opt_t * block_opt = coap_check_option(received, COAP_OPTION_BLOCK1, &opt_iter);
+  if (block_opt) {
--- End diff --

Why not
```
if(!block_opt)
```
?


---


[jira] [Commented] (MINIFICPP-558) Move PayloadSerializer in preparation for Coap

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686737#comment-16686737
 ] 

ASF GitHub Bot commented on MINIFICPP-558:
--

Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233500619
  
--- Diff: extensions/coap/nanofi/coap_functions.c ---
@@ -0,0 +1,175 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "coap_functions.h"
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Initialize the API access. Not thread safe.
+ */
+void init_coap_api(void *rcvr, callback_pointers *ptrs) {
+  global_ptrs.data_received = ptrs->data_received;
+  global_ptrs.received_error = ptrs->received_error;
+  receiver = rcvr;
+}
+
+
+int create_session(coap_context_t **ctx, coap_session_t **session, const 
char *node, const char *port, coap_address_t *dst_addr) {
+  int s;
+  struct addrinfo hints;
+  coap_proto_t proto = COAP_PROTO_UDP;
+  struct addrinfo *result, *rp;
+
+  memset(&hints, 0, sizeof(struct addrinfo));
+  hints.ai_family = AF_UNSPEC; // ipv4 or ipv6
+  hints.ai_socktype = COAP_PROTO_RELIABLE(proto) ? SOCK_STREAM : SOCK_DGRAM;
+  hints.ai_flags = AI_PASSIVE | AI_NUMERICHOST | AI_NUMERICSERV | AI_ALL;
+
+  s = getaddrinfo(node, port, &hints, &result);
+  if (s != 0) {
+fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
+return -1;
+  }
+
+  for (rp = result; rp != NULL; rp = rp->ai_next) {
+coap_address_t addr;
+
+if (rp->ai_addrlen <= sizeof(addr.addr)) {
+  coap_address_init(&addr);
+  addr.size = rp->ai_addrlen;
+  memcpy(&addr.addr, rp->ai_addr, rp->ai_addrlen);
+
+  *ctx = coap_new_context(0x00);
+
+  *session = coap_new_client_session(*ctx, &addr, dst_addr, proto);
+  if (*ctx && *session) {
+freeaddrinfo(result);
+return 0;
+  }
+}
+  }
+
+  fprintf(stderr, "no context available for interface '%s'\n", node);
+
+  freeaddrinfo(result);
+  return -1;
+}
+
+struct coap_pdu_t *create_request(struct coap_context_t *ctx,struct 
coap_session_t *session,coap_optlist_t **optlist, unsigned char code, 
coap_str_const_t *ptr) {
+  coap_pdu_t *pdu;
+
+  if (!(pdu = coap_new_pdu(session)))
+return NULL;
+
+  pdu->type = COAP_MESSAGE_CON;
+  pdu->tid = coap_new_message_id(session);
+  pdu->code = code;
+
+  if (optlist){
+coap_add_optlist_pdu(pdu, optlist);
+  }
+
+  int flags = 0;
+  coap_add_data(pdu, ptr->length, ptr->s);
+  return pdu;
+}
+
+int coap_event(struct coap_context_t *ctx, coap_event_t event, struct 
coap_session_t *session){
+if (event == COAP_EVENT_SESSION_FAILED && global_ptrs.received_error){
--- End diff --

Please indent with 2 spaces


> Move PayloadSerializer in preparation for Coap
> --
>
> Key: MINIFICPP-558
> URL: https://issues.apache.org/jira/browse/MINIFICPP-558
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> Move PayloadSerializer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-558) Move PayloadSerializer in preparation for Coap

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686735#comment-16686735
 ] 

ASF GitHub Bot commented on MINIFICPP-558:
--

Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233500462
  
--- Diff: extensions/coap/nanofi/coap_functions.c ---
@@ -0,0 +1,175 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "coap_functions.h"
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Initialize the API access. Not thread safe.
+ */
+void init_coap_api(void *rcvr, callback_pointers *ptrs) {
+  global_ptrs.data_received = ptrs->data_received;
+  global_ptrs.received_error = ptrs->received_error;
+  receiver = rcvr;
+}
+
+
+int create_session(coap_context_t **ctx, coap_session_t **session, const 
char *node, const char *port, coap_address_t *dst_addr) {
+  int s;
+  struct addrinfo hints;
+  coap_proto_t proto = COAP_PROTO_UDP;
+  struct addrinfo *result, *rp;
+
+  memset(&hints, 0, sizeof(struct addrinfo));
+  hints.ai_family = AF_UNSPEC; // ipv4 or ipv6
+  hints.ai_socktype = COAP_PROTO_RELIABLE(proto) ? SOCK_STREAM : SOCK_DGRAM;
+  hints.ai_flags = AI_PASSIVE | AI_NUMERICHOST | AI_NUMERICSERV | AI_ALL;
+
+  s = getaddrinfo(node, port, &hints, &result);
+  if (s != 0) {
+fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
+return -1;
+  }
+
+  for (rp = result; rp != NULL; rp = rp->ai_next) {
+coap_address_t addr;
+
+if (rp->ai_addrlen <= sizeof(addr.addr)) {
+  coap_address_init(&addr);
+  addr.size = rp->ai_addrlen;
+  memcpy(&addr.addr, rp->ai_addr, rp->ai_addrlen);
+
+  *ctx = coap_new_context(0x00);
+
+  *session = coap_new_client_session(*ctx, &addr, dst_addr, proto);
+  if (*ctx && *session) {
+freeaddrinfo(result);
+return 0;
+  }
+}
+  }
+
+  fprintf(stderr, "no context available for interface '%s'\n", node);
+
+  freeaddrinfo(result);
+  return -1;
+}
+
+struct coap_pdu_t *create_request(struct coap_context_t *ctx,struct 
coap_session_t *session,coap_optlist_t **optlist, unsigned char code, 
coap_str_const_t *ptr) {
+  coap_pdu_t *pdu;
+
+  if (!(pdu = coap_new_pdu(session)))
+return NULL;
+
+  pdu->type = COAP_MESSAGE_CON;
+  pdu->tid = coap_new_message_id(session);
+  pdu->code = code;
+
+  if (optlist){
+coap_add_optlist_pdu(pdu, optlist);
+  }
+
+  int flags = 0;
+  coap_add_data(pdu, ptr->length, ptr->s);
+  return pdu;
+}
+
+int coap_event(struct coap_context_t *ctx, coap_event_t event, struct 
coap_session_t *session){
+if (event == COAP_EVENT_SESSION_FAILED && global_ptrs.received_error){
+global_ptrs.received_error(receiver, ctx, -1);
+}
+return 0;
+}
+
+void no_acknowledgement(struct coap_context_t *ctx, coap_session_t 
*session, coap_pdu_t *sent, coap_nack_reason_t reason, const coap_tid_t id){
+  if (global_ptrs.received_error){
+  global_ptrs.received_error(receiver, ctx, -1);
--- End diff --

Indentation


> Move PayloadSerializer in preparation for Coap
> --
>
> Key: MINIFICPP-558
> URL: https://issues.apache.org/jira/browse/MINIFICPP-558
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> Move PayloadSerializer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #437: MINIFICPP-558: initial provisioning for C...

2018-11-14 Thread arpadboda
Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233500619
  
--- Diff: extensions/coap/nanofi/coap_functions.c ---
@@ -0,0 +1,175 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "coap_functions.h"
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Initialize the API access. Not thread safe.
+ */
+void init_coap_api(void *rcvr, callback_pointers *ptrs) {
+  global_ptrs.data_received = ptrs->data_received;
+  global_ptrs.received_error = ptrs->received_error;
+  receiver = rcvr;
+}
+
+
+int create_session(coap_context_t **ctx, coap_session_t **session, const 
char *node, const char *port, coap_address_t *dst_addr) {
+  int s;
+  struct addrinfo hints;
+  coap_proto_t proto = COAP_PROTO_UDP;
+  struct addrinfo *result, *rp;
+
+  memset(&hints, 0, sizeof(struct addrinfo));
+  hints.ai_family = AF_UNSPEC; // ipv4 or ipv6
+  hints.ai_socktype = COAP_PROTO_RELIABLE(proto) ? SOCK_STREAM : SOCK_DGRAM;
+  hints.ai_flags = AI_PASSIVE | AI_NUMERICHOST | AI_NUMERICSERV | AI_ALL;
+
+  s = getaddrinfo(node, port, &hints, &result);
+  if (s != 0) {
+fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
+return -1;
+  }
+
+  for (rp = result; rp != NULL; rp = rp->ai_next) {
+coap_address_t addr;
+
+if (rp->ai_addrlen <= sizeof(addr.addr)) {
+  coap_address_init(&addr);
+  addr.size = rp->ai_addrlen;
+  memcpy(&addr.addr, rp->ai_addr, rp->ai_addrlen);
+
+  *ctx = coap_new_context(0x00);
+
+  *session = coap_new_client_session(*ctx, &addr, dst_addr, proto);
+  if (*ctx && *session) {
+freeaddrinfo(result);
+return 0;
+  }
+}
+  }
+
+  fprintf(stderr, "no context available for interface '%s'\n", node);
+
+  freeaddrinfo(result);
+  return -1;
+}
+
+struct coap_pdu_t *create_request(struct coap_context_t *ctx,struct 
coap_session_t *session,coap_optlist_t **optlist, unsigned char code, 
coap_str_const_t *ptr) {
+  coap_pdu_t *pdu;
+
+  if (!(pdu = coap_new_pdu(session)))
+return NULL;
+
+  pdu->type = COAP_MESSAGE_CON;
+  pdu->tid = coap_new_message_id(session);
+  pdu->code = code;
+
+  if (optlist){
+coap_add_optlist_pdu(pdu, optlist);
+  }
+
+  int flags = 0;
+  coap_add_data(pdu, ptr->length, ptr->s);
+  return pdu;
+}
+
+int coap_event(struct coap_context_t *ctx, coap_event_t event, struct 
coap_session_t *session){
+if (event == COAP_EVENT_SESSION_FAILED && global_ptrs.received_error){
--- End diff --

Please indent with 2 spaces


---


[GitHub] nifi-minifi-cpp pull request #437: MINIFICPP-558: initial provisioning for C...

2018-11-14 Thread arpadboda
Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233500462
  
--- Diff: extensions/coap/nanofi/coap_functions.c ---
@@ -0,0 +1,175 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "coap_functions.h"
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Initialize the API access. Not thread safe.
+ */
+void init_coap_api(void *rcvr, callback_pointers *ptrs) {
+  global_ptrs.data_received = ptrs->data_received;
+  global_ptrs.received_error = ptrs->received_error;
+  receiver = rcvr;
+}
+
+
+int create_session(coap_context_t **ctx, coap_session_t **session, const 
char *node, const char *port, coap_address_t *dst_addr) {
+  int s;
+  struct addrinfo hints;
+  coap_proto_t proto = COAP_PROTO_UDP;
+  struct addrinfo *result, *rp;
+
+  memset(&hints, 0, sizeof(struct addrinfo));
+  hints.ai_family = AF_UNSPEC; // ipv4 or ipv6
+  hints.ai_socktype = COAP_PROTO_RELIABLE(proto) ? SOCK_STREAM : SOCK_DGRAM;
+  hints.ai_flags = AI_PASSIVE | AI_NUMERICHOST | AI_NUMERICSERV | AI_ALL;
+
+  s = getaddrinfo(node, port, &hints, &result);
+  if (s != 0) {
+fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
+return -1;
+  }
+
+  for (rp = result; rp != NULL; rp = rp->ai_next) {
+coap_address_t addr;
+
+if (rp->ai_addrlen <= sizeof(addr.addr)) {
+  coap_address_init(&addr);
+  addr.size = rp->ai_addrlen;
+  memcpy(&addr.addr, rp->ai_addr, rp->ai_addrlen);
+
+  *ctx = coap_new_context(0x00);
+
+  *session = coap_new_client_session(*ctx, &addr, dst_addr, proto);
+  if (*ctx && *session) {
+freeaddrinfo(result);
+return 0;
+  }
+}
+  }
+
+  fprintf(stderr, "no context available for interface '%s'\n", node);
+
+  freeaddrinfo(result);
+  return -1;
+}
+
+struct coap_pdu_t *create_request(struct coap_context_t *ctx,struct 
coap_session_t *session,coap_optlist_t **optlist, unsigned char code, 
coap_str_const_t *ptr) {
+  coap_pdu_t *pdu;
+
+  if (!(pdu = coap_new_pdu(session)))
+return NULL;
+
+  pdu->type = COAP_MESSAGE_CON;
+  pdu->tid = coap_new_message_id(session);
+  pdu->code = code;
+
+  if (optlist){
+coap_add_optlist_pdu(pdu, optlist);
+  }
+
+  int flags = 0;
+  coap_add_data(pdu, ptr->length, ptr->s);
+  return pdu;
+}
+
+int coap_event(struct coap_context_t *ctx, coap_event_t event, struct 
coap_session_t *session){
+if (event == COAP_EVENT_SESSION_FAILED && global_ptrs.received_error){
+global_ptrs.received_error(receiver, ctx, -1);
+}
+return 0;
+}
+
+void no_acknowledgement(struct coap_context_t *ctx, coap_session_t 
*session, coap_pdu_t *sent, coap_nack_reason_t reason, const coap_tid_t id){
+  if (global_ptrs.received_error){
+  global_ptrs.received_error(receiver, ctx, -1);
--- End diff --

Indentation


---


[jira] [Created] (NIFI-5819) JDBCCommon does not handle converting SQLServer sql_variant type to avro

2018-11-14 Thread Charlie Meyer (JIRA)
Charlie Meyer created NIFI-5819:
---

 Summary: JDBCCommon does not handle converting SQLServer 
sql_variant type to avro
 Key: NIFI-5819
 URL: https://issues.apache.org/jira/browse/NIFI-5819
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.8.0
Reporter: Charlie Meyer


If I run a query using ExecuteSQLRecord against an MSSQL database, such as 
{{SELECT SERVERPROPERTY ('ProductVersion') AS MajorVersion}}, NiFi logs the 
following stack trace:
{code:java}
{ "cause": { "cause": null, "stackTrace": [ { "methodName": "createSchema", 
"fileName": "JdbcCommon.java", "lineNumber": 677, "className": 
"org.apache.nifi.processors.standard.util.JdbcCommon", "nativeMethod": false }, 
{ "methodName": "writeResultSet", "fileName": "RecordSqlWriter.java", 
"lineNumber": 68, "className": 
"org.apache.nifi.processors.standard.sql.RecordSqlWriter", "nativeMethod": 
false }, { "methodName": "lambda$onTrigger$1", "fileName": 
"ExecuteSqlRecordWithoutSwallowingErrors.java", "lineNumber": 362, "className": 
"com.civitaslearning.collect.nifi.processor.ExecuteSqlRecordWithoutSwallowingErrors",
 "nativeMethod": false }, { "methodName": "write", "fileName": 
"StandardProcessSession.java", "lineNumber": 2648, "className": 
"org.apache.nifi.controller.repository.StandardProcessSession", "nativeMethod": 
false }, { "methodName": "onTrigger", "fileName": 
"ExecuteSqlRecordWithoutSwallowingErrors.java", "lineNumber": 360, "className": 
"com.civitaslearning.collect.nifi.processor.ExecuteSqlRecordWithoutSwallowingErrors",
 "nativeMethod": false }, { "methodName": "onTrigger", "fileName": 
"AbstractProcessor.java", "lineNumber": 27, "className": 
"org.apache.nifi.processor.AbstractProcessor", "nativeMethod": false }, { 
"methodName": "onTrigger", "fileName": "StandardProcessorNode.java", 
"lineNumber": 1165, "className": 
"org.apache.nifi.controller.StandardProcessorNode", "nativeMethod": false }, { 
"methodName": "invoke", "fileName": "ConnectableTask.java", "lineNumber": 203, 
"className": "org.apache.nifi.controller.tasks.ConnectableTask", 
"nativeMethod": false }, { "methodName": "run", "fileName": 
"TimerDrivenSchedulingAgent.java", "lineNumber": 117, "className": 
"org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1", 
"nativeMethod": false }, { "methodName": "call", "fileName": "Executors.java", 
"lineNumber": 511, "className": 
"java.util.concurrent.Executors$RunnableAdapter", "nativeMethod": false }, { 
"methodName": "runAndReset", "fileName": "FutureTask.java", "lineNumber": 308, 
"className": "java.util.concurrent.FutureTask", "nativeMethod": false }, { 
"methodName": "access$301", "fileName": "ScheduledThreadPoolExecutor.java", 
"lineNumber": 180, "className": 
"java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask", 
"nativeMethod": false }, { "methodName": "run", "fileName": 
"ScheduledThreadPoolExecutor.java", "lineNumber": 294, "className": 
"java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask", 
"nativeMethod": false }, { "methodName": "runWorker", "fileName": 
"ThreadPoolExecutor.java", "lineNumber": 1149, "className": 
"java.util.concurrent.ThreadPoolExecutor", "nativeMethod": false }, { 
"methodName": "run", "fileName": "ThreadPoolExecutor.java", "lineNumber": 624, 
"className": "java.util.concurrent.ThreadPoolExecutor$Worker", "nativeMethod": 
false }, { "methodName": "run", "fileName": "Thread.java", "lineNumber": 748, 
"className": "java.lang.Thread", "nativeMethod": false } ], "message": 
"createSchema: Unknown SQL type -156 / sql_variant (table: 
NiFi_ExecuteSQL_Record, column: MajorVersion) cannot be converted to Avro 
type", "localizedMessage": "createSchema: Unknown SQL type -156 / sql_variant 
(table: NiFi_ExecuteSQL_Record, column: MajorVersion) cannot be converted to 
Avro type", "suppressed": [] }, "stackTrace": [ { "methodName": 
"lambda$onTrigger$1", "fileName": 
"ExecuteSqlRecordWithoutSwallowingErrors.java", "lineNumber": 364, "className": 
"com.civitaslearning.collect.nifi.processor.ExecuteSqlRecordWithoutSwallowingErrors",
 "nativeMethod": false }, { "methodName": "write", "fileName": 
"StandardProcessSession.java", "lineNumber": 2648, "className": 
"org.apache.nifi.controller.repository.StandardProcessSession", "nativeMethod": 
false }, { "methodName": "onTrigger", "fileName": 
"ExecuteSqlRecordWithoutSwallowingErrors.java", "lineNumber": 360, "className": 
"com.civitaslearning.collect.nifi.processor.ExecuteSqlRecordWithoutSwallowingErrors",
 "nativeMethod": false }, { "methodName": "onTrigger", "fileName": 
"AbstractProcessor.java", "lineNumber": 27, "className": 
"org.apache.nifi.processor.AbstractProcessor", "nativeMethod": false }, { 
"methodName": "onTrigger", "fileName": "StandardProcessorNode.java", 
"lineNumber": 1165, "className": 
"org.apache.nifi.controller.StandardProcessorNode", 

[GitHub] nifi-minifi-cpp pull request #437: MINIFICPP-558: initial provisioning for C...

2018-11-14 Thread arpadboda
Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233499744
  
--- Diff: extensions/coap/controllerservice/CoapConnector.h ---
@@ -0,0 +1,207 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef LIBMINIFI_INCLUDE_CONTROLLERS_COAPCONNECTOR_H_
+#define LIBMINIFI_INCLUDE_CONTROLLERS_COAPCONNECTOR_H_
+
+
+#include "core/logging/LoggerConfiguration.h"
+#include "coap_functions.h"
+#include "core/controller/ControllerService.h"
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace controllers {
+
+/**
+ * Purpose and Justification: Controller services function as a layerable 
way to provide
+ * services to internal services. While a controller service is generally 
configured from the flow,
+ * we want to follow the open closed principle and provide CoAP services 
to other components.
+ *
+ *
+ */
+class CoapConnectorService : public core::controller::ControllerService {
+ public:
+
+  /**
+   * CoAPMessage is an internal message format that is sent to and from consumers of this controller service.
+   */
+  class CoAPMessage {
+   public:
+
+explicit CoAPMessage(unsigned int code = 0)
+: code_(code) {
+}
+
+explicit CoAPMessage(unsigned int code, unsigned char *data, size_t 
dataLen)
+: code_(code) {
+  if (data && dataLen > 0)
+std::copy(data, data + dataLen, std::back_inserter(data_));
+}
+
+CoAPMessage(const CoAPMessage &) = delete;
+
+CoAPMessage(CoAPMessage &&) = default;
+
+~CoAPMessage() {
+}
+
+size_t getSize() const {
+  return data_.size();
+}
+unsigned char const *getData() const {
+  return data_.data();
+}
+
+bool isRegistrationRequest() {
+  if (data_.size() != 8) {
+return false;
+  }
+  return code_ == COAP_RESPONSE_400 && std::string((char*) 
data_.data(), data_.size()) == "register";
+}
+CoAPMessage &operator=(const CoAPMessage &) = delete;
+CoAPMessage &operator=(CoAPMessage &&) = default;
+   private:
+unsigned int code_;
+std::vector<uint8_t> data_;
+  };
+
+  /**
+   * Constructors for the controller service.
+   */
+  explicit CoapConnectorService(const std::string &name, const std::string &id)
+  : ControllerService(name, id),
+port_(0),
+initialized_(false),
+logger_(logging::LoggerFactory<CoapConnectorService>::getLogger()) {
+initialize();
+  }
+
+  explicit CoapConnectorService(const std::string &name, utils::Identifier uuid = utils::Identifier())
+  : ControllerService(name, uuid),
+port_(0),
+initialized_(false),
+logger_(logging::LoggerFactory<CoapConnectorService>::getLogger()) {
+initialize();
+  }
+
+  explicit CoapConnectorService(const std::string &name, const std::shared_ptr<Configure> &configuration)
+  : ControllerService(name),
+port_(0),
+initialized_(false),
+logger_(logging::LoggerFactory<CoapConnectorService>::getLogger()) {
+setConfiguration(configuration);
+initialize();
+  }
+
+  /**
+   * Parameters needed.
+   */
+  static core::Property RemoteServer;
+  static core::Property Port;
+  static core::Property MaxQueueSize;
+
+  virtual void initialize();
+
+  void yield() {
+
+  }
+
+  bool isRunning() {
+return getState() == core::controller::ControllerServiceState::ENABLED;
+  }
+
+  bool isWorkAvailable() {
+return false;
+  }
+
+  virtual void onEnable();
+
+  /**
+   * Sends the payload to the endpoint, returning the response as we 
await. Will retry transmission
+   * @param type type of payload to endpoint interaction ( GET, POST, PUT, 
DELETE ).
+   * @param end endpoint is the connecting 

[jira] [Commented] (MINIFICPP-558) Move PayloadSerializer in preparation for Coap

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686730#comment-16686730
 ] 

ASF GitHub Bot commented on MINIFICPP-558:
--

Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233499744
  
--- Diff: extensions/coap/controllerservice/CoapConnector.h ---
@@ -0,0 +1,207 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef LIBMINIFI_INCLUDE_CONTROLLERS_COAPCONNECTOR_H_
+#define LIBMINIFI_INCLUDE_CONTROLLERS_COAPCONNECTOR_H_
+
+
+#include "core/logging/LoggerConfiguration.h"
+#include "coap_functions.h"
+#include "core/controller/ControllerService.h"
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace controllers {
+
+/**
+ * Purpose and Justification: Controller services function as a layerable 
way to provide
+ * services to internal services. While a controller service is generally 
configured from the flow,
+ * we want to follow the open closed principle and provide CoAP services 
to other components.
+ *
+ *
+ */
+class CoapConnectorService : public core::controller::ControllerService {
+ public:
+
+  /**
+   * CoAPMessage is an internal message format that is sent to and from consumers of this controller service.
+   */
+  class CoAPMessage {
+   public:
+
+explicit CoAPMessage(unsigned int code = 0)
+: code_(code) {
+}
+
+explicit CoAPMessage(unsigned int code, unsigned char *data, size_t 
dataLen)
+: code_(code) {
+  if (data && dataLen > 0)
+std::copy(data, data + dataLen, std::back_inserter(data_));
+}
+
+CoAPMessage(const CoAPMessage &) = delete;
+
+CoAPMessage(CoAPMessage &&) = default;
+
+~CoAPMessage() {
+}
+
+size_t getSize() const {
+  return data_.size();
+}
+unsigned char const *getData() const {
+  return data_.data();
+}
+
+bool isRegistrationRequest() {
+  if (data_.size() != 8) {
+return false;
+  }
+  return code_ == COAP_RESPONSE_400 && std::string((char*) 
data_.data(), data_.size()) == "register";
+}
+CoAPMessage &operator=(const CoAPMessage &) = delete;
+CoAPMessage &operator=(CoAPMessage &&) = default;
+   private:
+unsigned int code_;
+std::vector<uint8_t> data_;
+  };
+
+  /**
+   * Constructors for the controller service.
+   */
+  explicit CoapConnectorService(const std::string &name, const std::string &id)
+  : ControllerService(name, id),
+port_(0),
+initialized_(false),
+logger_(logging::LoggerFactory<CoapConnectorService>::getLogger()) {
+initialize();
+  }
+
+  explicit CoapConnectorService(const std::string &name, utils::Identifier uuid = utils::Identifier())
+  : ControllerService(name, uuid),
+port_(0),
+initialized_(false),
+logger_(logging::LoggerFactory<CoapConnectorService>::getLogger()) {
+initialize();
+  }
+
+  explicit CoapConnectorService(const std::string &name, const std::shared_ptr<Configure> &configuration)
+  : ControllerService(name),
+port_(0),
+initialized_(false),
+logger_(logging::LoggerFactory<CoapConnectorService>::getLogger()) {
+setConfiguration(configuration);
+initialize();
+  }
+
+  /**
+   * Parameters needed.
+   */
+  static core::Property RemoteServer;
+  static core::Property Port;
+  static core::Property MaxQueueSize;
+
+  virtual void initialize();
+
+  void yield() {
+
+  }
+
+  bool isRunning() {
+return getState() == core::controller::ControllerServiceState::ENABLED;
+  }
+
+  bool isWorkAvailable() {
+return false;
+  }
+
+  virtual void onEnable();
+
+  /**
  

[jira] [Commented] (MINIFICPP-558) Move PayloadSerializer in preparation for Coap

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686721#comment-16686721
 ] 

ASF GitHub Bot commented on MINIFICPP-558:
--

Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233497401
  
--- Diff: docker/test/integration/minifi/test/__init__.py ---
@@ -42,6 +42,8 @@ def __init__(self, output_validator):
 
 self.segfault = False
 
+self.segfault = False
--- End diff --

This is already there two lines above. 


> Move PayloadSerializer in preparation for Coap
> --
>
> Key: MINIFICPP-558
> URL: https://issues.apache.org/jira/browse/MINIFICPP-558
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> Move PayloadSerializer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #437: MINIFICPP-558: initial provisioning for C...

2018-11-14 Thread arpadboda
Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233497401
  
--- Diff: docker/test/integration/minifi/test/__init__.py ---
@@ -42,6 +42,8 @@ def __init__(self, output_validator):
 
 self.segfault = False
 
+self.segfault = False
--- End diff --

This is already there two lines above. 


---


[jira] [Commented] (MINIFICPP-665) Fix basestream references

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686699#comment-16686699
 ] 

ASF GitHub Bot commented on MINIFICPP-665:
--

Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/435#discussion_r232960242
  
--- Diff: libminifi/src/io/BaseStream.cpp ---
@@ -143,15 +147,19 @@ int BaseStream::read(uint8_t *value, int len) {
  * @param buflen
  */
 int BaseStream::readData(std::vector<uint8_t> &buf, int buflen) {
-  return Serializable::read(&buf[0], buflen, reinterpret_cast<DataStream*>(composable_stream_));
+  return Serializable::read(&buf[0], buflen, composable_stream_);
 }
 /**
  * Reads data and places it into buf
  * @param buf buffer in which we extract data
  * @param buflen
  */
 int BaseStream::readData(uint8_t *buf, int buflen) {
-  return Serializable::read(buf, buflen, reinterpret_cast<DataStream*>(composable_stream_));
+  if (composable_stream_ == this) {
--- End diff --

Likely?
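
For context, a minimal sketch (not the committed patch) of the self-reference guard under discussion, assuming the fallback simply delegates to DataStream's own readData as the quoted diff suggests; if self-composition really is the common case, the branch could also carry a compiler likely-hint:

```
// Sketch only: guards against the self-composition described in MINIFICPP-665,
// so `this` is never handed back to Serializable::read() as the source stream.
// Assumes the declarations in io/BaseStream.h.
#include "io/BaseStream.h"

namespace org { namespace apache { namespace nifi { namespace minifi { namespace io {

int BaseStream::readData(uint8_t *buf, int buflen) {
  if (composable_stream_ == this) {
    // Self-composed stream: use the underlying DataStream implementation directly.
    return DataStream::readData(buf, buflen);
  }
  return Serializable::read(buf, buflen, composable_stream_);
}

}  // namespace io
}  // namespace minifi
}  // namespace nifi
}  // namespace apache
}  // namespace org
```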


> Fix basestream references
> -
>
> Key: MINIFICPP-665
> URL: https://issues.apache.org/jira/browse/MINIFICPP-665
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>
> At some point BaseStream was changed to be a composable pointer (hence the 
> name "base stream") as a wrapper for a variety of streams. This caused a bug 
> where a self reference could occur. We overload this stream to be a self 
> reference since it extends DataStream. This is a very likely usage, so we can 
> avoid the bugs by simply checking whether the composition is a reference to self. 
> This reference to self won't be removed, in the interest of time and of re-using 
> Serializable's interface. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #435: MINIFICPP-665: Add reference checks for s...

2018-11-14 Thread arpadboda
Github user arpadboda commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/435#discussion_r232960242
  
--- Diff: libminifi/src/io/BaseStream.cpp ---
@@ -143,15 +147,19 @@ int BaseStream::read(uint8_t *value, int len) {
  * @param buflen
  */
 int BaseStream::readData(std::vector<uint8_t> &buf, int buflen) {
-  return Serializable::read(&buf[0], buflen, reinterpret_cast<DataStream*>(composable_stream_));
+  return Serializable::read(&buf[0], buflen, composable_stream_);
 }
 /**
  * Reads data and places it into buf
  * @param buf buffer in which we extract data
  * @param buflen
  */
 int BaseStream::readData(uint8_t *buf, int buflen) {
-  return Serializable::read(buf, buflen, reinterpret_cast<DataStream*>(composable_stream_));
+  if (composable_stream_ == this) {
--- End diff --

Likely?


---


[jira] [Commented] (NIFI-4130) TransformXml - provide a way to define XSLT without external files

2018-11-14 Thread Corey Fritz (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686624#comment-16686624
 ] 

Corey Fritz commented on NIFI-4130:
---

Thanks, guys!

> TransformXml - provide a way to define XSLT without external files
> --
>
> Key: NIFI-4130
> URL: https://issues.apache.org/jira/browse/NIFI-4130
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.9.0
>
>
> In cluster deployments the need to reference external configuration files can 
> be annoying since it requires to access to all the NiFi nodes and to 
> correctly deploy the files. It would be interesting to leverage the lookup 
> controller services in TransformXml to provide a way to define XSLT directly 
> from the UI without external configuration files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4130) TransformXml - provide a way to define XSLT without external files

2018-11-14 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4130:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> TransformXml - provide a way to define XSLT without external files
> --
>
> Key: NIFI-4130
> URL: https://issues.apache.org/jira/browse/NIFI-4130
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.9.0
>
>
> In cluster deployments the need to reference external configuration files can 
> be annoying since it requires to access to all the NiFi nodes and to 
> correctly deploy the files. It would be interesting to leverage the lookup 
> controller services in TransformXml to provide a way to define XSLT directly 
> from the UI without external configuration files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4130) TransformXml - provide a way to define XSLT without external files

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686617#comment-16686617
 ] 

ASF GitHub Bot commented on NIFI-4130:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1953


> TransformXml - provide a way to define XSLT without external files
> --
>
> Key: NIFI-4130
> URL: https://issues.apache.org/jira/browse/NIFI-4130
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.9.0
>
>
> In cluster deployments the need to reference external configuration files can 
> be annoying since it requires to access to all the NiFi nodes and to 
> correctly deploy the files. It would be interesting to leverage the lookup 
> controller services in TransformXml to provide a way to define XSLT directly 
> from the UI without external configuration files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #1953: NIFI-4130 Add lookup controller service in Transfor...

2018-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1953


---


[jira] [Commented] (NIFI-4130) TransformXml - provide a way to define XSLT without external files

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686614#comment-16686614
 ] 

ASF GitHub Bot commented on NIFI-4130:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1953
  
+1 LGTM, thanks for the review @bdesert and the improvement @pvillard31 ! 
Merging to master


> TransformXml - provide a way to define XSLT without external files
> --
>
> Key: NIFI-4130
> URL: https://issues.apache.org/jira/browse/NIFI-4130
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.9.0
>
>
> In cluster deployments the need to reference external configuration files can 
> be annoying since it requires to access to all the NiFi nodes and to 
> correctly deploy the files. It would be interesting to leverage the lookup 
> controller services in TransformXml to provide a way to define XSLT directly 
> from the UI without external configuration files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4130) TransformXml - provide a way to define XSLT without external files

2018-11-14 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4130:
---
Fix Version/s: 1.9.0

> TransformXml - provide a way to define XSLT without external files
> --
>
> Key: NIFI-4130
> URL: https://issues.apache.org/jira/browse/NIFI-4130
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.9.0
>
>
> In cluster deployments the need to reference external configuration files can 
> be annoying since it requires to access to all the NiFi nodes and to 
> correctly deploy the files. It would be interesting to leverage the lookup 
> controller services in TransformXml to provide a way to define XSLT directly 
> from the UI without external configuration files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4130) TransformXml - provide a way to define XSLT without external files

2018-11-14 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686615#comment-16686615
 ] 

ASF subversion and git services commented on NIFI-4130:
---

Commit 4112af013d1b1f49e83f881d85ebe66e097840b5 in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=4112af0 ]

NIFI-4130 Add lookup controller service in TransformXML to define XSLT from the 
UI

addressed review comments

Signed-off-by: Matthew Burgess 

This closes #1953


> TransformXml - provide a way to define XSLT without external files
> --
>
> Key: NIFI-4130
> URL: https://issues.apache.org/jira/browse/NIFI-4130
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.9.0
>
>
> In cluster deployments the need to reference external configuration files can 
> be annoying since it requires to access to all the NiFi nodes and to 
> correctly deploy the files. It would be interesting to leverage the lookup 
> controller services in TransformXml to provide a way to define XSLT directly 
> from the UI without external configuration files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #1953: NIFI-4130 Add lookup controller service in TransformXML to...

2018-11-14 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1953
  
+1 LGTM, thanks for the review @bdesert and the improvement @pvillard31 ! 
Merging to master


---


[jira] [Commented] (MINIFICPP-558) Move PayloadSerializer in preparation for Coap

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686605#comment-16686605
 ] 

ASF GitHub Bot commented on MINIFICPP-558:
--

Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233474220
  
--- Diff: extensions/coap/CMakeLists.txt ---
@@ -0,0 +1,92 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+
+include(${CMAKE_SOURCE_DIR}/extensions/ExtensionHeader.txt)
+include_directories(protocols nanofi controllerservice)
+include_directories(../http-curl/)
+
+file(GLOB CSOURCES "nanofi/*.c")
+file(GLOB SOURCES "*.cpp" "protocols/*.cpp" "processors/*.cpp" 
"controllerservice/*.cpp" )
+
+add_library(nanofi-coap-c STATIC ${CSOURCES})
+add_library(minifi-coap STATIC ${SOURCES})
+set_property(TARGET minifi-coap PROPERTY POSITION_INDEPENDENT_CODE ON)
+
+if(CMAKE_THREAD_LIBS_INIT)
+  target_link_libraries(minifi-coap "${CMAKE_THREAD_LIBS_INIT}")
+endif()
+
+  set(BASE_DIR "${CMAKE_CURRENT_BINARY_DIR}/extensions/coap")
+  if (APPLE)
+  set(BYPRODUCT 
"${BASE_DIR}/extensions/coap/thirdparty/libcoap-src/.libs/libcoap-2-gnutls.a")
--- End diff --

this may need a little modification as artifacts can vary depending on what 
is used for building TLS


> Move PayloadSerializer in preparation for Coap
> --
>
> Key: MINIFICPP-558
> URL: https://issues.apache.org/jira/browse/MINIFICPP-558
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> Move PayloadSerializer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #437: MINIFICPP-558: initial provisioning for C...

2018-11-14 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437#discussion_r233474220
  
--- Diff: extensions/coap/CMakeLists.txt ---
@@ -0,0 +1,92 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+
+include(${CMAKE_SOURCE_DIR}/extensions/ExtensionHeader.txt)
+include_directories(protocols nanofi controllerservice)
+include_directories(../http-curl/)
+
+file(GLOB CSOURCES "nanofi/*.c")
+file(GLOB SOURCES "*.cpp" "protocols/*.cpp" "processors/*.cpp" 
"controllerservice/*.cpp" )
+
+add_library(nanofi-coap-c STATIC ${CSOURCES})
+add_library(minifi-coap STATIC ${SOURCES})
+set_property(TARGET minifi-coap PROPERTY POSITION_INDEPENDENT_CODE ON)
+
+if(CMAKE_THREAD_LIBS_INIT)
+  target_link_libraries(minifi-coap "${CMAKE_THREAD_LIBS_INIT}")
+endif()
+
+  set(BASE_DIR "${CMAKE_CURRENT_BINARY_DIR}/extensions/coap")
+  if (APPLE)
+  set(BYPRODUCT 
"${BASE_DIR}/extensions/coap/thirdparty/libcoap-src/.libs/libcoap-2-gnutls.a")
--- End diff --

this may need a little modification as artifacts can vary depending on what 
is used for building TLS


---


[jira] [Commented] (MINIFICPP-558) Move PayloadSerializer in preparation for Coap

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686602#comment-16686602
 ] 

ASF GitHub Bot commented on MINIFICPP-558:
--

Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/437
  
Adding tests presently, but providing a wip view. 


> Move PayloadSerializer in preparation for Coap
> --
>
> Key: MINIFICPP-558
> URL: https://issues.apache.org/jira/browse/MINIFICPP-558
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> Move PayloadSerializer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-558) Move PayloadSerializer in preparation for Coap

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686601#comment-16686601
 ] 

ASF GitHub Bot commented on MINIFICPP-558:
--

GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/437

MINIFICPP-558: initial provisioning for CoAP

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-558

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/437.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #437


commit bee4aad6c91b1926e90ac9ce5646e04e865410cc
Author: Marc Parisi 
Date:   2018-10-23T15:51:19Z

MINIFICPP-558: initial provisioning for CoAP




> Move PayloadSerializer in preparation for Coap
> --
>
> Key: MINIFICPP-558
> URL: https://issues.apache.org/jira/browse/MINIFICPP-558
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> Move PayloadSerializer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Created] (NIFIREG-211) Add extension bundles as a type of versioned item

2018-11-14 Thread Bryan Bende (JIRA)
Bryan Bende created NIFIREG-211:
---

 Summary: Add extension bundles as a type of versioned item
 Key: NIFIREG-211
 URL: https://issues.apache.org/jira/browse/NIFIREG-211
 Project: NiFi Registry
  Issue Type: Improvement
Reporter: Bryan Bende
Assignee: Bryan Bende
 Fix For: 0.4.0


This ticket is to capture the work for adding extension bundles to NiFi 
Registry.

This work may require several follow-on tickets, but at a high level will 
include some of the following:

- Add a new type of item called an extension bundle, where each bundle can 
contain one or more extensions or APIs

- Support bundles for traditional NiFi (aka NARs) and also bundles for 
MiNiFi CPP

- Ability to upload the binary artifact for a bundle and extract the 
metadata about the bundle, and metadata about the extensions contained in 
the bundle (more on this later)

- Provide a pluggable storage provider for saving the content of each 
extension bundle so that we can have different implementations like the 
local filesystem, S3, and other object stores (see the sketch after this 
list)

- Provide a REST API for listing and retrieving available bundles, and 
integrate this into the registry Java client and NiFi CLI

- Security considerations such as checksums and cryptographic signatures for 
bundles
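
To make the pluggable storage provider concrete, the sketch below shows one 
possible shape for such a contract. It is illustrative only: NiFi Registry 
itself is a Java project, the sketch is written in C++ purely for 
illustration, and every type and method name in it is hypothetical rather 
than an existing registry API.

// Hypothetical sketch only: not an existing NiFi Registry API. It shows the
// kind of contract a pluggable bundle-content provider could expose, with a
// trivial in-memory implementation to demonstrate the shape.
#include <cstdint>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// Coordinates identifying one version of one bundle.
struct BundleCoordinate {
  std::string bucket_id;
  std::string group;
  std::string artifact;
  std::string version;
};

// Pluggable content store: a local filesystem, S3, or another object store
// would each supply its own implementation of this interface.
class BundleContentProvider {
 public:
  virtual ~BundleContentProvider() = default;
  // Persist the binary content of a bundle; a real provider would also
  // record a checksum for later verification.
  virtual void store(const BundleCoordinate& coord,
                     const std::vector<uint8_t>& content) = 0;
  // Retrieve the previously stored binary content.
  virtual std::vector<uint8_t> retrieve(const BundleCoordinate& coord) const = 0;
};

// Trivial in-memory implementation, useful only to show the contract.
class InMemoryBundleContentProvider : public BundleContentProvider {
 public:
  void store(const BundleCoordinate& coord,
             const std::vector<uint8_t>& content) override {
    store_[key(coord)] = content;
  }
  std::vector<uint8_t> retrieve(const BundleCoordinate& coord) const override {
    auto it = store_.find(key(coord));
    if (it == store_.end()) throw std::runtime_error("bundle not found");
    return it->second;
  }
 private:
  static std::string key(const BundleCoordinate& c) {
    return c.bucket_id + "/" + c.group + "/" + c.artifact + "/" + c.version;
  }
  std::map<std::string, std::vector<uint8_t>> store_;
};

int main() {
  InMemoryBundleContentProvider provider;
  BundleCoordinate coord{"default-bucket", "org.apache.nifi", "nifi-example-nar", "1.0.0"};
  std::vector<uint8_t> content = {1, 2, 3};
  provider.store(coord, content);
  return provider.retrieve(coord).size() == content.size() ? 0 : 1;
}

A local filesystem, S3, or other object-store implementation would plug in 
behind the same interface without changing any caller.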



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-4130) TransformXml - provide a way to define XSLT without external files

2018-11-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686562#comment-16686562
 ] 

ASF GitHub Bot commented on NIFI-4130:
--

Github user bdesert commented on the issue:

https://github.com/apache/nifi/pull/1953
  
+1 LGTM. Tested on a local env with XSLT as a file (regression) and with the 
lookup service, with cache size 0 and >0; all works as expected. Ready for 
merge. @mattyb149, could you please give it a final look?


> TransformXml - provide a way to define XSLT without external files
> --
>
> Key: NIFI-4130
> URL: https://issues.apache.org/jira/browse/NIFI-4130
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> In cluster deployments the need to reference external configuration files can 
> be annoying, since it requires access to all the NiFi nodes and correct 
> deployment of the files. It would be interesting to leverage the lookup 
> controller services in TransformXml to provide a way to define XSLT directly 
> from the UI without external configuration files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (MINIFICPP-674) Better document building blocks of each ECU

2018-11-14 Thread Mr TheSegfault (JIRA)
Mr TheSegfault created MINIFICPP-674:


 Summary: Better document building blocks of each ECU
 Key: MINIFICPP-674
 URL: https://issues.apache.org/jira/browse/MINIFICPP-674
 Project: NiFi MiNiFi C++
  Issue Type: Documentation
Reporter: Mr TheSegfault


ECUs will define forward-facing functionality. The ECU will be the unit of work 
that exists on edge devices, whether that is a server or a small device. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-621) Create log aggregator ECU for CAPI

2018-11-14 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault updated MINIFICPP-621:
-
Summary: Create log aggregator ECU for CAPI   (was: Create log aggregator example for CAPI)

> Create log aggregator ECU for CAPI 
> --
>
> Key: MINIFICPP-621
> URL: https://issues.apache.org/jira/browse/MINIFICPP-621
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: Mr TheSegfault
>Priority: Major
>  Labels: CAPI, ECU, nanofi
>
> Create examples that can tail log files (perhaps using TailFile) that work 
> on Windows and *nix
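
As a rough illustration of the tailing behavior this asks for, the sketch 
below uses only standard C++ so that it builds unchanged on Windows and 
*nix. It deliberately does not use the nanofi C API or the existing TailFile 
processor; a real ECU would route the collected lines into the library's 
transmit path instead of printing them, and every name below is 
hypothetical.

// Minimal, hypothetical sketch of a polling log tailer. Not the nanofi API.
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>

// Read any lines appended to `path` since `offset`, print them (a stand-in
// for forwarding them to a collector), and return the new offset so the
// caller can continue from there on the next poll.
std::streampos tail_once(const std::string& path, std::streampos offset) {
  std::ifstream in(path);
  if (!in) return offset;                      // file may not exist yet
  in.seekg(0, std::ios::end);
  const std::streamoff end = in.tellg();
  if (end < static_cast<std::streamoff>(offset)) offset = 0;  // rotated or truncated
  in.seekg(offset);
  std::string line;
  while (std::getline(in, line)) {
    std::cout << line << '\n';                 // stand-in for "forward to the collector"
  }
  in.clear();                                  // clear eof so tellg() is valid again
  return in.tellg();
}

int main(int argc, char** argv) {
  const std::string path = argc > 1 ? argv[1] : "app.log";
  std::streampos offset = 0;
  for (;;) {                                   // poll once per second, forever
    offset = tail_once(path, offset);
    std::this_thread::sleep_for(std::chrono::seconds(1));
  }
}

A production tailer would also persist the offset across restarts and handle 
partial lines and log rotation more carefully than this sketch does.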



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-621) Create log aggregator example for CAPI

2018-11-14 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault updated MINIFICPP-621:
-
Labels: CAPI ECU nanofi  (was: CAPI nanofi)

> Create log aggregator example for CAPI 
> --
>
> Key: MINIFICPP-621
> URL: https://issues.apache.org/jira/browse/MINIFICPP-621
> Project: NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Mr TheSegfault
>Priority: Major
>  Labels: CAPI, ECU, nanofi
>
> Create examples that can tail log files (perhaps using TailFile) that work 
> on Windows and *nix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (MINIFICPP-673) Define ECU (Edge Collector Unit )

2018-11-14 Thread Mr TheSegfault (JIRA)
Mr TheSegfault created MINIFICPP-673:


 Summary: Define ECU (Edge Collector Unit )
 Key: MINIFICPP-673
 URL: https://issues.apache.org/jira/browse/MINIFICPP-673
 Project: NiFi MiNiFi C++
  Issue Type: Epic
Reporter: Mr TheSegfault


Edge Collector Units (ECUs) will be minimized agents whose sole focus is the 
collection and retrieval of data. Defined as feature-facing constructs, ECUs 
will define functionality that is agnostic of the underlying system; the 
portability will be built behind the library. 
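
One hypothetical way to picture an ECU as a feature-facing construct is 
sketched below. This is not the MiNiFi C++ or nanofi API; every name is 
invented for illustration, and the only point is that callers describe what 
to collect and where to forward it, while the library hides how that happens 
on a given platform.

// Hypothetical sketch only: invented names, not the MiNiFi C++ or nanofi API.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// A chunk of collected data plus an attribute describing where it came from.
struct Payload {
  std::vector<unsigned char> data;
  std::string source;
};

// Feature-facing interface: callers say *what* to collect and where to send
// it; an implementation hides *how* that is done on a given platform.
class EdgeCollectorUnit {
 public:
  using Sink = std::function<void(const Payload&)>;
  virtual ~EdgeCollectorUnit() = default;
  virtual void configure(const std::string& source) = 0;  // e.g. a log path
  virtual void collect(const Sink& forward) = 0;          // gather and forward
};

// Trivial concrete ECU used only to show the shape of the contract. A log
// aggregation ECU (MINIFICPP-621) or a CoAP-speaking ECU would be real
// implementations behind the same surface.
class StaticMessageEcu : public EdgeCollectorUnit {
 public:
  void configure(const std::string& source) override { source_ = source; }
  void collect(const Sink& forward) override {
    Payload p;
    p.source = source_;
    p.data.assign(source_.begin(), source_.end());
    forward(p);
  }
 private:
  std::string source_;
};

int main() {
  StaticMessageEcu ecu;
  ecu.configure("hello from the edge");
  ecu.collect([](const Payload& p) {
    std::cout << p.data.size() << " bytes collected from " << p.source << '\n';
  });
}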



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-621) Create log aggregator example for CAPI

2018-11-14 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault updated MINIFICPP-621:
-
Issue Type: Sub-task  (was: New Feature)
Parent: MINIFICPP-673

> Create log aggregator example for CAPI 
> --
>
> Key: MINIFICPP-621
> URL: https://issues.apache.org/jira/browse/MINIFICPP-621
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: Mr TheSegfault
>Priority: Major
>  Labels: CAPI, ECU, nanofi
>
> Create examples that can tail log files (perhaps using TailFile) that work 
> on Windows and *nix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

