[GitHub] [nifi] ijokarumawak commented on issue #3394: NIFI-6159 - Add BigQuery processor using the Streaming API

2019-10-02 Thread GitBox
ijokarumawak commented on issue #3394: NIFI-6159 - Add BigQuery processor using 
the Streaming API
URL: https://github.com/apache/nifi/pull/3394#issuecomment-537729832
 
 
   Hi @pvillard31, I was trying to cherry-pick/squash the commits and merge it 
to master, but since the PR has multiple 'merge' operations in between, doing 
so causes merge conflicts.
   
   I think it's safer for you to squash the commits into one and merge it to 
master yourself. Or if you update this PR to have the final squashed commit, 
then I can easily merge it for you.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] ijokarumawak commented on issue #3394: NIFI-6159 - Add BigQuery processor using the Streaming API

2019-10-02 Thread GitBox
ijokarumawak commented on issue #3394: NIFI-6159 - Add BigQuery processor using 
the Streaming API
URL: https://github.com/apache/nifi/pull/3394#issuecomment-537728013
 
 
   Changes look good. Backed by the 2 +1s, I'm going to merge this into master. 
Thanks @pvillard31 for the improvement!




[jira] [Resolved] (NIFIREG-323) CLONE - Updated bootstrap port handling causes restarts to fail

2019-10-02 Thread Kevin Doran (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFIREG-323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran resolved NIFIREG-323.
-
Resolution: Fixed

> CLONE - Updated bootstrap port handling causes restarts to fail
> ---
>
> Key: NIFIREG-323
> URL: https://issues.apache.org/jira/browse/NIFIREG-323
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Aldrin Piri
>Assignee: Bryan Bende
>Priority: Blocker
> Fix For: 1.0.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> NIFI-6653 introduced a guard that prevents the bootstrap command port from 
> being changed once initialized, in order to stop possible hijacking of that 
> port to issue commands other than those exchanged explicitly between the 
> bootstrap and the NiFi process.  This causes issues when the NiFi process 
> dies, because it precludes restart (the default functionality enabled out of 
> the box).
> To recreate, build/run NiFi off of current master and kill the NiFi process.
> This will result in the following (when an additional nifi.sh status is 
> carried out).
> {quote}2019-09-25 17:10:55,601 WARN [main] org.apache.nifi.bootstrap.RunNiFi 
> Apache NiFi appears to have died. Restarting...
> 2019-09-25 17:10:55,620 INFO [main] org.apache.nifi.bootstrap.Command 
> Launched Apache NiFi with Process ID 2088
> 2019-09-25 17:10:55,621 INFO [main] org.apache.nifi.bootstrap.RunNiFi 
> Successfully started Apache NiFi with PID 2088
> 2019-09-25 17:10:56,174 WARN [NiFi Bootstrap Command Listener] 
> org.apache.nifi.bootstrap.RunNiFi Blocking attempt to change NiFi command 
> port and secret after they have already been initialized. requestedPort=37871
> 2019-09-25 17:11:50,783 INFO [main] o.a.n.b.NotificationServiceManager 
> Successfully loaded the following 0 services: []
> 2019-09-25 17:11:50,785 INFO [main] org.apache.nifi.bootstrap.RunNiFi 
> Registered no Notification Services for Notification Type NIFI_STARTED
> 2019-09-25 17:11:50,786 INFO [main] org.apache.nifi.bootstrap.RunNiFi 
> Registered no Notification Services for Notification Type NIFI_STOPPED
> 2019-09-25 17:11:50,786 INFO [main] org.apache.nifi.bootstrap.RunNiFi 
> Registered no Notification Services for Notification Type NIFI_DIED
> 2019-09-25 17:11:50,809 INFO [main] org.apache.nifi.bootstrap.Command Apache 
> NiFi is running at PID 2088 but is not responding to ping requests{quote}
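
For illustration, here is a minimal sketch of the guard-and-clear interplay 
described above; the class, field, and method names are hypothetical stand-ins, 
not the actual bootstrap code:

```java
// Hedged sketch of a RunNiFi-style bootstrap listener with the NIFI-6653 guard.
// All names are hypothetical, not the real org.apache.nifi.bootstrap classes.
public class BootstrapRegistrationSketch {
    private volatile Integer commandPort; // hypothetical field
    private volatile String secretKey;    // hypothetical field

    // NIFI-6653 guard: reject a second registration once initialized.
    synchronized void register(int port, String secret) {
        if (commandPort != null || secretKey != null) {
            throw new IllegalStateException("Blocking attempt to change command "
                    + "port and secret after they have already been initialized");
        }
        commandPort = port;
        secretKey = secret;
    }

    // The NIFIREG-323 fix clears the stored values before relaunching a dead
    // process, so the freshly spawned instance can register its new port/secret.
    synchronized void onProcessDied() {
        commandPort = null;
        secretKey = null;
        // relaunch(); // hypothetical restart hook
    }
}
```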



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6738) ListenBeats receives partial messages

2019-10-02 Thread John Black (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Black updated NIFI-6738:
-
Description: 
Hi!

I receive Windows events in ListenBeats from winlogbeat; however, some messages 
(usually those bigger than 3 KB) arrive truncated. I observe this in Data 
Provenance. I checked the content of the outgoing packets on the winlogbeat 
side - the messages are shipped properly. Sometimes one FlowFile in Data 
Provenance contains part of one message followed by non-printable symbols 
(probably winlogbeat's header) combined with the truncated message itself. 
Please find a few screenshots attached below.

!image-2019-10-02-22-17-04-236.png!

!image-2019-10-02-22-19-45-813.png!

!image-2019-10-02-22-21-07-982.png!

  
!https://aws1.discourse-cdn.com/elastic/original/3X/9/f/9fd25b03f089e2c19e29b34244c18519a12b8b1f.jpeg!

  was:
Hi!

I receive Windows events in ListenBeats from winlogbeat; however, some messages 
(usually those bigger than 3 KB) arrive truncated. I observe this in Data 
Provenance. I checked the content of the outgoing packets on the winlogbeat 
side - the messages are shipped properly. Sometimes one FlowFile in Data 
Provenance contains part of one message followed by non-printable symbols 
(probably winlogbeat's header) combined with the truncated message itself. 
Please find a few screenshots attached below.

!image-2019-10-02-22-17-04-236.png!

!image-2019-10-02-22-19-45-813.png!

!image-2019-10-02-22-21-07-982.png!

  
!https://aws1.discourse-cdn.com/elastic/original/3X/9/f/9fd25b03f089e2c19e29b34244c18519a12b8b1f.jpeg!


> ListenBeats receives partial messages
> -
>
> Key: NIFI-6738
> URL: https://issues.apache.org/jira/browse/NIFI-6738
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.9.2
> Environment: Ubuntu 16.04.6 LTS (GNU/Linux 4.4.0-159-generic x86_64)
>Reporter: John Black
>Priority: Major
> Attachments: image-2019-10-02-22-17-04-236.png, 
> image-2019-10-02-22-19-45-813.png, image-2019-10-02-22-21-07-982.png
>
>
> Hi!
> I receive Windows events in ListenBeats from winlogbeat; however, some 
> messages (usually those bigger than 3 KB) arrive truncated. I observe this 
> in Data Provenance. I checked the content of the outgoing packets on the 
> winlogbeat side - the messages are shipped properly. Sometimes one FlowFile 
> in Data Provenance contains part of one message followed by non-printable 
> symbols (probably winlogbeat's header) combined with the truncated message 
> itself. Please find a few screenshots attached below.
> !image-2019-10-02-22-17-04-236.png!
> !image-2019-10-02-22-19-45-813.png!
> !image-2019-10-02-22-21-07-982.png!
>   
> !https://aws1.discourse-cdn.com/elastic/original/3X/9/f/9fd25b03f089e2c19e29b34244c18519a12b8b1f.jpeg!





[jira] [Created] (NIFI-6738) ListenBeats receives partial messages

2019-10-02 Thread John Black (Jira)
John Black created NIFI-6738:


 Summary: ListenBeats receives partial messages
 Key: NIFI-6738
 URL: https://issues.apache.org/jira/browse/NIFI-6738
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.9.2
 Environment: Ubuntu 16.04.6 LTS (GNU/Linux 4.4.0-159-generic x86_64)
Reporter: John Black
 Attachments: image-2019-10-02-22-17-04-236.png, 
image-2019-10-02-22-19-45-813.png, image-2019-10-02-22-21-07-982.png

Hi!

I receive Windows events in ListenBeats from winlogbeat; however, some messages 
(usually those bigger than 3 KB) arrive truncated. I observe this in Data 
Provenance. I checked the content of the outgoing packets on the winlogbeat 
side - the messages are shipped properly. Sometimes one FlowFile in Data 
Provenance contains part of one message followed by non-printable symbols 
(probably winlogbeat's header) combined with the truncated message itself. 
Please find a few screenshots attached below.

!image-2019-10-02-22-17-04-236.png!

!image-2019-10-02-22-19-45-813.png!

!image-2019-10-02-22-21-07-982.png!

  
!https://aws1.discourse-cdn.com/elastic/original/3X/9/f/9fd25b03f089e2c19e29b34244c18519a12b8b1f.jpeg!
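
For context on how messages larger than a single socket read can end up 
truncated, here is a generic sketch of length-prefixed frame accumulation. It 
assumes the problem is per-read framing; it is not ListenBeats' actual beats 
decoder, and the 4-byte length prefix is an illustrative simplification:

```java
import java.nio.ByteBuffer;

// Accumulate TCP reads until a full length-prefixed frame is available.
// A 3 KB+ message spans several reads; emitting one FlowFile per read
// would produce exactly the kind of truncation reported above.
class FrameAccumulator {
    private final ByteBuffer buffer = ByteBuffer.allocate(1 << 20);

    /** Feed freshly read bytes; returns a complete payload, or null if more is needed. */
    byte[] feed(byte[] chunk, int len) {
        buffer.put(chunk, 0, len);
        if (buffer.position() < 4) {
            return null;                    // length prefix not complete yet
        }
        int declared = buffer.getInt(0);    // assumed 4-byte big-endian length
        if (buffer.position() < 4 + declared) {
            return null;                    // keep accumulating across reads
        }
        byte[] payload = new byte[declared];
        buffer.flip();
        buffer.getInt();                    // skip the length prefix
        buffer.get(payload);
        buffer.compact();                   // retain any bytes of the next frame
        return payload;
    }
}
```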





[jira] [Commented] (NIFI-6619) RouteOnAttribute: Create new Routing Strategy to route only first rule that is true

2019-10-02 Thread Raymond (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943079#comment-16943079
 ] 

Raymond commented on NIFI-6619:
---

[~hondawei]: that could be part of the solution, as long as it is reflected in 
the UI as well. Currently the properties are sorted alphabetically (both in 
the processor and in the relationships).

I'm cautious, though, about making big changes to such an often-used processor. 
Maybe we could first make a separate processor with a "RouteByOrder" strategy 
to see how it behaves. I would like to test/contribute if needed.

> RouteOnAttribute: Create new Routing Strategy to route only first rule that 
> is true
> ---
>
> Key: NIFI-6619
> URL: https://issues.apache.org/jira/browse/NIFI-6619
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.9.2
>Reporter: Raymond
>Priority: Major
> Attachments: image-2019-09-12-22-15-51-027.png
>
>
> Currently RouteOnAttribute has the strategy "Route to Property name". The 
> behavior is that for each rule that evaluates to true, a message (a clone of 
> the FlowFile) is sent to the next step. 
> I would like to have another strategy: "Route to first matched Property 
> name", "Route to Property name by first match", or "Route to first Property 
> name which evaluates true".
> This will ensure that the next step gets exactly one message (a sketch 
> follows below).
>  
>  
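
A minimal sketch of what the proposed first-match strategy would do, assuming 
the rules arrive in their defined (not alphabetical) order, e.g. as a 
LinkedHashMap of already-evaluated results:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

class FirstMatchRoutingSketch {
    // Returns the name of the first property whose rule evaluated to true,
    // so at most one relationship receives the FlowFile (no clones).
    static Optional<String> firstMatch(LinkedHashMap<String, Boolean> evaluatedRules) {
        return evaluatedRules.entrySet().stream()
                .filter(Map.Entry::getValue)   // keep rules that evaluated true
                .map(Map.Entry::getKey)        // relationship name = property name
                .findFirst();                  // stop at the first match
    }
}
```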





[jira] [Updated] (NIFI-6736) If not given enough threads, Load Balanced Connections may block for long periods of time without making progress

2019-10-02 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-6736:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> If not given enough threads, Load Balanced Connections may block for long 
> periods of time without making progress
> -
>
> Key: NIFI-6736
> URL: https://issues.apache.org/jira/browse/NIFI-6736
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.10.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When load-balanced connections are used, we have a few different properties 
> that we can configure. Specifically, the properties with their default values 
> are:
> nifi.cluster.load.balance.connections.per.node=4
> nifi.cluster.load.balance.max.thread.count=8
> nifi.cluster.load.balance.comms.timeout=30 sec
> If the max thread count is below the number of connections per node * number 
> of nodes in the cluster, everything still works well when there are 
> reasonably high data volumes across all connections that are load-balanced. 
> However, if one of the connections has a low data volume, we can get into a 
> situation where the load-balanced connections stop pushing data for some 
> period of time, typically some multiple of the "comms.timeout" property.
> This appears to be because the server uses blocking socket I/O rather than 
> NIO: once data has been received, it checks whether more data is available, 
> and if it receives no indication for some period of time, it times out. Only 
> then does it add the socket connection back to the pool of connections to 
> read from. This means that a thread can be stuck waiting to receive more 
> data, blocking any progress from other connections assigned to that thread.
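
A hedged sketch of the thread-per-connection approach the fix takes (per the 
commit message quoted further down in this digest); the class and method names 
are illustrative, not NiFi's actual load-balancing server code:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

class OnDemandLoadBalanceServerSketch {
    void serve(ServerSocket server) throws IOException {
        while (!server.isClosed()) {
            Socket socket = server.accept();
            // One thread per accepted connection, created on demand: a quiet
            // connection blocking in a read can no longer starve busy ones,
            // as a fixed pool of N reader threads could.
            Thread handler = new Thread(() -> handle(socket));
            handler.setDaemon(true);
            handler.start();
        }
    }

    private void handle(Socket socket) {
        // Read load-balancing protocol frames until the client disconnects;
        // blocking here only ever stalls this connection's own thread.
    }
}
```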





[jira] [Commented] (NIFI-6736) If not given enough threads, Load Balanced Connections may block for long periods of time without making progress

2019-10-02 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943063#comment-16943063
 ] 

ASF subversion and git services commented on NIFI-6736:
---

Commit 99cf87c330a2b27757cb188a4e806a46c31ecd1b in nifi's branch 
refs/heads/master from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=99cf87c ]

NIFI-6736: Create thread on demand to handle incoming request from client for 
load balancing. This allows us to avoid situations where we don't have enough 
threads and we block on the server side, waiting for data, when clients are 
trying to send data in another connection

This closes #3784.

Signed-off-by: Bryan Bende 


> If not given enough threads, Load Balanced Connections may block for long 
> periods of time without making progress
> -
>
> Key: NIFI-6736
> URL: https://issues.apache.org/jira/browse/NIFI-6736
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.10.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When load-balanced connections are used, we have a few different properties 
> that we can configure. Specifically, the properties with their default values 
> are:
> nifi.cluster.load.balance.connections.per.node=4
> nifi.cluster.load.balance.max.thread.count=8
> nifi.cluster.load.balance.comms.timeout=30 sec
> If the max thread count is below the number of connections per node * number 
> of nodes in the cluster, everything still works well when there are 
> reasonably high data volumes across all connections that are load-balanced. 
> However, if one of the connections has a low data volume, we can get into a 
> situation where the load-balanced connections stop pushing data for some 
> period of time, typically some multiple of the "comms.timeout" property.
> This appears to be because the server uses blocking socket I/O rather than 
> NIO: once data has been received, it checks whether more data is available, 
> and if it receives no indication for some period of time, it times out. Only 
> then does it add the socket connection back to the pool of connections to 
> read from. This means that a thread can be stuck waiting to receive more 
> data, blocking any progress from other connections assigned to that thread.





[GitHub] [nifi] asfgit closed pull request #3784: NIFI-6736: Create thread on demand to handle incoming request from cl…

2019-10-02 Thread GitBox
asfgit closed pull request #3784: NIFI-6736: Create thread on demand to handle 
incoming request from cl…
URL: https://github.com/apache/nifi/pull/3784
 
 
   




[GitHub] [nifi] bbende commented on issue #3784: NIFI-6736: Create thread on demand to handle incoming request from cl…

2019-10-02 Thread GitBox
bbende commented on issue #3784: NIFI-6736: Create thread on demand to handle 
incoming request from cl…
URL: https://github.com/apache/nifi/pull/3784#issuecomment-537621036
 
 
   Looks good, will merge




[jira] [Updated] (NIFI-6734) S3EncryptionService fixes and improvements

2019-10-02 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi updated NIFI-6734:
--
Description: 
I found some issues while I was setting up the S3 encryption controller service.
 I think these should be addressed before the initial release of the CS.

Bugs:
 - multipart upload does not work with SSE S3 encryption
 - multipart upload does not work with CSE* encryption
 - SSE S3 and SSE KMS strategies don't do anything in the case of FetchS3Object 
(configuring them is not needed; decryption is handled implicitly). On the 
other hand, if SSE S3 is set for an SSE KMS (or a CSE*) encrypted object, it 
won't cause any error (a CSE encrypted object won't be decrypted though) and 
SSE S3 will be set on the outgoing FlowFile (s3.encryptionStrategy attribute), 
which is incorrect information => SSE S3 and SSE KMS should be disabled for 
FetchS3Object
 - StandardS3EncryptionService.customValidate() runs on the wrong 
encryptionStrategy instance (it must be retrieved from the ValidationContext)

Code cleanup:
 - the CSE CMK encryption strategy sets the KMS region, but it will not be used 
(the key does not come from KMS; it is specified by the client) => setting the 
KMS region is unnecessary / misleading in the code
 - the CSE* encryption strategies set the KMS region on the client, but the 
client needs the bucket region (which can be different from the KMS region) and 
it will be set later in the code flow => setting the KMS region on the client 
is unnecessary / misleading in the code

Documentation enhancements:
 - 'Key ID or Key Material' property: document in the property description that 
it is not used (should be empty) for SSE S3; for the other encryption types, 
use the same names as in the Encryption Strategy combo (e.g. 'Server-side 
Customer Key' instead of 'Server-side CEK')
 - 'region' property: add a display name + description; document in the 
property description that it is the KMS region and is only used for 
Client-side KMS
 - the documentation of PutS3Object and FetchS3Object should be separated: 
e.g. FetchS3Object does not have the 'Server Side Encryption' property 
referred to in the docs, and the controller service is not needed for fetching 
SSE S3 and SSE KMS encrypted objects
 - add 'aws' and 's3' tags to the CS
 - additionalDetails is not linked properly (not accessible)
 - key alias does not work for KMS keys, only key id => remove the alias from 
the docs
 - add a validator with informative error messages to help configuration

Renaming:
 - 'Client-side Customer Master Key' property value: CMK (Customer Master Key) 
is generally used for the client-side encryption keys in the [AWS 
docs|https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html],
 regardless of whether the key is provided by the client or stored in KMS. For 
this reason, 'Client-side KMS' vs 'Client-side Customer Master Key' is a bit 
confusing for me; I would use 'Client-side Customer Key' for the latter 
(similar to 'Server-side KMS' and 'Server-side Customer Key')
 - 'region' property: should be renamed to kms-region (to avoid confusion with 
the bucket region in the code)
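
To make the first multipart bug above concrete, here is a hedged AWS SDK v1 
illustration of where SSE-S3 has to be requested for multipart uploads; it 
shows the relevant S3 API, not NiFi's actual fix:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.ObjectMetadata;

class MultipartSseSketch {
    // SSE-S3 must be requested when the multipart upload is *initiated*;
    // metadata set only on a single-shot PutObjectRequest never reaches
    // uploads that go down the multipart code path.
    static InitiateMultipartUploadResult initiateEncrypted(
            AmazonS3 s3, String bucket, String key) {
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION); // SSE-S3
        return s3.initiateMultipartUpload(
                new InitiateMultipartUploadRequest(bucket, key, metadata));
    }
}
```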

  was:
I found some issues while I was setting up the S3 encryption controller service.
 I think these should be addressed before the initial release of the CS.

Bugs:
 - multipart upload does not work with SSE S3 encryption
 - multipart upload does not work with CSE* encryption
 - SSE S3 and SSE KMS strategies don't do anything in the case of FetchS3Object 
(configuring them is not needed; decryption is handled implicitly). On the 
other hand, if SSE S3 is set for an SSE KMS (or a CSE*) encrypted object, it 
won't cause any error (a CSE encrypted object won't be decrypted though) and 
SSE S3 will be set on the outgoing FlowFile (s3.encryptionStrategy attribute), 
which is incorrect information => SSE S3 and SSE KMS should be disabled for 
FetchS3Object

Code cleanup:
 - the CSE CMK encryption strategy sets the KMS region, but it will not be used 
(the key does not come from KMS; it is specified by the client) => setting the 
KMS region is unnecessary / misleading in the code
 - the CSE* encryption strategies set the KMS region on the client, but the 
client needs the bucket region (which can be different from the KMS region) and 
it will be set later in the code flow => setting the KMS region on the client 
is unnecessary / misleading in the code

Documentation enhancements:
 - 'Key ID or Key Material' property: document in the property description that 
it is not used (should be empty) for SSE S3; for the other encryption types, 
use the same names as in the Encryption Strategy combo (e.g. 'Server-side 
Customer Key' instead of 'Server-side CEK')
 - 'region' property: add a display name + description; document in the 
property description that it is the KMS region and is only used for 
Client-side KMS
 - the documentation of PutS3Object and FetchS3Object should be separated: 
e.g. FetchS3Object does not have the 'Server Side Encryption' property 
referred to in the docs, and the controller service is not needed for fetching 
SSE S3 and SSE KMS encrypted objects
 - add 'aws' and 's3' tags to the CS
 - additionalDetails is not linked properly (not accessible)
 - key alias does not work for KMS keys, only key id => remove the alias from 
the docs
 - add a validator with informative error messages to help configuration

Renaming:
 - 'Client-side Customer Master Key' property value: CMK (Customer Master Key) 
is generally used for the client-side encryption keys in the [AWS 
docs|https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html],
 regardless of whether the key is provided by the client or stored in KMS. For 
this reason, 'Client-side KMS' vs 'Client-side Customer Master Key' is a bit 
confusing for me; I would use 'Client-side Customer Key' for the latter 
(similar to 'Server-side KMS' and 'Server-side Customer Key')
 - 'region' property: should be renamed to kms-region (to avoid confusion with 
the bucket region in the code)

[GitHub] [nifi] SandishKumarHN commented on issue #3450: NIFI-1642 : Kafka Processors Topic Name Validations

2019-10-02 Thread GitBox
SandishKumarHN commented on issue #3450: NIFI-1642 : Kafka Processors Topic 
Name Validations
URL: https://github.com/apache/nifi/pull/3450#issuecomment-537612006
 
 
   @tpalfy can you please review this also? 




[jira] [Created] (NIFI-6737) CSVRecordLookupService failed with key field of type NOT String

2019-10-02 Thread Remoleav (Jira)
Remoleav created NIFI-6737:
--

 Summary: CSVRecordLookupService failed with key field of type NOT 
String
 Key: NIFI-6737
 URL: https://issues.apache.org/jira/browse/NIFI-6737
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.9.2
Reporter: Remoleav
 Attachments: csvlookupfailure.png, nifi-app.log

CSVRecordLookupService fails when the key field configured in the LookupRecord 
processor is not of type String according to the Avro schema. Obviously, this 
could be fixed by calling toString() on the key value or by widening the key 
type of the original Map.
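
A minimal sketch of the suggested workaround, coercing the lookup coordinate 
to a String; the class and map here are illustrative, not the actual 
CSVRecordLookupService internals:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class StringKeyedLookupSketch {
    private final Map<String, String> cache = new HashMap<>();

    Optional<String> lookup(Object coordinate) {
        if (coordinate == null) {
            return Optional.empty();
        }
        // String.valueOf turns a non-String key (e.g. Integer 1) into "1",
        // matching the String keys the CSV file was loaded with.
        return Optional.ofNullable(cache.get(String.valueOf(coordinate)));
    }
}
```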





[GitHub] [nifi-minifi] asfgit merged pull request #173: MINIFI-515 Use a repository location with long lived versions for the…

2019-10-02 Thread GitBox
asfgit merged pull request #173: MINIFI-515 Use a repository location with long 
lived versions for the…
URL: https://github.com/apache/nifi-minifi/pull/173
 
 
   




[GitHub] [nifi] sburges commented on a change in pull request #3676: NIFI-6597 Azure Event Hub Version Update

2019-10-02 Thread GitBox
sburges commented on a change in pull request #3676: NIFI-6597 Azure Event Hub 
Version Update
URL: https://github.com/apache/nifi/pull/3676#discussion_r330668199
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/eventhub/GetAzureEventHub.java
 ##
 @@ -130,7 +132,7 @@
 .name("Event Hub Message Enqueue Time")
 .description("A timestamp (ISO-8061 Instant) formatted as 
YYYY-MM-DDThhmmss.sssZ (2016-01-01T01:01:01.000Z) from which messages "
 + "should have been enqueued in the EventHub to start 
reading from")
-.addValidator(StandardValidators.ISO8061_INSTANT_VALIDATOR)
+.addValidator(StandardValidators.ISO8601_INSTANT_VALIDATOR)
 
 Review comment:
   @pvillard31 Fixed




[GitHub] [nifi-minifi] asfgit merged pull request #171: MINIFI-508 Make Windows service binary inclusions optional via profile.

2019-10-02 Thread GitBox
asfgit merged pull request #171: MINIFI-508 Make Windows service binary 
inclusions optional via profile.
URL: https://github.com/apache/nifi-minifi/pull/171
 
 
   




[jira] [Updated] (NIFI-6275) ListHDFS with Full Path filter mode regex does not work as intended

2019-10-02 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-6275:
--
Fix Version/s: 1.10.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> ListHDFS with Full Path filter mode regex does not work as intended
> ---
>
> Key: NIFI-6275
> URL: https://issues.apache.org/jira/browse/NIFI-6275
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website, Extensions
>Affects Versions: 1.8.0, 1.9.0, 1.9.1, 1.9.2
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Minor
> Fix For: 1.10.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When using the *{{Full Path}}* filter mode, the regex is applied to the URI 
> returned for each file which includes the scheme and authority (hostname, HA 
> namespace, port).  For the filter to work across multiple HDFS installations 
> (such as a flow used on multiple environments that is retrieved from NiFi 
> Registry), the regex filter would have to account for the scheme and 
> authority by matching possible scheme and authority values.
> To make it easier for the user, the *{{Full Path}}* filter mode's filter 
> regex should only be applied to the path components of the URI, without the 
> scheme and authority.  This can be done by updating the filter for *{{Full 
> Path}}* mode to use: 
> [Path.getPathWithoutSchemeAndAuthority(Path)|https://hadoop.apache.org/docs/r3.0.0/api/org/apache/hadoop/fs/Path.html#getPathWithoutSchemeAndAuthority-org.apache.hadoop.fs.Path-].
>   This will bring the regex values in line with the other modes, since those 
> are only applied to the value of *{{Path.getName()}}*.
> Migration guidance will be needed when this improvement is released.  
> Existing regex values for *{{Full Path}}* filter mode that accepted any 
> scheme and authority will still work. 
>  Those that specify a scheme and authority will *_not_* work, and will have 
> to be updated to specify only path components.
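
A short sketch of the change, using the 
Path.getPathWithoutSchemeAndAuthority(Path) call referenced above:

```java
import java.util.regex.Pattern;
import org.apache.hadoop.fs.Path;

class FullPathFilterSketch {
    // Apply the Full Path filter regex to the path components only,
    // e.g. hdfs://namenode:8020/data/in/file.csv -> /data/in/file.csv,
    // so the same regex works across HDFS installations.
    static boolean accept(Path file, Pattern filter) {
        Path pathOnly = Path.getPathWithoutSchemeAndAuthority(file);
        return filter.matcher(pathOnly.toString()).matches();
    }
}
```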





[GitHub] [nifi] SandishKumarHN commented on issue #3611: NIFI-6009 ScanKudu Processor

2019-10-02 Thread GitBox
SandishKumarHN commented on issue #3611: NIFI-6009 ScanKudu Processor
URL: https://github.com/apache/nifi/pull/3611#issuecomment-537563822
 
 
   @tpalfy thank you so much for the thorough testing and for adding more 
tests. Really appreciated. I made changes as per your code snippets. 




[jira] [Commented] (NIFI-6735) Updating parameter used by running processor can result in the processor not restarting because it doesn't wait for validation to complete.

2019-10-02 Thread Rob Fellows (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942933#comment-16942933
 ] 

Rob Fellows commented on NIFI-6735:
---

A better way to test this is to use the DebugFlow processor and set the 
CustomValidate pause time to a parameter. Then you can change the parameter 
value to exercise the bug/fix.

> Updating parameter used by running processor can result in the processor not 
> restarting because it doesn't wait for validation to complete.
> ---
>
> Key: NIFI-6735
> URL: https://issues.apache.org/jira/browse/NIFI-6735
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
>Reporter: Rob Fellows
>Assignee: Rob Fellows
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: Processor fails to restart.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There is a timing issue when updating a parameter that is referenced by a 
> running processor. Updating the parameter context stops the processor and 
> tries to restart it. However, validation of the processor might not be 
> complete by the time the restart is attempted; you can get an error (see the 
> attached image) and the processor does not restart automatically.
>  
> Steps to reproduce:
> 1) Create a process group with a ListFile and a LogAttribute processor.
> 2) Make the input directory a parameter on the ListFile. Set the parameter 
> on the PG.
> 3) With that running, change the parameter value at the PG level. It will 
> auto-restart for you but fail because validation wasn't done yet for 
> ListFile, resulting in a processor that is no longer running.
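
A hedged sketch of the kind of wait the restart path needs; ProcessorHandle, 
isValidationComplete(), and start() are hypothetical stand-ins, not the real 
NiFi framework API:

```java
import java.util.concurrent.TimeUnit;

interface ProcessorHandle {                 // hypothetical stand-in
    boolean isValidationComplete();
    void start();
}

class RestartAfterParameterUpdateSketch {
    // Poll instead of starting immediately, so an in-flight validation
    // triggered by the parameter change cannot fail the restart.
    static void restartWhenValid(ProcessorHandle processor, long timeoutMillis)
            throws InterruptedException {
        final long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!processor.isValidationComplete()) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException("Validation did not complete in time");
            }
            TimeUnit.MILLISECONDS.sleep(50);
        }
        processor.start();
    }
}
```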





[GitHub] [nifi] sburges commented on a change in pull request #3676: NIFI-6597 Azure Event Hub Version Update

2019-10-02 Thread GitBox
sburges commented on a change in pull request #3676: NIFI-6597 Azure Event Hub 
Version Update
URL: https://github.com/apache/nifi/pull/3676#discussion_r330634335
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/eventhub/GetAzureEventHub.java
 ##
 @@ -130,7 +132,7 @@
 .name("Event Hub Message Enqueue Time")
 .description("A timestamp (ISO-8061 Instant) formatted as 
YYYY-MM-DDThhmmss.sssZ (2016-01-01T01:01:01.000Z) from which messages "
 + "should have been enqueued in the EventHub to start 
reading from")
-.addValidator(StandardValidators.ISO8061_INSTANT_VALIDATOR)
+.addValidator(StandardValidators.ISO8601_INSTANT_VALIDATOR)
 
 Review comment:
   8061 is a deprecated validator that looks like it was a typo 
(https://github.com/apache/nifi/commit/595835f6ee4fa024799827f89d9b55dfabcced23)
 - it should be 8601. I missed updating the description. Will fix that.
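
For context, a small java.time check of the ISO-8601 instant form the property 
description shows; this is an illustrative snippet, not the implementation of 
StandardValidators.ISO8601_INSTANT_VALIDATOR:

```java
import java.time.Instant;
import java.time.format.DateTimeParseException;

class Iso8601InstantCheck {
    static boolean isValidInstant(String value) {
        try {
            Instant.parse(value);   // accepts e.g. 2016-01-01T01:01:01.000Z
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }
}
```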




[GitHub] [nifi-registry] thenatog merged pull request #233: NIFIREG-323 Clear secret key when auto restarting in order to obtain …

2019-10-02 Thread GitBox
thenatog merged pull request #233: NIFIREG-323 Clear secret key when auto 
restarting in order to obtain …
URL: https://github.com/apache/nifi-registry/pull/233
 
 
   




[GitHub] [nifi-registry] thenatog commented on issue #233: NIFIREG-323 Clear secret key when auto restarting in order to obtain …

2019-10-02 Thread GitBox
thenatog commented on issue #233: NIFIREG-323 Clear secret key when auto 
restarting in order to obtain …
URL: https://github.com/apache/nifi-registry/pull/233#issuecomment-537555890
 
 
   Works as expected. I tested that when the nifi-registry process (not the 
bootstrap process) is killed, the daemon does restart nifi-registry, but 
bootstrap commands like ./nifi-registry.sh [status|stop] do not work. With 
this fix applied, the restarted process communicates the new key back to the 
bootstrap daemon, and the above commands continue to work on the newly spawned 
process. 




[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #658: MINIFICPP-1025 Code review.

2019-10-02 Thread GitBox
arpadboda closed pull request #658: MINIFICPP-1025 Code review.
URL: https://github.com/apache/nifi-minifi-cpp/pull/658
 
 
   




[GitHub] [nifi] pvillard31 commented on a change in pull request #3676: NIFI-6597 Azure Event Hub Version Update

2019-10-02 Thread GitBox
pvillard31 commented on a change in pull request #3676: NIFI-6597 Azure Event 
Hub Version Update
URL: https://github.com/apache/nifi/pull/3676#discussion_r330620083
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/eventhub/GetAzureEventHub.java
 ##
 @@ -130,7 +132,7 @@
 .name("Event Hub Message Enqueue Time")
 .description("A timestamp (ISO-8061 Instant) formatted as 
YYYY-MM-DDThhmmss.sssZ (2016-01-01T01:01:01.000Z) from which messages "
 + "should have been enqueued in the EventHub to start 
reading from")
-.addValidator(StandardValidators.ISO8061_INSTANT_VALIDATOR)
+.addValidator(StandardValidators.ISO8601_INSTANT_VALIDATOR)
 
 Review comment:
   Why are you making this change? The description says we expect ISO-8061.




[GitHub] [nifi] Riduidel commented on a change in pull request #3570: Fix for NIFI-6422

2019-10-02 Thread GitBox
Riduidel commented on a change in pull request #3570: Fix for NIFI-6422
URL: https://github.com/apache/nifi/pull/3570#discussion_r330621230
 
 

 ##
 File path: 
nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/FormatUtils.java
 ##
 @@ -110,7 +111,7 @@ private static String pad3Places(final long val) {
  */
 public static String formatDataSize(final double dataSize) {
 // initialize the formatter
-final NumberFormat format = NumberFormat.getNumberInstance();
+final NumberFormat format = NumberFormat.getNumberInstance(Locale.US);
 
 Review comment:
   No, that's not the reason; I can't remember why. I will roll back that part.




[GitHub] [nifi] Riduidel commented on a change in pull request #3570: Fix for NIFI-6422

2019-10-02 Thread GitBox
Riduidel commented on a change in pull request #3570: Fix for NIFI-6422
URL: https://github.com/apache/nifi/pull/3570#discussion_r330620085
 
 

 ##
 File path: 
nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/FormatUtils.java
 ##
 @@ -110,7 +111,7 @@ private static String pad3Places(final long val) {
  */
 public static String formatDataSize(final double dataSize) {
 // initialize the formatter
-final NumberFormat format = NumberFormat.getNumberInstance();
+final NumberFormat format = NumberFormat.getNumberInstance(Locale.US);
 
 Review comment:
   Yes, it's because it seems like Google BigQuery requires dates in US format 
(how funny) ... Well, at least, that's why I remember having changed that 
globally (which is obviously not a good thing). Maybe I should use a different 
number format for the BigQuery part ...
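
To make the locale effect concrete, here is a standalone example of how the 
default-locale NumberFormat changes output for the same value:

```java
import java.text.NumberFormat;
import java.util.Locale;

public class LocaleFormatDemo {
    public static void main(String[] args) {
        double value = 1234.56;
        // Pinned to US conventions: prints 1,234.56
        System.out.println(NumberFormat.getNumberInstance(Locale.US).format(value));
        // A German default locale would print 1.234,56 for the same value,
        // which a consumer expecting US-format numbers cannot parse.
        System.out.println(NumberFormat.getNumberInstance(Locale.GERMANY).format(value));
    }
}
```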




[GitHub] [nifi-registry] thenatog commented on issue #233: NIFIREG-323 Clear secret key when auto restarting in order to obtain …

2019-10-02 Thread GitBox
thenatog commented on issue #233: NIFIREG-323 Clear secret key when auto 
restarting in order to obtain …
URL: https://github.com/apache/nifi-registry/pull/233#issuecomment-537547570
 
 
   Reviewing..




[jira] [Updated] (NIFI-6597) Azure Event Hub processor API version update

2019-10-02 Thread Marc Parisi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marc Parisi updated NIFI-6597:
--
Issue Type: Bug  (was: Improvement)

> Azure Event Hub processor API version update
> 
>
> Key: NIFI-6597
> URL: https://issues.apache.org/jira/browse/NIFI-6597
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Sunny Zhang
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current processors for Azure Event Hub use an outdated API version, 
> 0.14.4, which hinders their functionality. The Azure Event Hub processors 
> need to be updated to use the newest APIs: Event Hub version 2.3.2 and EPH 
> version 2.5.2.





[GitHub] [nifi] sburges commented on issue #3676: NIFI-6597 Azure Event Hub Version Update

2019-10-02 Thread GitBox
sburges commented on issue #3676: NIFI-6597 Azure Event Hub Version Update
URL: https://github.com/apache/nifi/pull/3676#issuecomment-537544008
 
 
   @pvillard31 Can you take another look at the change? I believe your 
comments are addressed.




[jira] [Updated] (NIFI-6734) S3EncryptionService fixes and improvements

2019-10-02 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi updated NIFI-6734:
--
Description: 
I found some issues while I was setting up the S3 encryption controller service.
 I think these should be addressed before the initial release of the CS.

Bugs:
 - multipart upload does not work with SSE S3 encryption
 - multipart upload does not work with CSE* encryption
 - SSE S3 and SSE KMS strategies don't do anything in the case of FetchS3Object 
(configuring them is not needed; decryption is handled implicitly). On the 
other hand, if SSE S3 is set for an SSE KMS (or a CSE*) encrypted object, it 
won't cause any error (a CSE encrypted object won't be decrypted though) and 
SSE S3 will be set on the outgoing FlowFile (s3.encryptionStrategy attribute), 
which is incorrect information => SSE S3 and SSE KMS should be disabled for 
FetchS3Object

Code cleanup:
 - the CSE CMK encryption strategy sets the KMS region, but it will not be used 
(the key does not come from KMS; it is specified by the client) => setting the 
KMS region is unnecessary / misleading in the code
 - the CSE* encryption strategies set the KMS region on the client, but the 
client needs the bucket region (which can be different from the KMS region) and 
it will be set later in the code flow => setting the KMS region on the client 
is unnecessary / misleading in the code

Documentation enhancements:
 - 'Key ID or Key Material' property: document in the property description that 
it is not used (should be empty) for SSE S3; for the other encryption types, 
use the same names as in the Encryption Strategy combo (e.g. 'Server-side 
Customer Key' instead of 'Server-side CEK')
 - 'region' property: add a display name + description; document in the 
property description that it is the KMS region and is only used for 
Client-side KMS
 - the documentation of PutS3Object and FetchS3Object should be separated: 
e.g. FetchS3Object does not have the 'Server Side Encryption' property 
referred to in the docs, and the controller service is not needed for fetching 
SSE S3 and SSE KMS encrypted objects
 - add 'aws' and 's3' tags to the CS
 - additionalDetails is not linked properly (not accessible)
 - key alias does not work for KMS keys, only key id => remove the alias from 
the docs
 - add a validator with informative error messages to help configuration

Renaming:
 - 'Client-side Customer Master Key' property value: CMK (Customer Master Key) 
is generally used for the client-side encryption keys in the [AWS 
docs|https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html],
 regardless of whether the key is provided by the client or stored in KMS. For 
this reason, 'Client-side KMS' vs 'Client-side Customer Master Key' is a bit 
confusing for me; I would use 'Client-side Customer Key' for the latter 
(similar to 'Server-side KMS' and 'Server-side Customer Key')
 - 'region' property: should be renamed to kms-region (to avoid confusion with 
the bucket region in the code)

  was:
I found some issues while I was setting up the S3 encryption controller service.
 I think these should be addressed before the initial release of the CS.

Bugs:
 - multipart upload does not work with SSE S3 encryption
 - multipart upload does not work with CSE* encryption

Code cleanup:
 - the CSE CMK encryption strategy sets the KMS region, but it will not be used 
(the key does not come from KMS; it is specified by the client) => setting the 
KMS region is unnecessary / misleading in the code
 - the CSE* encryption strategies set the KMS region on the client, but the 
client needs the bucket region (which can be different from the KMS region) and 
it will be set later in the code flow => setting the KMS region on the client 
is unnecessary / misleading in the code

Documentation enhancements:
 - 'Key ID or Key Material' property: document in the property description that 
it is not used (should be empty) for SSE S3; for the other encryption types, 
use the same names as in the Encryption Strategy combo (e.g. 'Server-side 
Customer Key' instead of 'Server-side CEK')
 - 'region' property: add a display name + description; document in the 
property description that it is the KMS region and is only used for 
Client-side KMS
 - the documentation of PutS3Object and FetchS3Object should be separated: 
e.g. FetchS3Object does not have the 'Server Side Encryption' property 
referred to in the docs, and the controller service is not needed for fetching 
SSE S3 and SSE KMS encrypted objects
 - add 'aws' and 's3' tags to the CS
 - additionalDetails is not linked properly (not accessible)

Renaming:
 - 'Client-side Customer Master Key' property value: CMK (Customer Master Key) 
is generally used for the client-side encryption keys in the [AWS 
docs|https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html],
 regardless of whether the key is provided by the client or stored in KMS. For 
this reason, 'Client-side KMS' vs 'Client-side Customer Master Key' is a bit 
confusing for me; I would use 'Client-side Customer Key' for the latter 
(similar to 'Server-side KMS' and 'Server-side Customer Key')

[GitHub] [nifi] bbende commented on issue #3784: NIFI-6736: Create thread on demand to handle incoming request from cl…

2019-10-02 Thread GitBox
bbende commented on issue #3784: NIFI-6736: Create thread on demand to handle 
incoming request from cl…
URL: https://github.com/apache/nifi/pull/3784#issuecomment-537533351
 
 
   Will review




[jira] [Resolved] (NIFI-6589) Leader Election should cache results obtained from ZooKeeper

2019-10-02 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFI-6589.
---
Resolution: Fixed

> Leader Election should cache results obtained from ZooKeeper
> 
>
> Key: NIFI-6589
> URL: https://issues.apache.org/jira/browse/NIFI-6589
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Minor
> Fix For: 1.10.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In order to determine which node in a cluster is the Cluster Coordinator, a 
> node must make a request to ZooKeeper. That means that if we have N nodes in 
> a cluster, then for each request we must ask ZooKeeper at least N+1 times 
> (and no more than N+2) who the Cluster Coordinator is. This is done because 
> when the request comes in, the node must determine whether or not it is the 
> Cluster Coordinator. If so, it must replicate the request to each node. If 
> not, it must forward the request to the Cluster Coordinator, which will then 
> do so. When the request is replicated, each node will again check if it is 
> the Cluster Coordinator. If we instead cache the result of querying 
> ZooKeeper for a short period of time, say 1 minute, we can dramatically 
> decrease the number of times that we hit ZooKeeper. If the Coordinator / 
> Primary Node changes in the meantime, the node should still be notified of 
> the change asynchronously.
> The polling is done currently because we've seen situations where the 
> asynchronous notification did not happen. But if we update the code so that 
> we cache the results, this means that we will also update the code for 
> caching results of which node is the Primary Node. This is a benefit as 
> well, because currently we don't poll for this and, as a result, if we do 
> happen to miss the notification, we could theoretically have two nodes 
> running processors that should only run on the Primary Node.
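
A minimal sketch of the caching idea described above, assuming a hypothetical 
zooKeeperQuery supplier; this is not the actual NiFi leader-election code:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

class CachedLeaderLookupSketch {
    private final Supplier<String> zooKeeperQuery;  // e.g. () -> queryZkForCoordinator()
    private final long ttlNanos = TimeUnit.MINUTES.toNanos(1);
    private volatile String cachedLeader;
    private volatile long cachedAtNanos;

    CachedLeaderLookupSketch(Supplier<String> zooKeeperQuery) {
        this.zooKeeperQuery = zooKeeperQuery;
    }

    String getLeader() {
        String leader = cachedLeader;
        // Hit ZooKeeper at most about once per TTL instead of N+1 times per request.
        if (leader == null || System.nanoTime() - cachedAtNanos > ttlNanos) {
            leader = zooKeeperQuery.get();
            cachedLeader = leader;
            cachedAtNanos = System.nanoTime();
        }
        return leader;
    }

    // An asynchronous change notification still overrides the cache immediately.
    void onLeaderChanged(String newLeader) {
        cachedLeader = newLeader;
        cachedAtNanos = System.nanoTime();
    }
}
```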





[jira] [Commented] (NIFI-6589) Leader Election should cache results obtained from ZooKeeper

2019-10-02 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942882#comment-16942882
 ] 

ASF subversion and git services commented on NIFI-6589:
---

Commit 2f6f8529157ac6c7434eae0ab4004a78fd49d547 in nifi's branch 
refs/heads/master from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=2f6f852 ]

NIFI-6589: Addressed NPE


> Leader Election should cache results obtained from ZooKeeper
> 
>
> Key: NIFI-6589
> URL: https://issues.apache.org/jira/browse/NIFI-6589
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Minor
> Fix For: 1.10.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In order to determine which node in a cluster is the Cluster Coordinator, a 
> node must make a request to ZooKeeper. That means that if we have N nodes in 
> a cluster, then for each request we must ask ZooKeeper at least N+1 times 
> (and no more than N+2) who the Cluster Coordinator is. This is done because 
> when the request comes in, the node must determine whether or not it is the 
> Cluster Coordinator. If so, it must replicate the request to each node. If 
> not, it must forward the request to the Cluster Coordinator, which will then 
> do so. When the request is replicated, each node will again check if it is 
> the Cluster Coordinator. If we instead cache the result of querying 
> ZooKeeper for a short period of time, say 1 minute, we can dramatically 
> decrease the number of times that we hit ZooKeeper. If the Coordinator / 
> Primary Node changes in the meantime, the node should still be notified of 
> the change asynchronously.
> The polling is done currently because we've seen situations where the 
> asynchronous notification did not happen. But if we update the code so that 
> we cache the results, this means that we will also update the code for 
> caching results of which node is the Primary Node. This is a benefit as 
> well, because currently we don't poll for this and, as a result, if we do 
> happen to miss the notification, we could theoretically have two nodes 
> running processors that should only run on the Primary Node.





[GitHub] [nifi] tpalfy commented on a change in pull request #3611: NIFI-6009 ScanKudu Processor

2019-10-02 Thread GitBox
tpalfy commented on a change in pull request #3611: NIFI-6009 ScanKudu Processor
URL: https://github.com/apache/nifi/pull/3611#discussion_r330467418
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/test/java/org/apache/nifi/processors/kudu/TestScanKudu.java
 ##
 @@ -0,0 +1,449 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.kudu;
+
+import org.apache.kudu.client.KuduException;
+import org.apache.kudu.test.KuduTestHarness;
+import org.apache.kudu.test.cluster.MiniKuduCluster;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import java.util.List;
+import java.util.Map;
+import java.util.HashMap;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+
+public class TestScanKudu {
+
+private MockScanKudu kuduScan;
+private TestRunner runner;
+public static final String DEFAULT_TABLE_NAME = "Nifi-Kudu-Table";
+public static final String DEFAULT_MASTERS = "testLocalHost:7051";
+
+@Rule
+public KuduTestHarness harness = new KuduTestHarness(
+new MiniKuduCluster.MiniKuduClusterBuilder()
+.addMasterServerFlag("--use_hybrid_clock=false")
+.addTabletServerFlag("--use_hybrid_clock=false")
+);
+
+@Before
+public void setup() throws InitializationException {
+kuduScan = new MockScanKudu();
+kuduScan.kuduClient = harness.getClient();
+runner = TestRunners.newTestRunner(kuduScan);
+
+setUpTestRunner(runner);
+}
+
+private void setUpTestRunner(TestRunner testRunner) throws 
InitializationException {
+testRunner.setProperty(PutKudu.KUDU_MASTERS, DEFAULT_MASTERS);
+testRunner.setProperty(ScanKudu.TABLE_NAME, DEFAULT_TABLE_NAME);
+}
+
+@Test
+public void testKuduProjectedColumnsValidation() {
+runner.setProperty(ScanKudu.TABLE_NAME, DEFAULT_TABLE_NAME);
+runner.setProperty(ScanKudu.PREDICATES, "column1=val1");
+runner.assertValid();
+
+runner.setProperty(ScanKudu.PROJECTED_COLUMNS, "c1,c2");
+runner.assertValid();
+
+runner.setProperty(ScanKudu.PROJECTED_COLUMNS, "c1");
+runner.assertValid();
+
+runner.setProperty(ScanKudu.PROJECTED_COLUMNS, "c1 c2,c3");
+runner.assertNotValid();
+
+runner.setProperty(ScanKudu.PROJECTED_COLUMNS, "c1:,c2,c3");
+runner.assertNotValid();
+
+runner.setProperty(ScanKudu.PROJECTED_COLUMNS, "c1,c1,");
+runner.assertNotValid();
+}
+
+@Test
+public void testKuduPredicatesValidation() {
+runner.setProperty(ScanKudu.TABLE_NAME, DEFAULT_TABLE_NAME);
+runner.setProperty(ScanKudu.PREDICATES, "column1=val1");
+runner.assertValid();
+
+runner.setProperty(ScanKudu.PREDICATES, "column1>val1");
+runner.assertValid();
+
+runner.setProperty(ScanKudu.PREDICATES, "column1=val1");
+runner.assertValid();
+
+runner.setProperty(ScanKudu.PREDICATES, "column1=>=val1");
+runner.assertNotValid();
+
+runner.setProperty(ScanKudu.PREDICATES, "column1:val1");
+runner.assertNotValid();
+}
+
+@Test
+public void testNoIncomingFlowFile() {
+runner.setProperty(ScanKudu.TABLE_NAME, DEFAULT_TABLE_NAME);
+runner.setProperty(ScanKudu.PREDICATES, "column1=val1");
+
+runner.run(1, false);
+runner.assertTransferCount(ScanKudu.REL_FAILURE, 0);
+runner.assertTransferCount(ScanKudu.REL_SUCCESS, 0);
+runner.assertTransferCount(ScanKudu.REL_ORIGINAL, 0);
+}
+
+@Test
+public void testInvalidKuduTableName() throws KuduException {
+final Map<String, String> rows = new HashMap<>();
+rows.put("key", "val1");
+rows.put("key1", "val1");
+
+kuduScan.insertTestRecordsToKuduTable(DEFAULT_TABLE_NAME, rows);
+
+runner.setProperty(ScanKudu.TABLE_NAME, "${table1}");
+runner.setProperty(ScanKudu.PREDICATES, "key1=val1");
+

[GitHub] [nifi] tpalfy commented on a change in pull request #3611: NIFI-6009 ScanKudu Processor

2019-10-02 Thread GitBox
tpalfy commented on a change in pull request #3611: NIFI-6009 ScanKudu Processor
URL: https://github.com/apache/nifi/pull/3611#discussion_r330593180
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/test/java/org/apache/nifi/processors/kudu/TestScanKudu.java
 ##
 @@ -0,0 +1,449 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.kudu;
+
+import org.apache.kudu.client.KuduException;
+import org.apache.kudu.test.KuduTestHarness;
+import org.apache.kudu.test.cluster.MiniKuduCluster;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import java.util.List;
+import java.util.Map;
+import java.util.HashMap;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+
+public class TestScanKudu {
 
 Review comment:
   The current tests only work with String columns, so I wrote one that covers 
(mostly) all the currently supported types.
   Sure enough, there are some small bugs in the handling of some of the other 
types.
   
   Here's the unit test (I will comment on the bugs in the corresponding 
class):
   ```java
   @Test
   public void testScanKuduWithMultipleTypes() throws KuduException {
   List<ColumnSchema> columns = Arrays.asList(
   new ColumnSchema.ColumnSchemaBuilder("key", 
Type.INT8).key(true).build(),
   new ColumnSchema.ColumnSchemaBuilder(Type.INT16.getName(), 
Type.INT16).key(true).build(),
   new ColumnSchema.ColumnSchemaBuilder(Type.INT32.getName(), 
Type.INT32).key(true).build(),
   new ColumnSchema.ColumnSchemaBuilder(Type.INT64.getName(), 
Type.INT64).key(true).build(),
   new ColumnSchema.ColumnSchemaBuilder(Type.BINARY.getName(), 
Type.BINARY).key(false).build(),
   new ColumnSchema.ColumnSchemaBuilder(Type.STRING.getName(), 
Type.STRING).key(false).build(),
   new ColumnSchema.ColumnSchemaBuilder(Type.BOOL.getName(), 
Type.BOOL).key(false).build(),
   new ColumnSchema.ColumnSchemaBuilder(Type.FLOAT.getName(), 
Type.FLOAT).key(false).build(),
   new ColumnSchema.ColumnSchemaBuilder(Type.DOUBLE.getName(), 
Type.DOUBLE).key(false).build(),
   new 
ColumnSchema.ColumnSchemaBuilder(Type.UNIXTIME_MICROS.getName(), 
Type.UNIXTIME_MICROS).key(false).build(),
   new ColumnSchema.ColumnSchemaBuilder(Type.DECIMAL.getName(), 
Type.DECIMAL).typeAttributes(
   new 
ColumnTypeAttributes.ColumnTypeAttributesBuilder()
   .precision(20)
   .scale(4)
   .build()
   ).key(false).build()
   );
   
   Instant now = Instant.now();
   
   KuduTable kuduTable = kuduScan.getKuduTable(DEFAULT_TABLE_NAME, 
columns);
   Insert insert = kuduTable.newInsert();
   PartialRow rows = insert.getRow();
   rows.addByte("key", (byte) 1);
   rows.addShort(Type.INT16.getName(), (short)20);
   rows.addInt(Type.INT32.getName(), 300);
   rows.addLong(Type.INT64.getName(), 4000L);
   rows.addBinary(Type.BINARY.getName(), new byte[]{55, 89});
   rows.addString(Type.STRING.getName(), "stringValue");
   rows.addBoolean(Type.BOOL.getName(), true);
   rows.addFloat(Type.FLOAT.getName(), 1.5F);
   rows.addDouble(Type.DOUBLE.getName(), 10.28);
   rows.addTimestamp(Type.UNIXTIME_MICROS.getName(), 
Timestamp.from(now));
   rows.addDecimal(Type.DECIMAL.getName(), new BigDecimal("3.1415"));
   
   KuduSession kuduSession = kuduScan.kuduClient.newSession();
   kuduSession.apply(insert);
   kuduSession.close();
   
   runner.setProperty(ScanKudu.TABLE_NAME, DEFAULT_TABLE_NAME);
   
   runner.setIncomingConnection(false);
   runner.enqueue();
   runner.run(1, false);
   
   runner.a

[GitHub] [nifi] tpalfy commented on a change in pull request #3611: NIFI-6009 ScanKudu Processor

2019-10-02 Thread GitBox
tpalfy commented on a change in pull request #3611: NIFI-6009 ScanKudu Processor
URL: https://github.com/apache/nifi/pull/3611#discussion_r330521946
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/ScanKudu.java
 ##
 @@ -0,0 +1,392 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.kudu;
+
+import java.util.concurrent.atomic.AtomicLong;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.kudu.ColumnSchema;
+import org.apache.kudu.client.KuduPredicate;
+import org.apache.kudu.client.KuduScanner;
+import org.apache.kudu.client.KuduTable;
+import org.apache.kudu.client.RowResult;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processors.kudu.io.ResultHandler;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import javax.security.auth.login.LoginException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"kudu", "scan", "fetch", "get"})
+@CapabilityDescription("Scans rows from a Kudu table with an optional list of 
predicates")
+@WritesAttributes({
+@WritesAttribute(attribute = "kudu.table", description = "The name of 
the Kudu table that the row was fetched from"),
+@WritesAttribute(attribute = "mime.type", description = "Set to 
application/json when using a Destination of flowfile-content, not set or 
modified otherwise"),
+@WritesAttribute(attribute = "kudu.rows.count", description = "Number 
of rows in the content of given flow file"),
+@WritesAttribute(attribute = "scankudu.results.found", description = 
"Indicates whether at least one row has been found in given Kudu table with 
provided predicates. "
++ "Could be null (not present) if transfered to FAILURE")})
+public class ScanKudu extends AbstractKuduProcessor {
+
+static final Pattern PREDICATES_PATTERN = 
Pattern.compile("\\w+((<=|>=|[=<>])(\\w|-)+)?(?:,\\w+((<=|>=|[=<>])(\\w|-)+)?)*");
+static final Pattern COLUMNS_PATTERN = 
Pattern.compile("\\w+((\\w)+)?(?:,\\w+((\\w)+)?)*");
+
+protected static final PropertyDescriptor TABLE_NAME = new 
PropertyDescriptor.Builder()
+.name("table-name")
+.displayName("Table Name")
+.description("The name of the Kudu Table to put data into")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.build();
+
+  static final PropertyDescriptor PREDICATES =
+  new PropertyDescriptor.Builder()
+  .name("Predicates")
+  .description("A comma-separated list of Predicates,"
+  + "EQUALS: \"(colName)=(value)\","
+  + "GREATER: \"(colName)<(value)\","
+  + "LESS: \"(colName)>(value)\","
+  + "GREATER_EQUAL: \"(colName)>=(value)\","
+  + "LESS_EQUAL: \"(colName)<=(value)\"")
+  .required(false)
+  
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+  
.addValidator(StandardValidators.createRegexMatchingV

[GitHub] [nifi] tpalfy commented on a change in pull request #3611: NIFI-6009 ScanKudu Processor

2019-10-02 Thread GitBox
tpalfy commented on a change in pull request #3611: NIFI-6009 ScanKudu Processor
URL: https://github.com/apache/nifi/pull/3611#discussion_r330586334
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/ScanKudu.java
 ##
 @@ -0,0 +1,392 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.kudu;
+
+import java.util.concurrent.atomic.AtomicLong;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.kudu.ColumnSchema;
+import org.apache.kudu.client.KuduPredicate;
+import org.apache.kudu.client.KuduScanner;
+import org.apache.kudu.client.KuduTable;
+import org.apache.kudu.client.RowResult;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processors.kudu.io.ResultHandler;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import javax.security.auth.login.LoginException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"kudu", "scan", "fetch", "get"})
+@CapabilityDescription("Scans rows from a Kudu table with an optional list of 
predicates")
+@WritesAttributes({
+@WritesAttribute(attribute = "kudu.table", description = "The name of 
the Kudu table that the row was fetched from"),
+@WritesAttribute(attribute = "mime.type", description = "Set to 
application/json when using a Destination of flowfile-content, not set or 
modified otherwise"),
+@WritesAttribute(attribute = "kudu.rows.count", description = "Number 
of rows in the content of given flow file"),
+@WritesAttribute(attribute = "scankudu.results.found", description = 
"Indicates whether at least one row has been found in given Kudu table with 
provided predicates. "
++ "Could be null (not present) if transfered to FAILURE")})
+public class ScanKudu extends AbstractKuduProcessor {
+
+static final Pattern PREDICATES_PATTERN = 
Pattern.compile("\\w+((<=|>=|[=<>])(\\w|-)+)?(?:,\\w+((<=|>=|[=<>])(\\w|-)+)?)*");
+static final Pattern COLUMNS_PATTERN = 
Pattern.compile("\\w+((\\w)+)?(?:,\\w+((\\w)+)?)*");
+
+protected static final PropertyDescriptor TABLE_NAME = new 
PropertyDescriptor.Builder()
+.name("table-name")
+.displayName("Table Name")
+.description("The name of the Kudu Table to put data into")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.build();
+
+  static final PropertyDescriptor PREDICATES =
+  new PropertyDescriptor.Builder()
+  .name("Predicates")
+  .description("A comma-separated list of Predicates,"
+  + "EQUALS: \"(colName)=(value)\","
+  + "GREATER: \"(colName)<(value)\","
+  + "LESS: \"(colName)>(value)\","
+  + "GREATER_EQUAL: \"(colName)>=(value)\","
+  + "LESS_EQUAL: \"(colName)<=(value)\"")
+  .required(false)
+  
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+  
.addValidator(StandardValidators.createRegexMatchingV

[GitHub] [nifi] tpalfy commented on a change in pull request #3611: NIFI-6009 ScanKudu Processor

2019-10-02 Thread GitBox
tpalfy commented on a change in pull request #3611: NIFI-6009 ScanKudu Processor
URL: https://github.com/apache/nifi/pull/3611#discussion_r330597920
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKuduProcessor.java
 ##
 @@ -260,4 +284,72 @@ protected Update updateRecordToKudu(KuduTable kuduTable, 
Record record, List columns = 
row.getSchema().getColumns().iterator();
   while (columns.hasNext()) {
   ColumnSchema col = columns.next();
   jsonBuilder.append("\"" + col.getName() + "\":");
   switch (col.getType()) {
   case STRING:
   jsonBuilder.append("\"" + row.getString(col.getName()) + 
"\"");
   break;
   case INT8:
   jsonBuilder.append("\"" + row.getByte(col.getName()) + 
"\"");
   break;
   case INT16:
   jsonBuilder.append("\"" + row.getShort(col.getName()) + 
"\"");
   break;
   case INT32:
   jsonBuilder.append("\"" + row.getInt(col.getName()) + 
"\"");
   break;
   case INT64:
   jsonBuilder.append("\"" + row.getLong(col.getName()) + 
"\"");
   break;
   case BOOL:
   jsonBuilder.append("\"" + row.getBoolean(col.getName()) 
+ "\"");
   break;
   case DECIMAL:
   jsonBuilder.append("\"" + row.getDecimal(col.getName()) 
+ "\"");
   break;
   case FLOAT:
   jsonBuilder.append("\"" + row.getFloat(col.getName()) + 
"\"");
   break;
   case DOUBLE:
   jsonBuilder.append("\"" + row.getDouble(col.getName()) + 
"\"");
   break;
   case UNIXTIME_MICROS:
   jsonBuilder.append("\"" + row.getLong(col.getName()) + 
"\"");
   break;
   case BINARY:
   jsonBuilder.append("\"0x" + 
Hex.encodeHexString(row.getBinaryCopy(col.getName())) + "\"");
   break;
   default:
   break;
   }
   if(columns.hasNext())
   jsonBuilder.append(",");
   }
   jsonBuilder.append("}]}");
   return jsonBuilder.toString();
   }
   ```
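   
   For illustration, a minimal sketch of how the numeric and boolean branches 
could emit native JSON types instead of quoting every value as a string, which 
is one of the small issues visible in the quoted helper. The helper name and 
structure are assumptions for this sketch, not the PR's actual code; BINARY 
handling and proper string escaping are omitted.
   ```java
   import org.apache.kudu.ColumnSchema;
   import org.apache.kudu.client.RowResult;
   
   final class JsonTypeSketch {
   
       // Hypothetical helper: append one column value with its native JSON type.
       static void appendJsonValue(StringBuilder json, RowResult row, ColumnSchema col) {
           final String name = col.getName();
           switch (col.getType()) {
               case STRING:
                   // Strings stay quoted; a real implementation must also escape them.
                   json.append('"').append(row.getString(name)).append('"');
                   break;
               case INT8:    json.append(row.getByte(name));     break;
               case INT16:   json.append(row.getShort(name));    break;
               case INT32:   json.append(row.getInt(name));      break;
               case INT64:
               case UNIXTIME_MICROS:
                             json.append(row.getLong(name));     break;
               case BOOL:    json.append(row.getBoolean(name));  break;
               case FLOAT:   json.append(row.getFloat(name));    break;
               case DOUBLE:  json.append(row.getDouble(name));   break;
               case DECIMAL: json.append(row.getDecimal(name));  break;
               default:      json.append("null");                break;  // e.g. BINARY, omitted here
           }
       }
   }
   ```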


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] bbende commented on issue #3785: NIFI-6589: Addressed NPE

2019-10-02 Thread GitBox
bbende commented on issue #3785: NIFI-6589: Addressed NPE
URL: https://github.com/apache/nifi/pull/3785#issuecomment-537532339
 
 
   Looks good, will merge


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] bbende merged pull request #3785: NIFI-6589: Addressed NPE

2019-10-02 Thread GitBox
bbende merged pull request #3785: NIFI-6589: Addressed NPE
URL: https://github.com/apache/nifi/pull/3785
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] markap14 opened a new pull request #3785: NIFI-6589: Addressed NPE

2019-10-02 Thread GitBox
markap14 opened a new pull request #3785: NIFI-6589: Addressed NPE
URL: https://github.com/apache/nifi/pull/3785
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Reopened] (NIFI-6589) Leader Election should cache results obtained from ZooKeeper

2019-10-02 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne reopened NIFI-6589:
--

Reopened because I encountered an NPE. Depending on timing this can occur during 
startup but is easily addressed.

> Leader Election should cache results obtained from ZooKeeper
> 
>
> Key: NIFI-6589
> URL: https://issues.apache.org/jira/browse/NIFI-6589
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Minor
> Fix For: 1.10.0
>
>
> In order to determine which node in a cluster is the Cluster Coordinator, a 
> node must make a request to ZooKeeper. That means that if we have N nodes in 
> a cluster, then we must ask ZooKeeper for each request at least (N+1) times 
> (and no more than N+2) who is the Cluster Coordinator. This is done because 
> when the request comes in, the node must determine whether or not it is the 
> Cluster Coordinator. If so, it must replicate the request to each node. If 
> not, it must forward the request to the Cluster Coordinator, which will then 
> do so. When the request is replicated, it will again check if it is the 
> cluster coordinator. If we instead cache the result of querying ZooKeeper for 
> a short period of time, say 1 minute, we can dramatically decrease the number 
> of times that we hit ZooKeeper. If the Coordinator / Primary Node changes in 
> the meantime, it should still be notified of the change asynchronously.
> The polling is done currently because we've seen situations where the 
> asynchronous notification did not happen. But if we update the code so that 
> we cache the results, this means that we will also update the code for 
> caching results of which node is Primary Node. This is a benefit as well, 
> because currently we don't poll for this and as a result, if we do happen to 
> miss the notification, we could theoretically have 2 nodes running processors 
> that should only run on Primary Node.
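
To make the caching idea concrete, here is a minimal sketch of a TTL-bounded 
lookup cache of the kind described above. Class and method names are 
illustrative assumptions, not NiFi's actual implementation.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Sketch only: caches an expensive lookup (e.g. asking ZooKeeper which node
// is the Cluster Coordinator) for a fixed TTL, so replicated requests do not
// each hit ZooKeeper.
final class TtlCachedLookup<T> {
    private final Supplier<T> lookup;
    private final long ttlNanos;
    private final AtomicReference<Cached<T>> ref = new AtomicReference<>();

    TtlCachedLookup(Supplier<T> lookup, long ttlNanos) {
        this.lookup = lookup;
        this.ttlNanos = ttlNanos;
    }

    T get() {
        final Cached<T> cached = ref.get();
        final long now = System.nanoTime();
        if (cached != null && now - cached.fetchedAt < ttlNanos) {
            return cached.value;              // served from cache, no ZooKeeper hit
        }
        final Cached<T> fresh = new Cached<>(lookup.get(), now);
        ref.set(fresh);
        return fresh.value;
    }

    void invalidate() {                       // for the async change notification
        ref.set(null);
    }

    private static final class Cached<T> {
        final T value;
        final long fetchedAt;
        Cached(T value, long fetchedAt) {
            this.value = value;
            this.fetchedAt = fetchedAt;
        }
    }
}
```

The asynchronous change notification would call `invalidate()` so a 
coordinator change is still picked up before the TTL expires.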



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6736) If not given enough threads, Load Balanced Connections may block for long periods of time without making progress

2019-10-02 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-6736:
-
Status: Patch Available  (was: Open)

> If not given enough threads, Load Balanced Connections may block for long 
> periods of time without making progress
> -
>
> Key: NIFI-6736
> URL: https://issues.apache.org/jira/browse/NIFI-6736
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When load-balanced connections are used, we have a few different properties 
> that we can configure. Specifically, the properties with their default values 
> are:
> nifi.cluster.load.balance.connections.per.node=4
> nifi.cluster.load.balance.max.thread.count=8
> nifi.cluster.load.balance.comms.timeout=30 sec
> If the max thread count is below the number of connections per node * number 
> of nodes in the cluster, everything still works well when there are 
> reasonably high data volumes across all connections that are load-balanced. 
> However, if one of the connections has a low data volume, we can get into a 
> situation where the load balanced connections stop pushing data for some 
> period of time, typically approximately some multiple of the "comms.timeout" 
> property.
> This appears to be due to the fact that the server is using Socket IO and not 
> NIO and once data has been received, it will check if more data is available. 
> If it does not receive any indication for some period of time, it will time 
> out. Only then does it add the socket connection back to a pool of 
> connections to read from. This means that the thread can be stuck, waiting to 
> receive more data, and blocking any progress from other connections on that 
> thread.
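
For context, a minimal sketch of the on-demand threading approach described in 
the pull request below: each accepted connection gets its own worker thread, so 
an idle connection can only block itself. The class name, port, and handler 
body are illustrative assumptions, not the actual NiFi code.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: one worker per accepted connection, created on demand.
public final class OnDemandLoadBalanceServer {
    public static void main(String[] args) throws IOException {
        final ExecutorService workers = Executors.newCachedThreadPool();
        try (ServerSocket server = new ServerSocket(6342)) { // arbitrary port
            while (true) {
                final Socket socket = server.accept();
                socket.setSoTimeout(30_000);                 // analogous to comms.timeout
                workers.submit(() -> handle(socket));        // a slow connection blocks only itself
            }
        }
    }

    private static void handle(Socket socket) {
        try (Socket s = socket) {
            // read and queue incoming FlowFile data from s here
        } catch (IOException e) {
            // a read timeout here affects only this connection's worker
        }
    }
}
```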



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] markap14 opened a new pull request #3784: NIFI-6736: Create thread on demand to handle incoming request from cl…

2019-10-02 Thread GitBox
markap14 opened a new pull request #3784: NIFI-6736: Create thread on demand to 
handle incoming request from cl…
URL: https://github.com/apache/nifi/pull/3784
 
 
   …ient for load balancing. This allows us to avoid situations where we don't 
have enough threads and we block on the server side, waiting for data, when 
clients are trying to send data in another connection
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-6001) Nested versioned PGs can cause CS references to be incorrect on import

2019-10-02 Thread Bryan Bende (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942836#comment-16942836
 ] 

Bryan Bende commented on NIFI-6001:
---

This will be fixed by the same changes for NIFI-5910.

> Nested versioned PGs can cause CS references to be incorrect on import
> --
>
> Key: NIFI-6001
> URL: https://issues.apache.org/jira/browse/NIFI-6001
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0, 1.6.0, 1.8.0, 1.7.1
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>  Labels: SDLC
>
> Steps to reproduce...
> 1) Create a PG named PG1
> 2) Inside PG1 create another PG named PG2
> 3) Inside PG2 create two controller services where one service references the 
> other, example: CsvReader referencing AvroSchemaRegistry
> 4) Start version control on PG2
> 5) Start version control on PG1
> 6) On the root canvas, create a new PG and import PG1
> At this point if you go into the second instance of PG1 and look at the 
> services inside the second instance of PG2, the CsvReader no longer has the 
> correct reference to the AvroSchemaRegistry and says "Incompatible Controller 
> Service Configured".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-6001) Nested versioned PGs can cause CS references to be incorrect on import

2019-10-02 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende reassigned NIFI-6001:
-

Assignee: Bryan Bende

> Nested versioned PGs can cause CS references to be incorrect on import
> --
>
> Key: NIFI-6001
> URL: https://issues.apache.org/jira/browse/NIFI-6001
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0, 1.6.0, 1.8.0, 1.7.1
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>  Labels: SDLC
>
> Steps to reproduce...
> 1) Create a PG named PG1
> 2) Inside PG1 create another PG named PG2
> 3) Inside PG2 create two controller services where one service references the 
> other, example: CsvReader referencing AvroSchemaRegistry
> 4) Start version control on PG2
> 5) Start version control on PG1
> 6) On the root canvas, create a new PG and import PG1
> At this point if you go into the second instance of PG1 and look at the 
> services inside the second instance of PG2, the CsvReader no longer has the 
> correct reference to the AvroSchemaRegistry and says "Incompatible Controller 
> Service Configured".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-6314) Nested versioned process groups do not update properly

2019-10-02 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFI-6314.
---
Fix Version/s: 1.10.0
   Resolution: Fixed

> Nested versioned process groups do not update properly
> --
>
> Key: NIFI-6314
> URL: https://issues.apache.org/jira/browse/NIFI-6314
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.9.0, 1.9.1, 1.9.2
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>  Labels: SDLC
> Fix For: 1.10.0
>
>
> Steps to reproduce:
>  # NiFi#1 Create PGA
>  # NiFI#1 Create PGB inside PGA
>  # NiFI#1 Create some processors inside PGB
>  # NIFI#1 Start version control PGB
>  # NIFI#1 Start version control PGA
>  # NIFI#2 Import a new PG and select PGA v1 (at this point the same exact 
> flow is now in both NiFi's)
>  # NIFI#1 Go into PGB and modify the properties of some processors
>  # NIFI#1 Commit changes on PGB
>  # NIFI#1 Commit changes on PGA
>  # NIFI#2 Change version on PGA from v1 to v2 (caused PGB to be updated to v2 
> since PGA v2 points to PGB v2)
> At this point PGB in NIFI#2 thinks it has been updated to v2 according to the 
> version info in flow.xml.gz, but the actual changes from v2 have not been 
> applied, and it shows local changes that look like they undid what should be 
> the real changes. Choosing to revert the local changes will actually get back 
> to the real v2 state.
> You can also reproduce this using a single NiFi and having two instances of 
> the same versioned process group described above, or by having a single 
> instance of the versioned process group and changing the outer PGA back and 
> forth between v2 and v1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] MikeThomsen commented on a change in pull request #3771: NIFI-6723: Enrich processor-related and JVM GC metrics in Prometheus Reporting Task

2019-10-02 Thread GitBox
MikeThomsen commented on a change in pull request #3771: NIFI-6723: Enrich 
processor-related and JVM GC metrics in Prometheus Reporting Task
URL: https://github.com/apache/nifi/pull/3771#discussion_r330549206
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-reporting-task/src/main/java/org/apache/nifi/reporting/prometheus/api/PrometheusMetricsUtil.java
 ##
 @@ -249,6 +257,18 @@
 .labelNames("instance")
 .register(JVM_REGISTRY);
 
+private static final Gauge JVM_GC_RUNS = Gauge.build()
+.name("nifi_jvm_gc_runs")
+.help("NiFi JVM GC number of runs")
+.labelNames("inctance", "gc_name")
 
 Review comment:
   I think you meant `instance` for that label name.
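   
   As a quick illustration with the Prometheus Java simpleclient (instance and 
GC names below are hypothetical): the label names declared at build time must 
match what is later passed to `labels(...)`, so the misspelling would show up 
in every recorded sample and query.
   ```java
   import io.prometheus.client.CollectorRegistry;
   import io.prometheus.client.Gauge;
   
   public final class GcGaugeSketch {
       public static void main(String[] args) {
           final CollectorRegistry registry = new CollectorRegistry();
           final Gauge gcRuns = Gauge.build()
                   .name("nifi_jvm_gc_runs")
                   .help("NiFi JVM GC number of runs")
                   .labelNames("instance", "gc_name")   // corrected spelling
                   .register(registry);
           // Queries like nifi_jvm_gc_runs{instance="nifi-node-1"} only match
           // if the label was registered as "instance" in the first place.
           gcRuns.labels("nifi-node-1", "G1 Young Generation").set(42);
       }
   }
   ```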


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] MikeThomsen commented on a change in pull request #3771: NIFI-6723: Enrich processor-related and JVM GC metrics in Prometheus Reporting Task

2019-10-02 Thread GitBox
MikeThomsen commented on a change in pull request #3771: NIFI-6723: Enrich 
processor-related and JVM GC metrics in Prometheus Reporting Task
URL: https://github.com/apache/nifi/pull/3771#discussion_r330549252
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-reporting-task/src/main/java/org/apache/nifi/reporting/prometheus/api/PrometheusMetricsUtil.java
 ##
 @@ -249,6 +257,18 @@
 .labelNames("instance")
 .register(JVM_REGISTRY);
 
+private static final Gauge JVM_GC_RUNS = Gauge.build()
+.name("nifi_jvm_gc_runs")
+.help("NiFi JVM GC number of runs")
+.labelNames("inctance", "gc_name")
+.register(JVM_REGISTRY);
+
+private static final Gauge JVM_GC_TIME = Gauge.build()
+.name("nifi_jvm_gc_time")
+.help("NiFi JVM GC time in milliseconds")
+.labelNames("inctance", "gc_name")
 
 Review comment:
   Same here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified 
Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330455722
 
 

 ##
 File path: extensions/opc/src/putopc.cpp
 ##
 @@ -0,0 +1,466 @@
+/**
+ * PutOPC class definition
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "opc.h"
+#include "putopc.h"
+#include "utils/ByteArrayCallback.h"
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+#include "core/Resource.h"
+#include "controllers/SSLContextService.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+  core::Property PutOPCProcessor::ParentNodeID(
+  core::PropertyBuilder::createProperty("Parent node ID")
+  ->withDescription("Specifies the ID of the root node to traverse")
+  ->isRequired(true)->build());
+
+
+  core::Property PutOPCProcessor::ParentNodeIDType(
+  core::PropertyBuilder::createProperty("Parent node ID type")
+  ->withDescription("Specifies the type of the provided node ID")
+  ->isRequired(true)
+  ->withAllowableValues({"Path", "Int", 
"String"})->build());
+
+  core::Property PutOPCProcessor::ParentNameSpaceIndex(
+  core::PropertyBuilder::createProperty("Parent node namespace index")
+  ->withDescription("The index of the namespace. Used only if node ID 
type is not path.")
+  ->withDefaultValue(0)->build());
+
+  core::Property PutOPCProcessor::ValueType(
+  core::PropertyBuilder::createProperty("Value type")
+  ->withDescription("Set the OPC value type of the created nodes")
+  ->isRequired(true)->build());
+
+  core::Property PutOPCProcessor::TargetNodeIDType(
 
 Review comment:
   😿 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC 
Unified Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330455091
 
 

 ##
 File path: extensions/opc/src/opc.cpp
 ##
 @@ -0,0 +1,567 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+//OPC includes
+#include "opc.h"
+
+//MiNiFi includes
+#include "utils/ScopeGuard.h"
+#include "utils/StringUtils.h"
+#include "logging/Logger.h"
+#include "Exception.h"
+
+//Standard includes
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace opc {
+
+/*
+ * The following functions are only used internally in OPC lib, not to be 
exported
+ */
+
+namespace {
+
+  void add_value_to_variant(UA_Variant *variant, std::string &value) {
+UA_String ua_value = UA_STRING(&value[0]);
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_STRING]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, const char *value) {
+std::string strvalue(value);
+add_value_to_variant(variant, strvalue);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, int64_t value) {
+UA_Int64 ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_INT64]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, uint64_t value) {
+UA_UInt64 ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_UINT64]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, int32_t value) {
+UA_Int32 ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_INT32]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, uint32_t value) {
+UA_UInt32 ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_UINT32]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, bool value) {
+UA_Boolean ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_BOOLEAN]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, float value) {
+UA_Float ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_FLOAT]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, double value) {
+UA_Double ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_DOUBLE]);
+  }
+
+  core::logging::LOG_LEVEL MapOPCLogLevel(UA_LogLevel ualvl) {
+switch (ualvl) {
+  case UA_LOGLEVEL_TRACE:
+return core::logging::trace;
+  case UA_LOGLEVEL_DEBUG:
+return core::logging::debug;
+  case UA_LOGLEVEL_INFO:
+return core::logging::info;
+  case UA_LOGLEVEL_WARNING:
+return core::logging::warn;
+  case UA_LOGLEVEL_ERROR:
+return core::logging::err;
+  case UA_LOGLEVEL_FATAL:
+return core::logging::critical;
+  default:
+return core::logging::critical;
+}
+  }
+}
+
+/*
+ * End of internal functions
+ */
+
+Client::Client(std::shared_ptr<core::logging::Logger> logger, const 
std::string& applicationURI,
+   const std::vector<char>& certBuffer, const std::vector<char>& 
keyBuffer,
+   const std::vector<std::vector<char>>& trustBuffers) {
+
+  client_ = UA_Client_new();
+  if (certBuffer.empty()) {
+UA_ClientConfig_setDefault(UA_Client_getConfig(client_));
+  } else {
+UA_ClientConfig *cc = UA_Client_getConfig(client_);
+cc->securityMode = UA_MESSAGESECURITYMODE_SIGNANDENCRYPT;
+
+// Certificate
+UA_ByteString certByteString = UA_STRING_NULL;
+certByteString.length = certBuffer.size();
+certByteString.data = (UA_Byte*)UA_malloc(certByteString.length * 
sizeof(UA_Byte));
+memcpy(certByteString.data, certBuffer.data(), certByteString.length);
+
+// Key
+UA_ByteString keyByteString = UA_STRING_NULL;
+keyByteString.length = keyBuffer.size();
+keyByteString.data = (UA_Byte*)UA_malloc(keyByteString.length * 
sizeof(UA_Byte));
+memcpy(keyByteString.data, keyBuffer.data(), keyByteString.length);
+
+// Trusted certificates
+UA_STACKARRAY(UA_ByteString, trustList, trustBuffers.size());
+for (size_t i = 0; i < trustBuffers.size(); 

[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC 
Unified Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330454532
 
 

 ##
 File path: extensions/opc/include/fetchopc.h
 ##
 @@ -0,0 +1,109 @@
+/**
+ * FetchOPC class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef NIFI_MINIFI_CPP_FetchOPCProcessor_H
+#define NIFI_MINIFI_CPP_FetchOPCProcessor_H
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "opc.h"
+#include "opcbase.h"
+#include "utils/ByteArrayCallback.h"
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+#include "core/Resource.h"
+#include "controllers/SSLContextService.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+class FetchOPCProcessor : public BaseOPCProcessor {
+public:
+  static constexpr char const* ProcessorName = "FetchOPC";
+  // Supported Properties
+  static core::Property NodeIDType;
+  static core::Property NodeID;
+  static core::Property NameSpaceIndex;
+  static core::Property MaxDepth;
+
+  // Supported Relationships
+  static core::Relationship Success;
+  static core::Relationship Failure;
+
+  FetchOPCProcessor(std::string name, utils::Identifier uuid = 
utils::Identifier())
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC 
Unified Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330454606
 
 

 ##
 File path: extensions/opc/src/fetchopc.cpp
 ##
 @@ -0,0 +1,235 @@
+/**
+ * FetchOPC class definition
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "opc.h"
+#include "fetchopc.h"
+#include "utils/ByteArrayCallback.h"
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+#include "core/Resource.h"
+#include "controllers/SSLContextService.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+  core::Property FetchOPCProcessor::NodeID(
+  core::PropertyBuilder::createProperty("Node ID")
+  ->withDescription("Specifies the ID of the root node to traverse")
+  ->isRequired(true)->build());
+
+
+  core::Property FetchOPCProcessor::NodeIDType(
+  core::PropertyBuilder::createProperty("Node ID type")
+  ->withDescription("Specifies the type of the provided node ID")
+  ->isRequired(true)
+  ->withAllowableValues({"Path", "Int", "String"})->build());
+
+  core::Property FetchOPCProcessor::NameSpaceIndex(
+  core::PropertyBuilder::createProperty("Namespace index")
+  ->withDescription("The index of the namespace. Used only if node ID type 
is not path.")
+  ->withDefaultValue(0)->build());
+
+  core::Property FetchOPCProcessor::MaxDepth(
+  core::PropertyBuilder::createProperty("Max depth")
+  ->withDescription("Specifiec the max depth of browsing. 0 means 
unlimited.")
+  ->withDefaultValue(0)->build());
+
+  core::Relationship FetchOPCProcessor::Success("success", "Successfully 
retrieved OPC-UA nodes");
+  core::Relationship FetchOPCProcessor::Failure("failure", "Retrieved OPC-UA 
nodes where value cannot be extracted (only if enabled)");
+
+
+  void FetchOPCProcessor::initialize() {
+// Set the supported properties
+std::set<core::Property> fetchOPCProperties = {OPCServerEndPoint, NodeID, 
NodeIDType, NameSpaceIndex, MaxDepth};
+std::set<core::Property> baseOPCProperties = 
BaseOPCProcessor::getSupportedProperties();
+fetchOPCProperties.insert(baseOPCProperties.begin(), 
baseOPCProperties.end());
+setSupportedProperties(fetchOPCProperties);
+
+// Set the supported relationships
+setSupportedRelationships({Success, Failure});
+  }
+
+  void FetchOPCProcessor::onSchedule(const 
std::shared_ptr<core::ProcessContext> &context, const 
std::shared_ptr<core::ProcessSessionFactory> &factory) {
+logger_->log_trace("FetchOPCProcessor::onSchedule");
+
+translatedNodeIDs_.clear();  // Path might have changed during restart
+
+BaseOPCProcessor::onSchedule(context, factory);
+
+if(!configOK_) {
+  return;
+}
+
+configOK_ = false;
+
+std::string value;
+context->getProperty(NodeID.getName(), nodeID_);
+context->getProperty(NodeIDType.getName(), value);
+
+maxDepth_ = 0;
+context->getProperty(MaxDepth.getName(), maxDepth_);
+
+if (value == "String") {
+  idType_ = opc::OPCNodeIDType::String;
+} else if (value == "Int") {
+  idType_ = opc::OPCNodeIDType::Int;
+} else if (value == "Path") {
+  idType_ = opc::OPCNodeIDType::Path;
+} else {
+  // Where have our validators gone?
+  logger_->log_error("%s is not a valid node ID type!", value.c_str());
+}
+
+if(idType_ == opc::OPCNodeIDType::Int) {
+  try {
+int t = std::stoi(nodeID_);
+  } catch(...) {
+logger_->log_error("%s cannot be used as an int type node ID", 
nodeID_.c_str());
+return;
+  }
+}
+if(idType_ != opc::OPCNodeIDType::Path) {
+  if(!context->getProperty(NameSpaceIndex.getName(), nameSpaceIdx_)) {
+logger_->log_error("%s is mandatory in case %s is not Path", 
NameSpaceIndex.getName().c_str(), NodeIDType.getName().c_str());
+return;
+  }
+}
+
+configOK_ = true;
+  }
+
+  void FetchOPCProcessor::onTrigger(const 
std::shared_ptr &co

[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC 
Unified Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330454551
 
 

 ##
 File path: extensions/opc/src/fetchopc.cpp
 ##
 @@ -0,0 +1,235 @@
+/**
+ * FetchOPC class definition
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "opc.h"
+#include "fetchopc.h"
+#include "utils/ByteArrayCallback.h"
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+#include "core/Resource.h"
+#include "controllers/SSLContextService.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+  core::Property FetchOPCProcessor::NodeID(
+  core::PropertyBuilder::createProperty("Node ID")
+  ->withDescription("Specifies the ID of the root node to traverse")
+  ->isRequired(true)->build());
+
+
+  core::Property FetchOPCProcessor::NodeIDType(
+  core::PropertyBuilder::createProperty("Node ID type")
+  ->withDescription("Specifies the type of the provided node ID")
+  ->isRequired(true)
+  ->withAllowableValues({"Path", "Int", "String"})->build());
+
+  core::Property FetchOPCProcessor::NameSpaceIndex(
+  core::PropertyBuilder::createProperty("Namespace index")
+  ->withDescription("The index of the namespace. Used only if node ID type 
is not path.")
+  ->withDefaultValue(0)->build());
+
+  core::Property FetchOPCProcessor::MaxDepth(
+  core::PropertyBuilder::createProperty("Max depth")
+  ->withDescription("Specifiec the max depth of browsing. 0 means 
unlimited.")
+  ->withDefaultValue(0)->build());
+
+  core::Relationship FetchOPCProcessor::Success("success", "Successfully 
retrieved OPC-UA nodes");
+  core::Relationship FetchOPCProcessor::Failure("failure", "Retrieved OPC-UA 
nodes where value cannot be extracted (only if enabled)");
+
+
+  void FetchOPCProcessor::initialize() {
+// Set the supported properties
+std::set<core::Property> fetchOPCProperties = {OPCServerEndPoint, NodeID, 
NodeIDType, NameSpaceIndex, MaxDepth};
+std::set<core::Property> baseOPCProperties = 
BaseOPCProcessor::getSupportedProperties();
+fetchOPCProperties.insert(baseOPCProperties.begin(), 
baseOPCProperties.end());
+setSupportedProperties(fetchOPCProperties);
+
+// Set the supported relationships
+setSupportedRelationships({Success, Failure});
+  }
+
+  void FetchOPCProcessor::onSchedule(const 
std::shared_ptr<core::ProcessContext> &context, const 
std::shared_ptr<core::ProcessSessionFactory> &factory) {
+logger_->log_trace("FetchOPCProcessor::onSchedule");
+
+translatedNodeIDs_.clear();  // Path might have changed during restart
+
+BaseOPCProcessor::onSchedule(context, factory);
+
+if(!configOK_) {
+  return;
+}
+
+configOK_ = false;
+
+std::string value;
+context->getProperty(NodeID.getName(), nodeID_);
+context->getProperty(NodeIDType.getName(), value);
+
+maxDepth_ = 0;
+context->getProperty(MaxDepth.getName(), maxDepth_);
+
+if (value == "String") {
+  idType_ = opc::OPCNodeIDType::String;
+} else if (value == "Int") {
+  idType_ = opc::OPCNodeIDType::Int;
+} else if (value == "Path") {
+  idType_ = opc::OPCNodeIDType::Path;
+} else {
+  // Where have our validators gone?
+  logger_->log_error("%s is not a valid node ID type!", value.c_str());
+}
+
+if(idType_ == opc::OPCNodeIDType::Int) {
+  try {
+int t = std::stoi(nodeID_);
+  } catch(...) {
+logger_->log_error("%s cannot be used as an int type node ID", 
nodeID_.c_str());
+return;
+  }
+}
+if(idType_ != opc::OPCNodeIDType::Path) {
+  if(!context->getProperty(NameSpaceIndex.getName(), nameSpaceIdx_)) {
+logger_->log_error("%s is mandatory in case %s is not Path", 
NameSpaceIndex.getName().c_str(), NodeIDType.getName().c_str());
+return;
+  }
+}
+
+configOK_ = true;
+  }
+
+  void FetchOPCProcessor::onTrigger(const 
std::shared_ptr &co

[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC 
Unified Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330454511
 
 

 ##
 File path: extensions/opc/include/opcbase.h
 ##
 @@ -0,0 +1,86 @@
+/**
+ * OPCBase class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef NIFI_MINIFI_CPP_OPCBASE_H
+#define NIFI_MINIFI_CPP_OPCBASE_H
+
+#include 
+
+#include "opc.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+class BaseOPCProcessor : public core::Processor {
+ public:
+  static core::Property OPCServerEndPoint;
+
+  static core::Property ApplicationURI;
+  static core::Property Username;
+  static core::Property Password;
+  static core::Property CertificatePath;
+  static core::Property KeyPath;
+  static core::Property TrustedPath;
+
+  BaseOPCProcessor(std::string name, utils::Identifier uuid = 
utils::Identifier())
+  : Processor(name, uuid) {
+connection_ = nullptr;
 
 Review comment:
   Done, removed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC 
Unified Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330454630
 
 

 ##
 File path: extensions/opc/include/putopc.h
 ##
 @@ -0,0 +1,111 @@
+/**
+ * PutOPC class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef NIFI_MINIFI_CPP_PUTOPC_H
+#define NIFI_MINIFI_CPP_PUTOPC_H
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "opc.h"
+#include "opcbase.h"
+#include "utils/ByteArrayCallback.h"
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+#include "core/Resource.h"
+#include "controllers/SSLContextService.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+class PutOPCProcessor : public BaseOPCProcessor {
+ public:
+  static constexpr char const* ProcessorName = "PutOPC";
+  // Supported Properties
+  static core::Property ParentNodeIDType;
+  static core::Property ParentNodeID;
+  static core::Property ParentNameSpaceIndex;
+  static core::Property ValueType;
+
+  static core::Property TargetNodeIDType;
+  static core::Property TargetNodeID;
+  static core::Property TargetNodeBrowseName;
+  static core::Property TargetNodeNameSpaceIndex;
+
+  // Supported Relationships
+  static core::Relationship Success;
+  static core::Relationship Failure;
+
+  PutOPCProcessor(std::string name, utils::Identifier uuid = 
utils::Identifier())
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC 
Unified Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330454464
 
 

 ##
 File path: extensions/opc/include/opc.h
 ##
 @@ -0,0 +1,152 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+#ifndef NIFI_MINIFI_CPP_OPC_H
+#define NIFI_MINIFI_CPP_OPC_H
+
+#include "open62541/client.h"
+#include "open62541/client_highlevel.h"
+#include "open62541/client_config_default.h"
+#include "logging/Logger.h"
+#include "Exception.h"
+
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace opc {
+
+class OPCException : public minifi::Exception {
+ public:
+  OPCException(ExceptionType type, std::string &&errorMsg)
+  : Exception(type, errorMsg) {
+  }
+};
+
+enum class OPCNodeIDType{ Path, Int, String };
+
+enum class OPCNodeDataType{ Int64, UInt64, Int32, UInt32, Boolean, Float, 
Double, String };
+
+struct NodeData;
+
+class Client;
+
+using nodeFoundCallBackFunc = bool(Client& client, const 
UA_ReferenceDescription*, const std::string&);
+
+class Client {
+ public:
+  bool isConnected();
+  UA_StatusCode connect(const std::string& url, const std::string& username = 
"", const std::string& password = "");
+  ~Client();
+  NodeData getNodeData(const UA_ReferenceDescription *ref, const std::string& 
basePath = "");
+  UA_ReferenceDescription * getNodeReference(UA_NodeId nodeId);
+  void traverse(UA_NodeId nodeId, std::function cb, 
const std::string& basePath = "", uint32_t maxDepth = 0, bool fetchRoot = true);
+  bool exists(UA_NodeId nodeId);
+  UA_StatusCode translateBrowsePathsToNodeIdsRequest(const std::string& path, 
std::vector& foundNodeIDs, const 
std::shared_ptr& logger);
+
+  template
+  UA_StatusCode update_node(const UA_NodeId nodeId, T value);
+
+  template
+  UA_StatusCode add_node(const UA_NodeId parentNodeId, const UA_NodeId 
targetNodeId, std::string browseName, T value, OPCNodeDataType dt, UA_NodeId 
*receivedNodeId);
+
+ private:
+  Client (std::shared_ptr logger, const std::string& 
applicationURI,
+  const std::vector& certBuffer, const std::vector& keyBuffer,
+  const std::vector>& trustBuffers);
+
+  UA_Client *client_;
+  std::shared_ptr logger_;
+
+  friend std::unique_ptr 
createClient(std::shared_ptr logger,
 
 Review comment:
   Good idea, moved there, thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
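
A minimal sketch of a callback matching the nodeFoundCallBackFunc signature 
declared in this header (the return-value contract, true to keep browsing, 
is an assumption, as the diff does not spell it out):

    #include <string>
    #include "opc.h"  // declares opc::Client and nodeFoundCallBackFunc

    bool logEveryNode(org::apache::nifi::minifi::opc::Client& client,
                      const UA_ReferenceDescription* ref,
                      const std::string& basePath) {
      // A real callback would inspect ref->browseName / ref->nodeId here.
      (void)client; (void)ref; (void)basePath;
      return true;  // assumed: returning true continues the traversal
    }

Such a callback can then be handed to Client::traverse via std::function.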


[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
arpadboda commented on a change in pull request #635: MINIFICPP-819 - OPC 
Unified Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330451088
 
 

 ##
 File path: extensions/opc/src/putopc.cpp
 ##
 @@ -0,0 +1,466 @@
+/**
+ * PutOPC class definition
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "opc.h"
+#include "putopc.h"
+#include "utils/ByteArrayCallback.h"
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+#include "core/Resource.h"
+#include "controllers/SSLContextService.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+  core::Property PutOPCProcessor::ParentNodeID(
+  core::PropertyBuilder::createProperty("Parent node ID")
+  ->withDescription("Specifies the ID of the root node to traverse")
+  ->isRequired(true)->build());
+
+
+  core::Property PutOPCProcessor::ParentNodeIDType(
+  core::PropertyBuilder::createProperty("Parent node ID type")
+  ->withDescription("Specifies the type of the provided node ID")
+  ->isRequired(true)
+  ->withAllowableValues({"Path", "Int", 
"String"})->build());
+
+  core::Property PutOPCProcessor::ParentNameSpaceIndex(
+  core::PropertyBuilder::createProperty("Parent node namespace index")
+  ->withDescription("The index of the namespace. Used only if node ID 
type is not path.")
+  ->withDefaultValue(0)->build());
+
+  core::Property PutOPCProcessor::ValueType(
+  core::PropertyBuilder::createProperty("Value type")
+  ->withDescription("Set the OPC value type of the created nodes")
+  ->isRequired(true)->build());
+
+  core::Property PutOPCProcessor::TargetNodeIDType(
 
 Review comment:
   Can't as it's flowfile-dependent. :(


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
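
Since the target node ID type is resolved per flow file, it can only be 
checked at trigger time rather than by a static property validator. A 
self-contained sketch of that check (not the PR's actual code; onTrigger 
would route flow files failing it to the Failure relationship):

    #include <string>

    // Returns true only for the allowable node ID type values.
    bool isValidNodeIDType(const std::string& value) {
      return value == "Path" || value == "Int" || value == "String";
    }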


[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified 
Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330446986
 
 

 ##
 File path: extensions/opc/src/opc.cpp
 ##
 @@ -0,0 +1,567 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+//OPC includes
+#include "opc.h"
+
+//MiNiFi includes
+#include "utils/ScopeGuard.h"
+#include "utils/StringUtils.h"
+#include "logging/Logger.h"
+#include "Exception.h"
+
+//Standard includes
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace opc {
+
+/*
+ * The following functions are only used internally in OPC lib, not to be 
exported
+ */
+
+namespace {
+
+  void add_value_to_variant(UA_Variant *variant, std::string &value) {
+UA_String ua_value = UA_STRING(&value[0]);
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_STRING]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, const char *value) {
+std::string strvalue(value);
+add_value_to_variant(variant, strvalue);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, int64_t value) {
+UA_Int64 ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_INT64]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, uint64_t value) {
+UA_UInt64 ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_UINT64]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, int32_t value) {
+UA_Int32 ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_INT32]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, uint32_t value) {
+UA_UInt32 ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_UINT32]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, bool value) {
+UA_Boolean ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_BOOLEAN]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, float value) {
+UA_Float ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_FLOAT]);
+  }
+
+  void add_value_to_variant(UA_Variant *variant, double value) {
+UA_Double ua_value = value;
+UA_Variant_setScalarCopy(variant, &ua_value, &UA_TYPES[UA_TYPES_DOUBLE]);
+  }
+
+  core::logging::LOG_LEVEL MapOPCLogLevel(UA_LogLevel ualvl) {
+switch (ualvl) {
+  case UA_LOGLEVEL_TRACE:
+return core::logging::trace;
+  case UA_LOGLEVEL_DEBUG:
+return core::logging::debug;
+  case UA_LOGLEVEL_INFO:
+return core::logging::info;
+  case UA_LOGLEVEL_WARNING:
+return core::logging::warn;
+  case UA_LOGLEVEL_ERROR:
+return core::logging::err;
+  case UA_LOGLEVEL_FATAL:
+return core::logging::critical;
+  default:
+return core::logging::critical;
+}
+  }
+}
+
+/*
+ * End of internal functions
+ */
+
+Client::Client(std::shared_ptr logger, const 
std::string& applicationURI,
+   const std::vector& certBuffer, const std::vector& 
keyBuffer,
+   const std::vector>& trustBuffers) {
+
+  client_ = UA_Client_new();
+  if (certBuffer.empty()) {
+UA_ClientConfig_setDefault(UA_Client_getConfig(client_));
+  } else {
+UA_ClientConfig *cc = UA_Client_getConfig(client_);
+cc->securityMode = UA_MESSAGESECURITYMODE_SIGNANDENCRYPT;
+
+// Certificate
+UA_ByteString certByteString = UA_STRING_NULL;
+certByteString.length = certBuffer.size();
+certByteString.data = (UA_Byte*)UA_malloc(certByteString.length * 
sizeof(UA_Byte));
+memcpy(certByteString.data, certBuffer.data(), certByteString.length);
+
+// Key
+UA_ByteString keyByteString = UA_STRING_NULL;
+keyByteString.length = keyBuffer.size();
+keyByteString.data = (UA_Byte*)UA_malloc(keyByteString.length * 
sizeof(UA_Byte));
+memcpy(keyByteString.data, keyBuffer.data(), keyByteString.length);
+
+// Trusted certificates
+UA_STACKARRAY(UA_ByteString, trustList, trustBuffers.size());
+for (size_t i = 0; i < trustBuffers.size(); i++
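
For reference, the add_value_to_variant overloads above all funnel into the 
same open62541 call. A standalone sketch of that pattern (assuming an 
open62541 build on the include path; header name taken from opc.h above):

    #include "open62541/client.h"  // brings in UA_Variant and UA_TYPES

    int main() {
      UA_Variant variant;
      UA_Variant_init(&variant);
      UA_Int64 value = 42;
      // setScalarCopy deep-copies the scalar, so the variant owns its memory.
      UA_Variant_setScalarCopy(&variant, &value, &UA_TYPES[UA_TYPES_INT64]);
      UA_Variant_deleteMembers(&variant);  // release the copied value
      return 0;
    }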

[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified 
Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330440307
 
 

 ##
 File path: extensions/opc/src/putopc.cpp
 ##
 @@ -0,0 +1,466 @@
+/**
+ * PutOPC class definition
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "opc.h"
+#include "putopc.h"
+#include "utils/ByteArrayCallback.h"
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+#include "core/Resource.h"
+#include "controllers/SSLContextService.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+  core::Property PutOPCProcessor::ParentNodeID(
+  core::PropertyBuilder::createProperty("Parent node ID")
+  ->withDescription("Specifies the ID of the root node to traverse")
+  ->isRequired(true)->build());
+
+
+  core::Property PutOPCProcessor::ParentNodeIDType(
+  core::PropertyBuilder::createProperty("Parent node ID type")
+  ->withDescription("Specifies the type of the provided node ID")
+  ->isRequired(true)
+  ->withAllowableValues({"Path", "Int", 
"String"})->build());
+
+  core::Property PutOPCProcessor::ParentNameSpaceIndex(
+  core::PropertyBuilder::createProperty("Parent node namespace index")
+  ->withDescription("The index of the namespace. Used only if node ID 
type is not path.")
+  ->withDefaultValue(0)->build());
+
+  core::Property PutOPCProcessor::ValueType(
+  core::PropertyBuilder::createProperty("Value type")
+  ->withDescription("Set the OPC value type of the created nodes")
+  ->isRequired(true)->build());
+
+  core::Property PutOPCProcessor::TargetNodeIDType(
 
 Review comment:
   This should have the same validator as ParentNodeIDType.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
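
For reference, a sketch of the requested change, mirroring the 
ParentNodeIDType definition above (the withAllowableValues template argument 
is an assumption, since the archive rendering drops angle-bracket contents):

    core::Property PutOPCProcessor::TargetNodeIDType(
        core::PropertyBuilder::createProperty("Target node ID type")
            ->withDescription("Specifies the type of the provided node ID")
            ->isRequired(true)
            ->withAllowableValues<std::string>({"Path", "Int", "String"})
            ->build());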


[GitHub] [nifi-minifi-cpp] am-c-p-p opened a new pull request #658: MINIFICPP-1025 Code review.

2019-10-02 Thread GitBox
am-c-p-p opened a new pull request #658: MINIFICPP-1025 Code review.
URL: https://github.com/apache/nifi-minifi-cpp/pull/658
 
 
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified 
Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330417353
 
 

 ##
 File path: extensions/opc/src/fetchopc.cpp
 ##
 @@ -0,0 +1,235 @@
+/**
+ * FetchOPC class definition
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "opc.h"
+#include "fetchopc.h"
+#include "utils/ByteArrayCallback.h"
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+#include "core/Resource.h"
+#include "controllers/SSLContextService.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+  core::Property FetchOPCProcessor::NodeID(
+  core::PropertyBuilder::createProperty("Node ID")
+  ->withDescription("Specifies the ID of the root node to traverse")
+  ->isRequired(true)->build());
+
+
+  core::Property FetchOPCProcessor::NodeIDType(
+  core::PropertyBuilder::createProperty("Node ID type")
+  ->withDescription("Specifies the type of the provided node ID")
+  ->isRequired(true)
+  ->withAllowableValues({"Path", "Int", "String"})->build());
+
+  core::Property FetchOPCProcessor::NameSpaceIndex(
+  core::PropertyBuilder::createProperty("Namespace index")
+  ->withDescription("The index of the namespace. Used only if node ID type 
is not path.")
+  ->withDefaultValue(0)->build());
+
+  core::Property FetchOPCProcessor::MaxDepth(
+  core::PropertyBuilder::createProperty("Max depth")
+  ->withDescription("Specifiec the max depth of browsing. 0 means 
unlimited.")
+  ->withDefaultValue(0)->build());
+
+  core::Relationship FetchOPCProcessor::Success("success", "Successfully 
retrieved OPC-UA nodes");
+  core::Relationship FetchOPCProcessor::Failure("failure", "Retrieved OPC-UA 
nodes where value cannot be extracted (only if enabled)");
+
+
+  void FetchOPCProcessor::initialize() {
+// Set the supported properties
+std::set fetchOPCProperties = {OPCServerEndPoint, NodeID, 
NodeIDType, NameSpaceIndex, MaxDepth};
+std::set baseOPCProperties = 
BaseOPCProcessor::getSupportedProperties();
+fetchOPCProperties.insert(baseOPCProperties.begin(), 
baseOPCProperties.end());
+setSupportedProperties(fetchOPCProperties);
+
+// Set the supported relationships
+setSupportedRelationships({Success, Failure});
+  }
+
+  void FetchOPCProcessor::onSchedule(const 
std::shared_ptr &context, const 
std::shared_ptr &factory) {
+logger_->log_trace("FetchOPCProcessor::onSchedule");
+
+translatedNodeIDs_.clear();  // Path might have changed during restart
+
+BaseOPCProcessor::onSchedule(context, factory);
+
+if(!configOK_) {
+  return;
+}
+
+configOK_ = false;
+
+std::string value;
+context->getProperty(NodeID.getName(), nodeID_);
+context->getProperty(NodeIDType.getName(), value);
+
+maxDepth_ = 0;
+context->getProperty(MaxDepth.getName(), maxDepth_);
+
+if (value == "String") {
+  idType_ = opc::OPCNodeIDType::String;
+} else if (value == "Int") {
+  idType_ = opc::OPCNodeIDType::Int;
+} else if (value == "Path") {
+  idType_ = opc::OPCNodeIDType::Path;
+} else {
+  // Where have our validators gone?
+  logger_->log_error("%s is not a valid node ID type!", value.c_str());
+}
+
+if(idType_ == opc::OPCNodeIDType::Int) {
+  try {
+int t = std::stoi(nodeID_);
+  } catch(...) {
+logger_->log_error("%s cannot be used as an int type node ID", 
nodeID_.c_str());
+return;
+  }
+}
+if(idType_ != opc::OPCNodeIDType::Path) {
+  if(!context->getProperty(NameSpaceIndex.getName(), nameSpaceIdx_)) {
+logger_->log_error("%s is mandatory in case %s is not Path", 
NameSpaceIndex.getName().c_str(), NodeIDType.getName().c_str());
+return;
+  }
+}
+
+configOK_ = true;
+  }
+
+  void FetchOPCProcessor::onTrigger(const 
std::shared_ptr &conte
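
The string-to-enum mapping in onSchedule above could also be factored into a 
small helper; a self-contained sketch (not the PR's actual code):

    #include <string>

    enum class OPCNodeIDType { Path, Int, String };

    // Returns false for values outside the allowable set, so the caller can
    // log the error and leave configOK_ unset, as onSchedule does above.
    bool parseNodeIDType(const std::string& value, OPCNodeIDType& result) {
      if (value == "Path")   { result = OPCNodeIDType::Path;   return true; }
      if (value == "Int")    { result = OPCNodeIDType::Int;    return true; }
      if (value == "String") { result = OPCNodeIDType::String; return true; }
      return false;
    }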

[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified 
Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330416285
 
 

 ##
 File path: extensions/opc/include/fetchopc.h
 ##
 @@ -0,0 +1,109 @@
+/**
+ * FetchOPC class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef NIFI_MINIFI_CPP_FetchOPCProcessor_H
+#define NIFI_MINIFI_CPP_FetchOPCProcessor_H
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "opc.h"
+#include "opcbase.h"
+#include "utils/ByteArrayCallback.h"
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+#include "core/Resource.h"
+#include "controllers/SSLContextService.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+class FetchOPCProcessor : public BaseOPCProcessor {
+public:
+  static constexpr char const* ProcessorName = "FetchOPC";
+  // Supported Properties
+  static core::Property NodeIDType;
+  static core::Property NodeID;
+  static core::Property NameSpaceIndex;
+  static core::Property MaxDepth;
+
+  // Supported Relationships
+  static core::Relationship Success;
+  static core::Relationship Failure;
+
+  FetchOPCProcessor(std::string name, utils::Identifier uuid = 
utils::Identifier())
 
 Review comment:
   Please initialize the integral type members to 0 in the constructor. I know 
that we fill them in onSchedule, but a mistake in a future rewrite could change 
that, so I would like to be defensive here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
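
A minimal sketch of the requested defensive initialization (member names 
taken from the diffs in this thread; the exact types are assumptions):

    #include <cstdint>

    class FetchOPCProcessorLike {
     public:
      // Zero the integral members up front so they stay well-defined even if
      // a future onSchedule rewrite forgets to assign them.
      FetchOPCProcessorLike() : nameSpaceIdx_(0), maxDepth_(0) {}
     private:
      int32_t nameSpaceIdx_;
      uint64_t maxDepth_;
    };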


[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified 
Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330416629
 
 

 ##
 File path: extensions/opc/src/fetchopc.cpp
 ##
 @@ -0,0 +1,235 @@
+/**
+ * FetchOPC class definition
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "opc.h"
+#include "fetchopc.h"
+#include "utils/ByteArrayCallback.h"
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+#include "core/Resource.h"
+#include "controllers/SSLContextService.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+  core::Property FetchOPCProcessor::NodeID(
+  core::PropertyBuilder::createProperty("Node ID")
+  ->withDescription("Specifies the ID of the root node to traverse")
+  ->isRequired(true)->build());
+
+
+  core::Property FetchOPCProcessor::NodeIDType(
+  core::PropertyBuilder::createProperty("Node ID type")
+  ->withDescription("Specifies the type of the provided node ID")
+  ->isRequired(true)
+  ->withAllowableValues({"Path", "Int", "String"})->build());
+
+  core::Property FetchOPCProcessor::NameSpaceIndex(
+  core::PropertyBuilder::createProperty("Namespace index")
+  ->withDescription("The index of the namespace. Used only if node ID type 
is not path.")
+  ->withDefaultValue(0)->build());
+
+  core::Property FetchOPCProcessor::MaxDepth(
+  core::PropertyBuilder::createProperty("Max depth")
+  ->withDescription("Specifiec the max depth of browsing. 0 means 
unlimited.")
+  ->withDefaultValue(0)->build());
+
+  core::Relationship FetchOPCProcessor::Success("success", "Successfully 
retrieved OPC-UA nodes");
+  core::Relationship FetchOPCProcessor::Failure("failure", "Retrieved OPC-UA 
nodes where value cannot be extracted (only if enabled)");
+
+
+  void FetchOPCProcessor::initialize() {
+// Set the supported properties
+std::set fetchOPCProperties = {OPCServerEndPoint, NodeID, 
NodeIDType, NameSpaceIndex, MaxDepth};
+std::set baseOPCProperties = 
BaseOPCProcessor::getSupportedProperties();
+fetchOPCProperties.insert(baseOPCProperties.begin(), 
baseOPCProperties.end());
+setSupportedProperties(fetchOPCProperties);
+
+// Set the supported relationships
+setSupportedRelationships({Success, Failure});
+  }
+
+  void FetchOPCProcessor::onSchedule(const 
std::shared_ptr &context, const 
std::shared_ptr &factory) {
+logger_->log_trace("FetchOPCProcessor::onSchedule");
+
+translatedNodeIDs_.clear();  // Path might have changed during restart
+
+BaseOPCProcessor::onSchedule(context, factory);
+
+if(!configOK_) {
+  return;
+}
+
+configOK_ = false;
+
+std::string value;
+context->getProperty(NodeID.getName(), nodeID_);
+context->getProperty(NodeIDType.getName(), value);
+
+maxDepth_ = 0;
+context->getProperty(MaxDepth.getName(), maxDepth_);
+
+if (value == "String") {
+  idType_ = opc::OPCNodeIDType::String;
+} else if (value == "Int") {
+  idType_ = opc::OPCNodeIDType::Int;
+} else if (value == "Path") {
+  idType_ = opc::OPCNodeIDType::Path;
+} else {
+  // Where have our validators gone?
+  logger_->log_error("%s is not a valid node ID type!", value.c_str());
+}
+
+if(idType_ == opc::OPCNodeIDType::Int) {
+  try {
+int t = std::stoi(nodeID_);
+  } catch(...) {
+logger_->log_error("%s cannot be used as an int type node ID", 
nodeID_.c_str());
+return;
+  }
+}
+if(idType_ != opc::OPCNodeIDType::Path) {
+  if(!context->getProperty(NameSpaceIndex.getName(), nameSpaceIdx_)) {
+logger_->log_error("%s is mandatory in case %s is not Path", 
NameSpaceIndex.getName().c_str(), NodeIDType.getName().c_str());
+return;
+  }
+}
+
+configOK_ = true;
+  }
+
+  void FetchOPCProcessor::onTrigger(const 
std::shared_ptr &conte

[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified 
Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330401847
 
 

 ##
 File path: extensions/opc/include/opc.h
 ##
 @@ -0,0 +1,152 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+#ifndef NIFI_MINIFI_CPP_OPC_H
+#define NIFI_MINIFI_CPP_OPC_H
+
+#include "open62541/client.h"
+#include "open62541/client_highlevel.h"
+#include "open62541/client_config_default.h"
+#include "logging/Logger.h"
+#include "Exception.h"
+
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace opc {
+
+class OPCException : public minifi::Exception {
+ public:
+  OPCException(ExceptionType type, std::string &&errorMsg)
+  : Exception(type, errorMsg) {
+  }
+};
+
+enum class OPCNodeIDType{ Path, Int, String };
+
+enum class OPCNodeDataType{ Int64, UInt64, Int32, UInt32, Boolean, Float, 
Double, String };
+
+struct NodeData;
+
+class Client;
+
+using nodeFoundCallBackFunc = bool(Client& client, const 
UA_ReferenceDescription*, const std::string&);
+
+class Client {
+ public:
+  bool isConnected();
+  UA_StatusCode connect(const std::string& url, const std::string& username = 
"", const std::string& password = "");
+  ~Client();
+  NodeData getNodeData(const UA_ReferenceDescription *ref, const std::string& 
basePath = "");
+  UA_ReferenceDescription * getNodeReference(UA_NodeId nodeId);
+  void traverse(UA_NodeId nodeId, std::function cb, 
const std::string& basePath = "", uint32_t maxDepth = 0, bool fetchRoot = true);
+  bool exists(UA_NodeId nodeId);
+  UA_StatusCode translateBrowsePathsToNodeIdsRequest(const std::string& path, 
std::vector& foundNodeIDs, const 
std::shared_ptr& logger);
+
+  template
+  UA_StatusCode update_node(const UA_NodeId nodeId, T value);
+
+  template
+  UA_StatusCode add_node(const UA_NodeId parentNodeId, const UA_NodeId 
targetNodeId, std::string browseName, T value, OPCNodeDataType dt, UA_NodeId 
*receivedNodeId);
+
+ private:
+  Client (std::shared_ptr logger, const std::string& 
applicationURI,
+  const std::vector& certBuffer, const std::vector& keyBuffer,
+  const std::vector>& trustBuffers);
+
+  UA_Client *client_;
+  std::shared_ptr logger_;
+
+  friend std::unique_ptr 
createClient(std::shared_ptr logger,
 
 Review comment:
   I think this would be better as a public static member function: it would be 
more straightforward and you wouldn't need to friend it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
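
A hedged sketch of the suggested shape (parameters abbreviated; the real 
factory takes the logger, application URI and certificate buffers shown 
above):

    #include <memory>

    class Client {
     public:
      // Public static factory: no friend declaration needed, and the private
      // constructor still prevents direct construction. std::make_unique is
      // avoided because it cannot reach the private constructor.
      static std::unique_ptr<Client> createClient(/* logger, URI, certs */) {
        return std::unique_ptr<Client>(new Client());
      }
     private:
      Client() = default;
    };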


[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified 
Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330417917
 
 

 ##
 File path: extensions/opc/include/putopc.h
 ##
 @@ -0,0 +1,111 @@
+/**
+ * PutOPC class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef NIFI_MINIFI_CPP_PUTOPC_H
+#define NIFI_MINIFI_CPP_PUTOPC_H
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "opc.h"
+#include "opcbase.h"
+#include "utils/ByteArrayCallback.h"
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+#include "core/Resource.h"
+#include "controllers/SSLContextService.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+class PutOPCProcessor : public BaseOPCProcessor {
+ public:
+  static constexpr char const* ProcessorName = "PutOPC";
+  // Supported Properties
+  static core::Property ParentNodeIDType;
+  static core::Property ParentNodeID;
+  static core::Property ParentNameSpaceIndex;
+  static core::Property ValueType;
+
+  static core::Property TargetNodeIDType;
+  static core::Property TargetNodeID;
+  static core::Property TargetNodeBrowseName;
+  static core::Property TargetNodeNameSpaceIndex;
+
+  // Supported Relationships
+  static core::Relationship Success;
+  static core::Relationship Failure;
+
+  PutOPCProcessor(std::string name, utils::Identifier uuid = 
utils::Identifier())
 
 Review comment:
   Please initialize integral typed members here too.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified Architecture Support

2019-10-02 Thread GitBox
bakaid commented on a change in pull request #635: MINIFICPP-819 - OPC Unified 
Architecture Support
URL: https://github.com/apache/nifi-minifi-cpp/pull/635#discussion_r330413396
 
 

 ##
 File path: extensions/opc/include/opcbase.h
 ##
 @@ -0,0 +1,86 @@
+/**
+ * OPCBase class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef NIFI_MINIFI_CPP_OPCBASE_H
+#define NIFI_MINIFI_CPP_OPCBASE_H
+
+#include 
+
+#include "opc.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Property.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+class BaseOPCProcessor : public core::Processor {
+ public:
+  static core::Property OPCServerEndPoint;
+
+  static core::Property ApplicationURI;
+  static core::Property Username;
+  static core::Property Password;
+  static core::Property CertificatePath;
+  static core::Property KeyPath;
+  static core::Property TrustedPath;
+
+  BaseOPCProcessor(std::string name, utils::Identifier uuid = 
utils::Identifier())
+  : Processor(name, uuid) {
+connection_ = nullptr;
 
 Review comment:
   This is a `std::unique_ptr` so explicitly assigning a nullptr is 
unnecessary.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
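
To illustrate the point: a default-constructed std::unique_ptr already 
compares equal to nullptr, so the assignment in the constructor body adds 
nothing. A self-contained sketch:

    #include <cassert>
    #include <memory>

    struct Connection {};

    struct Holder {
      std::unique_ptr<Connection> connection_;  // default-constructed: null
    };

    int main() {
      Holder h;
      assert(h.connection_ == nullptr);  // holds without any assignment
      return 0;
    }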


[GitHub] [nifi] asfgit closed pull request #3483: NIFI-6275 Improved handling of scheme and authority for ListHDFS when using "Full P…

2019-10-02 Thread GitBox
asfgit closed pull request #3483: NIFI-6275 Improved handling of scheme and 
authority for ListHDFS when using "Full P…
URL: https://github.com/apache/nifi/pull/3483
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-6275) ListHDFS with Full Path filter mode regex does not work as intended

2019-10-02 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942573#comment-16942573
 ] 

ASF subversion and git services commented on NIFI-6275:
---

Commit 8d748223ff8f80c7a85fc38013ecf0b221adc2da in nifi's branch 
refs/heads/master from Jeff Storck
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=8d74822 ]

NIFI-6275 ListHDFS now ignores scheme and authority when using "Full Path" 
filter mode
Updated description for "Full Path" filter mode to state that it will ignore 
scheme and authority
Added tests to TestListHDFS for listing empty and nonexistent dirs
Updated TestListHDFS' mock file system to track state properly when FileStatus 
instances are added, and updated listStatus to work properly with the 
underlying Map that contains FileStatus instances
Updated ListHDFS' additional details to document "Full Path" filter mode 
ignoring scheme and authority, with an example
Updated TestRunners, StandardProcessorTestRunner, 
MockProcessorInitializationContext to support passing in a logger.

NIFI-6275 Updated the "Full Path" filter mode to check the full path of a file 
with and without its scheme and authority against the filter regex
Added additional documentation for how ListHDFS handles scheme and authority 
when "Full Path" filter mode is used
Added test case for "Full Path" filter mode with a regular expression that 
includes scheme and authority

This closes #3483.

Signed-off-by: Koji Kawamura 


> ListHDFS with Full Path filter mode regex does not work as intended
> ---
>
> Key: NIFI-6275
> URL: https://issues.apache.org/jira/browse/NIFI-6275
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website, Extensions
>Affects Versions: 1.8.0, 1.9.0, 1.9.1, 1.9.2
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Minor
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When using the *{{Full Path}}* filter mode, the regex is applied to the URI 
> returned for each file which includes the scheme and authority (hostname, HA 
> namespace, port).  For the filter to work across multiple HDFS installations 
> (such as a flow used on multiple environments that is retrieved from NiFi 
> Registry), the regex filter would have to account for the scheme and 
> authority by matching possible scheme and authority values.
> To make it easier for the user, the *{{Full Path}}* filter mode's filter 
> regex should only be applied to the path components of the URI, without the 
> scheme and authority.  This can be done by updating the filter for *{{Full 
> Path}}* mode to use: 
> [Path.getPathWithoutSchemeAndAuthority(Path)|https://hadoop.apache.org/docs/r3.0.0/api/org/apache/hadoop/fs/Path.html#getPathWithoutSchemeAndAuthority-org.apache.hadoop.fs.Path-].
>   This will bring the regex values in line with the other modes, since those 
> are only applied to the value of *{{Path.getName()}}*.
> Migration guidance will be needed when this improvement is released.  
> Existing regex values for *{{Full Path}}* filter mode that accepted any 
> scheme and authority will still work. 
>  Those that specify a scheme and authority will *_not_* work, and will have 
> to be updated to specify only path components.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] ijokarumawak commented on issue #3483: NIFI-6275 Improved handling of scheme and authority for ListHDFS when using "Full P…

2019-10-02 Thread GitBox
ijokarumawak commented on issue #3483: NIFI-6275 Improved handling of scheme 
and authority for ListHDFS when using "Full P…
URL: https://github.com/apache/nifi/pull/3483#issuecomment-537384566
 
 
   Evaluating the regex against paths both with and without scheme and authority 
preserves existing flow behavior while improving usability. I like the idea. 
I'm +1. Changes look good to me. Merging to master, thank you @jtstorck !


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services