[GitHub] nifi issue #583: NIFI-2115 Detailed Version Info for About Box
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/583 Here's what the result looks like while the details section is expanded. I plead guilty to being a bad UI designer, any tips would be appreciated. ![nifi-2115-about-box-info](https://cloud.githubusercontent.com/assets/3151078/16352007/0c6735ac-3a1f-11e6-8471-c56ef57641cb.png) --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] nifi pull request #583: NIFI-2115 Detailed Version Info for About Box
GitHub user jvwing opened a pull request: https://github.com/apache/nifi/pull/583 NIFI-2115 Detailed Version Info for About Box
* Java version and vendor
* OS name and version
* Build number (commit SHA), branch, and timestamp
You can merge this pull request into a Git repository by running: $ git pull https://github.com/jvwing/nifi NIFI-2115-about-box-info-1 Alternatively, you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/583.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #583 commit 3aefbc27a352d99a239211df692911912f9deb05 Author: James Wing <jvw...@gmail.com> Date: 2016-06-23T20:53:23Z NIFI-2115 Detailed Version Info for About Box
* Java version and vendor
* OS name and version
* Build number (commit SHA), branch, and timestamp
[GitHub] nifi issue #362: NIFI-1769: added support for SSE-KMS and signature s3v4 aut...
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/362 @miquillo I'm happy to review and test the contributions you have made, and I would be happy to back up my comments with an implementation of the signature version controls. But I am hoping you will continue to work with us on this PR. I would very much like your help in either writing the code or reviewing and testing the changes, as you have both experience with SSE-KMS in NiFi and knowledge of the driving use case. What would you feel most comfortable with?
[GitHub] nifi issue #561: NIFI-2063 Adjusting handling of service install by using sc...
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/561 The update looks good to me. I was able to install, start, stop, reboot, etc. No test or contrib-check issues. I will squash and merge shortly. Thanks @apiri for the script and @YolandaMDavis for getting this started.
[GitHub] nifi issue #553: NIFI-2063 - Install Script Relative Path Mismatch from Init...
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/553 Thanks for the update, @YolandaMDavis. I tested this on Amazon Linux, and it worked well. The service installed, started, restarted after a reboot, stopped, etc. I was worried that the magic chkconfig comments in nifi.sh would not be picked up after the concatenation, but they seemed to work when I manually adjusted them and then ran the install. And thanks for the `chmod 755`, I used to do that manually. I'm hoping others will review and/or test. I plan to merge this later today unless somebody stops me.
[GitHub] nifi issue #362: NIFI-1769: added support for SSE-KMS and signature s3v4 aut...
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/362 @miquillo, what do you think about using [ClientConfiguration::setSignerOverride()](http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html#setSignerOverride(java.lang.String)) to control the signature version? One of the advantages I see is better isolation for the processor vs. any other NiFi AWS processors. I'm a bit worried that one PutS3Object processor using SSE-KMS would change the settings of other processors running at the same time. I believe an appropriate location to do this would be in AbstractS3Processor::createClient(). That would allow the configuration code to be shared, while the configured value would remain specific to individual processors. But I'm not sure I agree with configuring the version as a true/false setting for signature version 4. I would recommend a list of values:
* AWS SDK default (as the default selection)
* Signature v2
* Signature v4
That leaves room for the AWS SDK default to change if/when we upgrade to a newer SDK, and it would allow users to explicitly request either v4 or v2 to match whatever features and endpoint they are using. What do you think?
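To make the three-value property concrete, here is a minimal sketch of how a selected value could map to the argument for `ClientConfiguration.setSignerOverride()` in `createClient()`. The helper class and constant names are hypothetical, not code from this PR; only the signer type strings (`S3SignerType`, `AWSS3V4SignerType`) are taken from the AWS SDK for Java 1.x documentation.

```java
// Hypothetical sketch: map the proposed allowable values to the string that
// would be passed to ClientConfiguration.setSignerOverride() (AWS SDK 1.x).
// Returning null means "leave the SDK default signer in place".
public class SignerOverrideHelper {
    public static final String DEFAULT_SIGNATURE = "AWS SDK default";
    public static final String SIGNATURE_V2 = "Signature v2";
    public static final String SIGNATURE_V4 = "Signature v4";

    public static String toSignerOverride(final String propertyValue) {
        switch (propertyValue) {
            case SIGNATURE_V2:
                return "S3SignerType";      // SDK 1.x name for the v2 S3 signer
            case SIGNATURE_V4:
                return "AWSS3V4SignerType"; // SDK 1.x name for the v4 S3 signer
            default:
                return null;                // keep whatever the SDK defaults to
        }
    }
}
```

In `AbstractS3Processor::createClient()`, a non-null result would be applied to the client's `ClientConfiguration`, keeping the choice scoped to one processor's client rather than a JVM-wide system property.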
[GitHub] nifi issue #362: NIFI-1769: added support for SSE-KMS and signature s3v4 aut...
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/362 I now better understand that we do not need to do anything to the FetchS3Object processor to read KMS encrypted files. Server-Side Encryption is, after all, on the server. Strike that point.
[GitHub] nifi issue #362: NIFI-1769: added support for SSE-KMS and signature s3v4 aut...
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/362 @miquillo, thanks for putting together this PR; Server-Side Encryption with customer keys is a great feature for NiFi to have. I'm doing some review, and considering the following topics:
* **NiFi's AWS SDK Version** - Since you submitted this PR, Amazon appears to have made signature version 4 the default for S3 in their Java SDK v1.11.0, May 13, 2016. NiFi's current SDK version is 1.10.32 from Nov 3, 2015. Upgrading the SDK would apply much more broadly than this feature, but is maybe not out of line for the v1.0 release. If we choose not to upgrade now, we should keep in mind that the signature version defaults will change in a future upgrade.
* **Controlling Signature Versions** - Regardless of the SDK default, you are absolutely right to make it optional. I'm a bit baffled by why Amazon uses a System property to control this setting rather than a client or request property, and a bit concerned about multiple processors fighting over the signature version. I agree that `System.setProperty(...)` is the [documented method of setting the signature version](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html). But I don't like it; I would prefer to set this more narrowly than a global setting. Are you familiar with [`ClientConfiguration::setSignerOverride()`](http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html#setSignerOverride(java.lang.String))? It appears to allow this, if not so well documented.
* **SSE KMS for FetchS3Object** - We might also want to apply this feature to the FetchS3Object processor, or at least allow for that to happen in the future. Have you considered moving some of the KMS logic to the AbstractS3Processor class?
[GitHub] nifi issue #239: Nifi 1540 - AWS Kinesis Get and Put Processors
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/239 Thanks for noticing that, I didn't realize it would cross-link. That's not what I was hoping for.
[GitHub] nifi issue #532: NIFI-1941: Child group contents in exported templates
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/532 Thanks for tracking that down, it makes sense now that I see your changes. I tested the new changes, did a full build with contrib check (thanks for the earlier fix), and will merge.
[GitHub] nifi issue #532: NIFI-1941: Child group contents in exported templates
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/532 I ran through the following steps to test this fix:
1. Confirm the bug
   a. Built a template on the master/1.0 branch with a process group
   b. Exported the template
   c. Confirmed that the process group content was not in the exported XML file
   d. Confirmed that the template could not be imported into a different NiFi (it imports into the same NiFi OK, because the IDs match?)
2. Verify the fix
   a. Exported the same template using the fix code
   b. Verified that the exported XML contained the process groups
   c. Verified that the template can be imported into a second NiFi
   d. Verified that the template can be run in the second NiFi
Everything worked great up to step 2d, running the imported template in the second NiFi. After CTRL-A selecting everything and clicking Run, I got this nastygram: `LocalPort[name=stuff to log,id=9c17cb62-4df3-4122-ab36-44f290dc2bce] is not a member of this Process Group`. That is a reference to an input port in a Process Group nested in another Process Group. Manually navigating around the flow and starting all of the components worked fine without errors. I get a similar error stopping the flow. The id "9c17..." is not in the template XML file. From poking around at the API it does exist; it is the "stuff to log" input port, as expected in my template. The parent-child relationships between Process Groups and Input Ports looked OK to the untrained eye. What makes this weirder is that I get a comparable error on the original NiFi I exported the template from. So I'm not sure this is a problem specific to this fix. [TestNiFi1941FixTemplate.xml.txt](https://github.com/apache/nifi/files/319518/TestNiFi1941FixTemplate.xml.txt) Matt, are you aware of this issue, and do you experience this importing and running the attached template?
[GitHub] nifi issue #532: NIFI-1941: Child group contents in exported templates
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/532 Reviewing
[GitHub] nifi issue #239: Nifi 1540 - AWS Kinesis Get and Put Processors
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/239 @joewitt, would you please help us with the licensing/notice requirements for using the Kinesis Client Library and Kinesis Producer Library? The Kinesis libraries are licensed under the [Amazon Software License](https://aws.amazon.com/asl/). This does not appear on the published [list of Apache-compatible licenses](http://www.apache.org/legal/resolved.html#category-a). The Apache Spark project includes comparable use of the Kinesis library, although they have chosen to [present their Kinesis integration as an optional add-on](http://spark.apache.org/docs/latest/streaming-kinesis-integration.html). Comparable code is in fact [checked into the Spark repo](https://github.com/apache/spark/tree/master/external), but I did not find mention of the ASL in a NOTICE file. I was really hoping to copy and paste. I found a [JIRA issue raised by the Spark team for the license discussion](https://issues.apache.org/jira/browse/LEGAL-198), which discusses the add-on nature of the component but not the specific referencing language. Is this OK? How can we determine what needs to be added to the NOTICE file in nifi-aws-nar?
[GitHub] nifi issue #239: Nifi 1540 - AWS Kinesis Get and Put Processors
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/239 @mans2singh, the error messages did not contain anything helpful that indicated it was a memory issue, just the text `com.amazonaws.services.kinesis.producer.DaemonException: The child process has been shutdown and can no longer accept messages`. Memory was discussed on one of the KPL issue tickets you found (thanks!), so I tried it out. I'm not sure if we can do anything to handle this, but I don't know Kinesis well. One option would be to try to restart the child process when this error happens. But I may have actually been doing that. I stopped and started the PutKinesisStream processor several times, which recreates the KinesisProducer instance, and the [KinesisProducer javadoc](https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer/src/main/java/com/amazonaws/services/kinesis/producer/KinesisProducer.java) claims this spawns a new child process. This might have actually worked, briefly, just that the new child process died shortly afterwards because of the same memory constraint. So even if we automated that troubleshooting step in PutKinesisStream, the outcome for this particular issue would be the same. Part of transparently using the KPL/KCL is that users will have to know how to troubleshoot them directly. There is no special trick to getting NiFi to work on EC2; their Linux VMs are very similar to any other. I am used to EC2, and find it convenient for testing. EC2's "micro" sized instances, which I use because I am terribly cheap, double as a handy way to test resource constraints. Have you had trouble with it?
[GitHub] nifi issue #239: Nifi 1540 - AWS Kinesis Get and Put Processors
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/239 It appears that my errors were caused by memory constraints on the KPL. With a larger EC2 instance, I was able to run at the provisioned throughput threshold without the KPL process crashing. The processors also worked fine through a shard merge.
[GitHub] nifi issue #239: Nifi 1540 - AWS Kinesis Get and Put Processors
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/239 The suggested use of `name` and `displayName` on PropertyDescriptors has been shared around a lot over the last few days. You can read the [backstory thread](http://mail-archives.apache.org/mod_mbox/nifi-dev/201605.mbox/%3c5a6fdf1e-1889-46fe-a3c4-5d2f0a905...@apache.org%3E) on the best practice and the reasons for it. The short, short version is to provide `name` as a computer-readable key to saved settings in templates and flows, and `displayName` as a human-readable description which may be changed or translated without breaking compatibility with saved data. A good example is this PropertyDescriptor from PutS3Object:

```
public static final PropertyDescriptor SERVER_SIDE_ENCRYPTION = new PropertyDescriptor.Builder()
        .name("server-side-encryption")
        .displayName("Server Side Encryption")
        .description("Specifies the algorithm used for server side encryption.")
        .required(true)
        .allowableValues(NO_SERVER_SIDE_ENCRYPTION, ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION)
        .defaultValue(NO_SERVER_SIDE_ENCRYPTION)
        .build();
```
[GitHub] nifi issue #239: Nifi 1540 - AWS Kinesis Get and Put Processors
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/239 Thanks for your latest changes to the error handling. The changes look OK, and I don't think we need the failure relationship. The integration tests for PutKinesisStream and GetKinesisStream both worked fine. I set up a small flow putting and getting records from a Kinesis stream to test the processors. The processors do work, but I had a rough experience interrupted by various errors that required a NiFi restart to fix. The errors include the following, not necessarily in this sequence:
* ERROR [pool-123-thread-4] c.a.s.kinesis.producer.KinesisProducer Error in child process java.lang.RuntimeException: EOF reached during read
* ERROR [pool-37-thread-1] c.a.s.kinesis.producer.KinesisProducer Error in child process java.lang.RuntimeException: Child process exited with code 137
* ERROR [Timer-Driven Process Thread-2] o.a.n.p.a.k.producer.PutKinesisStream com.amazonaws.services.kinesis.producer.DaemonException: The child process has been shutdown and can no longer accept messages.
Are you familiar with any of these? Once the child process errors show up, the PutKinesisStream processor seems to stop working. I do not have a precise repro sequence yet, but they coincided with throughput around the throttle limit of my Kinesis stream. Stopping and starting the processor did not help. I was running this on an Amazon Linux EC2 instance with permissions for Kinesis, Dynamo, and CloudWatch. I am not sure how to evaluate whether this is a KCL problem or a PutKinesisStream problem. One suggestion I have for the error handling in PutKinesisStream would be to NOT log the entire batch of FlowFiles (PutKinesisStream.java, lines 263, 268, and 272). For example:
```
if (failedFlowFiles.size() > 0) {
    session.transfer(failedFlowFiles, PutKinesisStream.REL_FAILURE);
    getLogger().error("Failed to publish to kinesis {} records {}", new Object[]{stream, failedFlowFiles});
}
```
With the default batch size of 250, 250 x FlowFile::toString() adds up to a very large block of text that makes it difficult to find the error. I'm not sure how helpful the flow file records are. I certainly recommend putting the exception first, and maybe leaving out the files. Would a count of files be OK?
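As an illustration of the count-based message, the log line could summarize the batch instead of dumping it. This is a hypothetical sketch; `KinesisFailureLog` and `buildFailureMessage` are not part of the PR, they just show the shape of the message being suggested:

```java
import java.util.List;

// Hypothetical sketch of the suggested error message: report the stream name
// and a record count rather than concatenating up to 250 FlowFile.toString()s.
public class KinesisFailureLog {
    public static String buildFailureMessage(final String stream, final List<String> failedRecordIds) {
        return String.format("Failed to publish %d records to Kinesis stream %s",
                failedRecordIds.size(), stream);
    }
}
```

The exception itself would still be logged first, as a separate argument, so it stays easy to find.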
[GitHub] nifi issue #239: Nifi 1540 - AWS Kinesis Get and Put Processors
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/239 Checkpointing after catching an exception would keep things moving, I can see the benefit there. But what is the advantage of creating a failed FlowFile over just logging the exception? What would a user do with the FlowFile? Also, given that we had thrown an exception creating a FlowFile on the happy path, how do we safely generate a FlowFile for the failure route? I recommend catching, logging, and checkpointing. It may be worth trying to log a bit more detail, like sequence number, partition, etc., in case those are not included in the exception message.
[GitHub] nifi issue #239: Nifi 1540 - AWS Kinesis Get and Put Processors
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/239 Thanks, @mans2singh, I will review the changes. I don't think the AppVeyor build is reliable; there don't appear to be any recent successful builds. For GetKinesisStream exceptions, I think your approach of logging without creating a FlowFile is correct. I'm not sure we could count on there being anything useful to put in a FlowFile beyond the error message, so the failed FlowFile could not be meaningfully processed except by logging it. I did not find any other Get* processor with a failure route. Have you experienced errors at that stage of Kinesis client processing? I don't have enough experience with Kinesis Streams to know what bad things are going to happen there, so I don't have clear failure scenarios to think through. By checkpointing to the last successfully processed record, I believe the 'bad' record will be retried by the KCL. This seems reasonable, but if NiFi continues to fail on the same record, won't we end up in an infinite retry loop?
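One way to avoid the infinite retry loop would be to cap the number of retries for a given record and checkpoint past it once the cap is hit. This is a minimal sketch with assumed names, not code from this PR; how the skipped record gets surfaced (a log line, a counter) is left open:

```java
// Hypothetical retry cap: track consecutive failures per sequence number and
// skip (i.e. checkpoint past) a record after maxRetries failed attempts.
public class RetryTracker {
    private final int maxRetries;
    private String lastFailedSequence;
    private int attempts;

    public RetryTracker(final int maxRetries) {
        this.maxRetries = maxRetries;
    }

    /** Returns true when the failing record should be skipped instead of retried. */
    public boolean shouldSkip(final String sequenceNumber) {
        if (sequenceNumber.equals(lastFailedSequence)) {
            attempts++;
        } else {
            lastFailedSequence = sequenceNumber;  // a new record resets the count
            attempts = 1;
        }
        return attempts > maxRetries;
    }
}
```

The trade-off is data loss on the skipped record versus a consumer that wedges forever, so the cap would probably need to be a processor property.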
[GitHub] nifi pull request: Nifi 1540 - AWS Kinesis Get and Put Processors
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/239#issuecomment-222062467 Is `REL_FAILURE` used in GetKinesis?
[GitHub] nifi pull request: Nifi 1540 - AWS Kinesis Get and Put Processors
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/239#issuecomment-222057686 Your suggestion about the plain old SDK Kinesis API is interesting, but I guess I don't recommend it. Using the plain SDK would let NiFi handle the multithreading, retries, batching, etc. It would be a win for the NiFi model. But then we would also have to pick up at least some of the KCL's other features, like balancing traffic across shards, or leave that as an exercise for the user. It sounds complicated and hard, and I'm lazy. Also, the KCL is recommended by Amazon, as you pointed out earlier, and not using it might be a point of confusion as to why we didn't do it the "right" way, especially if a new version of the KCL is released with features we hadn't thought of. So even though the KCL is an awkward fit in NiFi, as long as it is an opt-in feature, it seems like a good addition to NiFi's AWS interoperability story. Part of why I would prefer not to change the base AWS processor classes is to keep it opt-in. I'm not sure how the other AWS Get* processors would benefit from the Kinesis-like threading model; they do not have applications or threads to drive activity outside of NiFi's scheduling. But I can see lobbying for a change to `AbstractProcessor` to remove `onTrigger`'s `final` modifier for cases like these without divorcing the class hierarchies. A recently merged [change in the 0.x branch](https://github.com/apache/nifi/commit/de7ecd719a2e9907042628ea3a8283cfe2d4fbac) for NIFI-786 has some shared credential property descriptors and validation logic that you may be able to use, which will hopefully make it easier to implement separate base classes. Let me know if I can help with that.
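The `final` modifier point can be sketched in miniature. The class names below are hypothetical stand-ins, not NiFi's real `AbstractProcessor` (whose `onTrigger` is final and has a different signature); they only illustrate how an overridable trigger would let a KCL-driven processor replace the scheduled behavior without leaving the hierarchy:

```java
// Hypothetical illustration: ordinary processors inherit the default
// trigger behavior, while a Kinesis-style processor overrides it so the
// KCL worker can drive the work instead of NiFi's scheduler.
abstract class BaseProcessorSketch {
    // In NiFi's real AbstractProcessor, the equivalent method is final.
    public String onTrigger() {
        return "scheduled by NiFi";
    }
}

public class KinesisProcessorSketch extends BaseProcessorSketch {
    @Override
    public String onTrigger() {
        return "driven by the KCL worker";
    }
}
```

Whether that flexibility belongs in the shared base class, or only in a Kinesis-specific branch of the hierarchy, is exactly the open question in this thread.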
[GitHub] nifi pull request: Nifi 1540 - AWS Kinesis Get and Put Processors
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/239#issuecomment-221619193 @mans2singh, I'm skeptical about the need to change the class hierarchy for all AWS processors. I understand you want to share a common base class for the Kinesis processors, and to use shared AWS code for credentials, property validation, etc. I also see that AbstractProcessor's functionality is not particularly hard to replicate right now. But that might change in the future, and most of the AWS processors would benefit from the common functionality and compliance without benefiting from the customization. These Kinesis processors seem more an exception to the rule than an indicator of the common needs of AWS processors. As you point out above, the Kinesis producer/consumer processors use a different set of AWS libraries, run an out-of-process native code module, and are driven by different concurrency and flow control concerns. I don't believe these requirements will be shared by any other AWS processors on the near horizon. I don't have any big concerns about your implementation of AbstractBaseAWSProcessor; it appears OK. Our AWS processor class hierarchy is already in need of some repairs, and this could be made to fit. But I'm not sure that should be done in this PR, driven by the Kinesis processors. A few other comments:
- I recommend renaming the processors something like "GetKinesisStream" and "PutKinesisStream", to distinguish them from PutKinesisFirehose and possible future Kinesis processors for their analytics product.
- We should document the AWS permission requirements, or at least link to the AWS docs on the permissions required by the KCL/KPL (Kinesis, DynamoDB, and CloudWatch?).
- There does not appear to be a lot of unit test coverage of the key onTrigger methods and flowfile processing.
I am still working on running the integration tests and doing more detailed code review.
[GitHub] nifi pull request: Nifi 1540 - AWS Kinesis Get and Put Processors
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/239#issuecomment-220657759 @mans2singh, I am a bit behind on this PR, but will try to get up to speed and contribute to the review.
[GitHub] nifi pull request: NIFI-1887 Updating default timeout in Admin Gui...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/447#issuecomment-219530068 @alopresto, thanks for your help. Your cheatsheet is a great idea, I would have missed the 0.x branch without it.
[GitHub] nifi pull request: NIFI-1887 Updating default timeout in Admin Gui...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/447#issuecomment-219509273 My powers are weak :(. I waded through the new committer docs, which seem heavy on Subversion setup, and I have not verified git access. If you don't mind me thrashing on this PR a bit, this does seem like a good place to start. Thanks for suggesting it.
[GitHub] nifi pull request: NIFI-1858 Adding SiteToSiteProvenanceReportingT...
Github user jvwing commented on a diff in the pull request: https://github.com/apache/nifi/pull/419#discussion_r62706651

--- Diff: nifi-nar-bundles/nifi-site-to-site-reporting-bundle/nifi-site-to-site-reporting-task/src/main/java/org/apache/nifi/reporting/SiteToSiteProvenanceReportingTask.java ---

```
@@ -0,0 +1,354 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.reporting;
+
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.controller.status.PortStatus;
+import org.apache.nifi.controller.status.ProcessGroupStatus;
+import org.apache.nifi.controller.status.ProcessorStatus;
+import org.apache.nifi.controller.status.RemoteProcessGroupStatus;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.remote.Transaction;
+import org.apache.nifi.remote.TransferDirection;
+
+import javax.json.Json;
+import javax.json.JsonArray;
+import javax.json.JsonArrayBuilder;
+import javax.json.JsonBuilderFactory;
+import javax.json.JsonObject;
+import javax.json.JsonObjectBuilder;
+import java.io.IOException;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.nio.charset.StandardCharsets;
+import java.text.DateFormat;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TimeZone;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+@Tags({"provenance", "lineage", "tracking", "site", "site to site"})
+@CapabilityDescription("Publishes Provenance events using the Site To Site protocol.")
+@Stateful(scopes = Scope.LOCAL, description = "Stores the Reporting Task's last event Id so that on restart the task knows where it left off.")
+public class SiteToSiteProvenanceReportingTask extends AbstractSiteToSiteReportingTask {
+
+    private static final String TIMESTAMP_FORMAT = "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'";
+    private static final String LAST_EVENT_ID_KEY = "last_event_id";
+
+    static final PropertyDescriptor PLATFORM = new PropertyDescriptor.Builder()
+        .name("Platform")
+        .description("The value to use for the platform field in each provenance event.")
+        .required(true)
+        .expressionLanguageSupported(true)
+        .defaultValue("nifi")
+        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+        .build();
+
+    private volatile long firstEventId = -1L;
+
+    @Override
+    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+        final List<PropertyDescriptor> properties = new ArrayList<>(super.getSupportedPropertyDescriptors());
+        properties.add(PLATFORM);
+        return properties;
+    }
+
+    private String getComponentName(final ProcessGroupStatus status, final ProvenanceEventRecord event) {
+        if (status == null) {
+            return null;
+        }
+
+        final String componentId = event.getComponentId();
+        if (status.getId().equals(componentId)) {
+            return status.getName();
+        }
+
+        for (final ProcessorStatus procStatus : status.getProcessorStatus()) {
+            if (procStatus.getId().equals(compo
```
[GitHub] nifi pull request: NIFI-1711 Client-side JS for proxy-friendly URL...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/316#issuecomment-217775821

Thank you, @mcgilman.
[GitHub] nifi pull request: NIFI-1858 Adding SiteToSiteProvenanceReportingT...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/419#issuecomment-217662827

In my test configuration, it seemed that ROLE_DFM permission was sufficient to enable this controller service and export provenance data. Is that expected?
[GitHub] nifi pull request: NIFI-1858 Adding SiteToSiteProvenanceReportingT...
Github user jvwing commented on a diff in the pull request: https://github.com/apache/nifi/pull/419#discussion_r62421129

--- Diff: nifi-nar-bundles/nifi-site-to-site-reporting-bundle/nifi-site-to-site-reporting-task/src/main/java/org/apache/nifi/reporting/AbstractSiteToSiteReportingTask.java ---

```
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.reporting;
+
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.remote.client.SiteToSiteClient;
+import org.apache.nifi.ssl.SSLContextService;
+
+import javax.net.ssl.SSLContext;
+import java.io.IOException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Base class for ReportingTasks that send data over site-to-site.
+ */
+public abstract class AbstractSiteToSiteReportingTask extends AbstractReportingTask {
+
+    static final PropertyDescriptor DESTINATION_URL = new PropertyDescriptor.Builder()
+        .name("Destination URL")
+        .description("The URL to send the Provenance Events to. For example, to send to a NiFi instance running "
+            + "at http://localhost:8080/nifi this value should be http://localhost:8080")
+        .required(true)
```

--- End diff --

May I ask why you request the destination without "/nifi", but then you add "/nifi" to make a destinationUrl variable on line 130? I found this slightly confusing while testing, mostly because I was also troubleshooting my own destination site-to-site configuration. It works fine as-is. But I noticed that Remote Process Groups ask for the remote URL in the form of "https://remotehost:8080/nifi", and I think there may be some simplicity in copying that pattern.
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/267#issuecomment-216431685

What would you recommend for this pull request? No utility? A simpler hashing utility?
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user jvwing commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r61836807

--- Diff: nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/CredentialsStore.java ---

```
@@ -0,0 +1,229 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.authentication.file;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.InvalidObjectException;
+import java.util.List;
+import javax.xml.XMLConstants;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBElement;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Marshaller;
+import javax.xml.bind.Unmarshaller;
+import javax.xml.bind.ValidationEvent;
+import javax.xml.bind.ValidationEventHandler;
+import javax.xml.transform.stream.StreamSource;
+import javax.xml.validation.Schema;
+import javax.xml.validation.SchemaFactory;
+
+import org.apache.nifi.authentication.file.generated.ObjectFactory;
```

--- End diff --

They are used to serialize and deserialize the XML credentials file. What kind of issues are you experiencing?
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user jvwing commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r61821120

--- Diff: nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/CredentialsCLI.java ---

```
@@ -0,0 +1,207 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.authentication.file;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.util.ArrayList;
+
+import org.apache.nifi.authentication.file.generated.UserCredentials;
+import org.apache.nifi.authentication.file.generated.UserCredentialsList;
+
+
+/**
+ * Command-line interface for working with a {@link CredentialsStore}
+ * persisted as an XML file.
+ *
+ * Usage:
+ *
+ *     list credentials.xml
+ *     add credentials.xml admin password
```

--- End diff --

Thanks, I'll try that method.
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/267#issuecomment-216353947

I rebased the commits on the master branch to resolve conflicts and use the updated LoginIdentityProvider interface and Administrator's Guide content. I apologize if it complicates reviewing. Changes include:

- Improved performance by only reloading the credentials data if the file has been modified
- Provided a command-line utility reference implementation
- Added documentation to the Administrator's Guide
- Included a sample login-credentials.xml file in the conf directory
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/267#issuecomment-212002570

I have been working (slowly) on the suggested improvements for performance and a reference CLI for basic admin operations. I haven't figured out an elegant way of packaging a CLI-executable class in a NAR file, but I agree there should be some basic tool available to generate proper hashes, or at least a demonstration of how to do so. I will add a sample config file and documentation.

I do not propose to add any UI features within this ticket/PR. However, I'm curious how you think that might work. Can plugin NARs add to the web API and UI, or would that imply tighter coupling with the core features? For the moment, I see this as a more peripheral and optional plugin.
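For reference, a hash-generating tool of the kind discussed above could be sketched with Spring Security's BCryptPasswordEncoder, the same encoder the File Identity Provider in this PR uses. This is only an illustration; the class name `BcryptHashTool` and the default strength are assumptions, not the utility shipped with the PR.

```java
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

public class BcryptHashTool {

    // Generate a bcrypt "2a" hash suitable for the credentials file.
    // The default encoder strength (10) yields hashes of the form "$2a$10$...".
    public static String hashPassword(String plaintext) {
        return new BCryptPasswordEncoder().encode(plaintext);
    }

    // Check a plaintext password against a stored bcrypt hash.
    public static boolean verify(String plaintext, String hash) {
        return new BCryptPasswordEncoder().matches(plaintext, hash);
    }

    public static void main(String[] args) {
        String password = args.length > 0 ? args[0] : "password";
        System.out.println(hashPassword(password));
    }
}
```

Because bcrypt salts each hash, `encode` produces a different string every run; verification must go through `matches` rather than string comparison.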
[GitHub] nifi pull request: NIFI-1738 - Repair logger names for ControllerS...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/334#issuecomment-207841740

@Hejki, this looks like a solid fix to me. One thing I suggest you add is some unit tests, which I missed when I "fixed" this a couple of months ago. If that sounds like overkill for such a simple change, well... I used to agree :). I made some [sample tests to verify your fix](https://github.com/jvwing/nifi/commit/004ca21e50dd4f4c98923dbb9a49528ad8544179), you are welcome to use them.
[GitHub] nifi pull request: NIFI-1711 Client-side JS for proxy-friendly URL...
GitHub user jvwing opened a pull request: https://github.com/apache/nifi/pull/316

NIFI-1711 Client-side JS for proxy-friendly URLs

This is a possible fix for generating proxy-friendly URLs in the content viewer using client-side JavaScript rather than calculating the URLs server-side using the proxy headers.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jvwing/nifi NIFI-1711-content-viewer-format-urls

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/316.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #316

commit cb1fec4ecf883732e3e80eddb0a0bae99d7f
Author: James Wing <jvw...@gmail.com>
Date: 2016-03-31T17:58:02Z

NIFI-1711 Client-side JS for proxy-friendly URLs
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user jvwing commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r57017952

--- Diff: nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/FileIdentityProvider.java ---

```
@@ -0,0 +1,216 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.authentication.file;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import javax.xml.XMLConstants;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBElement;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Unmarshaller;
+import javax.xml.bind.ValidationEvent;
+import javax.xml.bind.ValidationEventHandler;
+import javax.xml.transform.stream.StreamSource;
+import javax.xml.validation.Schema;
+import javax.xml.validation.SchemaFactory;
+
+import org.apache.nifi.authentication.AuthenticationResponse;
+import org.apache.nifi.authentication.LoginCredentials;
+import org.apache.nifi.authentication.LoginIdentityProvider;
+import org.apache.nifi.authentication.LoginIdentityProviderConfigurationContext;
+import org.apache.nifi.authentication.LoginIdentityProviderInitializationContext;
+import org.apache.nifi.authentication.exception.IdentityAccessException;
+import org.apache.nifi.authentication.exception.InvalidLoginCredentialsException;
+import org.apache.nifi.authorization.exception.ProviderCreationException;
+import org.apache.nifi.authorization.exception.ProviderDestructionException;
+import org.apache.nifi.authentication.file.generated.UserCredentials;
+import org.apache.nifi.authentication.file.generated.UserCredentialsList;
+import org.apache.nifi.util.FormatUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
+import org.springframework.security.crypto.password.PasswordEncoder;
+
+
+/**
+ * Identity provider for simple username/password authentication backed by a local credentials file. The credentials
+ * file contains usernames and password hashes in bcrypt format. Any compatible bcrypt "2a" implementation may be used
+ * to populate the credentials file.
+ *
+ * The XML format of the credentials file is as follows:
+ *
+ * {@code
+ *
+ *
+ *
+ *
+ *
+ * }
+ *
+ */
+public class FileIdentityProvider implements LoginIdentityProvider {
+
+    static final String PROPERTY_CREDENTIALS_FILE = "Credentials File";
+    static final String PROPERTY_EXPIRATION_PERIOD = "Authentication Expiration";
+
+    private static final Logger logger = LoggerFactory.getLogger(FileIdentityProvider.class);
+    private static final String CREDENTIALS_XSD = "/credentials.xsd";
+    private static final String JAXB_GENERATED_PATH = "org.apache.nifi.authentication.file.generated";
+    private static final JAXBContext JAXB_CONTEXT = initializeJaxbContext();
+
+    private String issuer;
+    private long expirationPeriodMilliseconds;
+    private String credentialsFilePath;
+    private PasswordEncoder passwordEncoder = new BCryptPasswordEncoder();
+    private String identifier;
+
+    private static JAXBContext initializeJaxbContext() {
+        try {
+            return JAXBContext.newInstance(JAXB_GENERATED_PATH, FileIdentityProvider.class.getClassLoader());
+        } catch (JAXBException e) {
+            throw new RuntimeException("Failed creating JAXBContext for " + FileIdentityProvider.class.getCanonicalName());
+        }
+    }
+
+    private static ValidationEventHandler defaultValidationEventHandler = new ValidationEventHandler() {
+        @Override
+        p
```
[GitHub] nifi pull request: NiFi-1481 Enhancement[ nifi.sh env]
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/218#issuecomment-195858721

I have a [suggested fix/hack for dealing with spaces in the Java path in env-nifi.bat](https://github.com/jvwing/nifi/commit/2094080675d0d976fd4cd883d9f97cd10994ba69). The fix converts the TOOLS_JAR path into a short 8.3 path without spaces, so the eventual call to NiFi has only one quoted parameter. The expanded TOOLS_JAR path was not working due to spaces, and two quoted paths make the whole thing blow up for reasons I do not comprehend.

However, this was not enough to get the env feature running on Windows 10. Running it outputs

```
17:58:32.959 [main] DEBUG o.a.n.b.NotificationServiceManager - Found 0 service elements
17:58:32.963 [main] INFO o.a.n.b.NotificationServiceManager - Successfully loaded the following 0 services: []
17:58:32.965 [main] INFO org.apache.nifi.bootstrap.RunNiFi - Registered no Notification Services for Notification Type NIFI_STARTED
17:58:32.987 [main] INFO org.apache.nifi.bootstrap.RunNiFi - Registered no Notification Services for Notification Type NIFI_STOPPED
17:58:32.989 [main] INFO org.apache.nifi.bootstrap.RunNiFi - Registered no Notification Services for Notification Type NIFI_DIED
17:58:33.012 [main] DEBUG org.apache.nifi.bootstrap.Command - Status File: bin\nifi.pid
17:58:33.013 [main] DEBUG org.apache.nifi.bootstrap.Command - Status File: bin\nifi.pid
17:58:33.040 [main] DEBUG org.apache.nifi.bootstrap.Command - Properties: {port=40930}
17:58:33.041 [main] DEBUG org.apache.nifi.bootstrap.Command - Pinging 40930
17:58:33.078 [main] DEBUG org.apache.nifi.bootstrap.Command - Sent PING command
17:58:33.080 [main] DEBUG org.apache.nifi.bootstrap.Command - PING response: PING
17:58:33.081 [main] INFO org.apache.nifi.bootstrap.Command - Apache NiFi is not running
```

The last line, "Apache NiFi is not running", matches RunNiFi.java line 559, where the env() method quits after not finding a PID. Yes, NiFi was running when I tested this. In comparison, `status-nifi.bat` outputs

```
17:49:49.660 [main] DEBUG o.a.n.b.NotificationServiceManager - Found 0 service elements
17:49:49.664 [main] INFO o.a.n.b.NotificationServiceManager - Successfully loaded the following 0 services: []
17:49:49.666 [main] INFO org.apache.nifi.bootstrap.RunNiFi - Registered no Notification Services for Notification Type NIFI_STARTED
17:49:49.690 [main] INFO org.apache.nifi.bootstrap.RunNiFi - Registered no Notification Services for Notification Type NIFI_STOPPED
17:49:49.691 [main] INFO org.apache.nifi.bootstrap.RunNiFi - Registered no Notification Services for Notification Type NIFI_DIED
17:49:49.719 [main] DEBUG org.apache.nifi.bootstrap.Command - Status File: bin\nifi.pid
17:49:49.723 [main] DEBUG org.apache.nifi.bootstrap.Command - Status File: bin\nifi.pid
17:49:49.749 [main] DEBUG org.apache.nifi.bootstrap.Command - Properties: {port=40930}
17:49:49.752 [main] DEBUG org.apache.nifi.bootstrap.Command - Pinging 40930
17:49:49.798 [main] DEBUG org.apache.nifi.bootstrap.Command - Sent PING command
17:49:49.801 [main] DEBUG org.apache.nifi.bootstrap.Command - PING response: PING
17:49:49.804 [main] INFO org.apache.nifi.bootstrap.Command - Apache NiFi is currently running, listening to Bootstrap on port 40930, PID=unknkown
```

which recognizes that NiFi is running based on `isRespondingToPing()` (line 532), and describes the PID as "unknkown". Since it looks like you use the PID in the env method, I'm not immediately sure what to suggest next. It might be possible to work out an "if Windows, do this to get the PID" flow.
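The short-path conversion described above can be sketched as a Windows batch fragment. This is an illustration of the technique, not the actual contents of env-nifi.bat; the variable name TOOLS_JAR follows the discussion, and the trick only works on volumes where 8.3 short-name generation is enabled.

```
rem Replace TOOLS_JAR with its 8.3 short-name equivalent, which contains
rem no spaces, so the final command line needs only one quoted parameter.
rem %%~sA expands the FOR variable A to the short path of the file.
for %%A in ("%TOOLS_JAR%") do set TOOLS_JAR=%%~sA
```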
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user jvwing commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r55884048

--- Diff: nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/FileIdentityProvider.java ---
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user jvwing commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r55872279

--- Diff: nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/FileIdentityProvider.java ---
@@ -0,0 +1,216 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.authentication.file;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import javax.xml.XMLConstants;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBElement;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Unmarshaller;
+import javax.xml.bind.ValidationEvent;
+import javax.xml.bind.ValidationEventHandler;
+import javax.xml.transform.stream.StreamSource;
+import javax.xml.validation.Schema;
+import javax.xml.validation.SchemaFactory;
+
+import org.apache.nifi.authentication.AuthenticationResponse;
+import org.apache.nifi.authentication.LoginCredentials;
+import org.apache.nifi.authentication.LoginIdentityProvider;
+import org.apache.nifi.authentication.LoginIdentityProviderConfigurationContext;
+import org.apache.nifi.authentication.LoginIdentityProviderInitializationContext;
+import org.apache.nifi.authentication.exception.IdentityAccessException;
+import org.apache.nifi.authentication.exception.InvalidLoginCredentialsException;
+import org.apache.nifi.authorization.exception.ProviderCreationException;
+import org.apache.nifi.authorization.exception.ProviderDestructionException;
+import org.apache.nifi.authentication.file.generated.UserCredentials;
+import org.apache.nifi.authentication.file.generated.UserCredentialsList;
+import org.apache.nifi.util.FormatUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
+import org.springframework.security.crypto.password.PasswordEncoder;
+
+
+/**
+ * Identity provider for simple username/password authentication backed by a local credentials file. The credentials
+ * file contains usernames and password hashes in bcrypt format. Any compatible bcrypt "2a" implementation may be used
+ * to populate the credentials file.
+ *
+ * The XML format of the credentials file is as follows:
+ *
+ * {@code
+ *
+ *
+ *
+ *
+ *
+ * }
+ *
+ */
+public class FileIdentityProvider implements LoginIdentityProvider {
+
+    static final String PROPERTY_CREDENTIALS_FILE = "Credentials File";
+    static final String PROPERTY_EXPIRATION_PERIOD = "Authentication Expiration";
+
+    private static final Logger logger = LoggerFactory.getLogger(FileIdentityProvider.class);
+    private static final String CREDENTIALS_XSD = "/credentials.xsd";
+    private static final String JAXB_GENERATED_PATH = "org.apache.nifi.authentication.file.generated";
+    private static final JAXBContext JAXB_CONTEXT = initializeJaxbContext();
+
+    private String issuer;
+    private long expirationPeriodMilliseconds;
+    private String credentialsFilePath;
+    private PasswordEncoder passwordEncoder = new BCryptPasswordEncoder();
+    private String identifier;
+
+    private static JAXBContext initializeJaxbContext() {
+        try {
+            return JAXBContext.newInstance(JAXB_GENERATED_PATH, FileIdentityProvider.class.getClassLoader());
+        } catch (JAXBException e) {
+            throw new RuntimeException("Failed creating JAXBContext for " + FileIdentityProvider.class.getCanonicalName());
+        }
+    }
+
+    private static ValidationEventHandler defaultValidationEventHandler = new ValidationEventHandler() {
+        @Override
+        p
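The provider above compiles a bundled `credentials.xsd` into a `Schema` before unmarshalling with JAXB. The schema-validation half of that pattern can be sketched with JDK-only classes; note the inline XSD and the `credentials`/`user` element names here are illustrative stand-ins, not the actual schema shipped with the provider:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;

public class CredentialsValidationSketch {

    // Hypothetical, simplified stand-in for the provider's credentials.xsd:
    // a <credentials> root holding <user> entries with required attributes.
    static final String XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
      + "  <xs:element name='credentials'>"
      + "    <xs:complexType><xs:sequence>"
      + "      <xs:element name='user' maxOccurs='unbounded'>"
      + "        <xs:complexType>"
      + "          <xs:attribute name='name' type='xs:string' use='required'/>"
      + "          <xs:attribute name='passwordHash' type='xs:string' use='required'/>"
      + "        </xs:complexType>"
      + "      </xs:element>"
      + "    </xs:sequence></xs:complexType>"
      + "  </xs:element>"
      + "</xs:schema>";

    // Validate a credentials document against the schema; validation errors
    // surface as SAXExceptions, which we collapse to a boolean here.
    static boolean isValid(String xml) {
        try {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(XSD)));
            schema.newValidator().validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
```

In the real class the compiled `Schema` is attached to the JAXB `Unmarshaller`, so structural problems are reported through the `ValidationEventHandler` rather than a boolean.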
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user jvwing commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r55870940

--- Diff: nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/FileIdentityProvider.java ---
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user jvwing commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r55868135

--- Diff: nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/FileIdentityProvider.java ---
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user jvwing commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r55863700

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/login-identity-providers.xml ---
@@ -89,4 +89,28 @@ 12 hours To enable the ldap-provider remove 2 lines. This is 2 of 2. --> + + +
--- End diff --

Yes, thanks, it should say 2 of 2. I will change that.

--- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] nifi pull request: Nifi 1516 - AWS DynamoDB Get/Put/Delete Process...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/224#issuecomment-194414769

Thanks for the updates, @mans2singh , I think this pull request looks good (and works good).
[GitHub] nifi pull request: Nifi 1516 - AWS DynamoDB Get/Put/Delete Process...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/224#issuecomment-194031254

I also have a laundry list of nitpicking comments:

* The names of the relationships for GetDynamoDB are `failure`, `success`, `unprocessed`, and `Not Found`. I recommend we change proper case "Not Found" to lowercase "not found" to fit in.
* Description of REL_UNPROCESSED in AbstractDynamoDBProcessor does not clarify why items are unprocessed. I recommend something like:
  > FlowFiles are routed to this relationship when DynamoDB does not process them in the batch. Typical reasons are insufficient table throughput capacity and exceeding the maximum bytes per request. Unprocessed FlowFiles are expected to be retry-able without modification.
* `@Ignore` was commented out on ITPutGetDeleteGetDynamoDBTest.java, line 31, so the integration tests are trying to run. I think this is bothering TravisCI, when it's not running out of memory.
* GetDynamoDB @CapabilityDescription does not specify where the data goes. Also, there is a misspelling "parimary". I recommend something like the following:
  > Retrieves a document from DynamoDB based on hash and range key. The key can be string or number. For any get request all the primary keys are required (hash or hash and range based on the table keys). A Json Document ("Map") attribute of the DynamoDB item is read into the content of the FlowFile.
* PutDynamoDB @CapabilityDescription, same issue. I recommend:
  > Puts a document to DynamoDB based on hash and range key. The table can have either hash and range or hash key alone. Currently the keys supported are string and number and the value can be a json document. In case of hash and range keys, both keys are required for the operation. The FlowFile content must be JSON. FlowFile content is mapped to the specified Json Document attribute in the DynamoDB item.
* Commit 51448fb introduced a single-space whitespace inconsistency in GetDynamoDB.java, line 114 "final String jsonDocument = ..."

Yeah, I think we're down to the whitespace issues. This is looking pretty good to me.
[GitHub] nifi pull request: Nifi 1516 - AWS DynamoDB Get/Put/Delete Process...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/224#issuecomment-194029775

I have [another annoying test for you](https://github.com/jvwing/nifi/commit/95d7620f45398f20f23d6f5e7cfe572616beb8d6), for the following scenario:

1. GetDynamoDB successfully receives an item
2. The item does not have the Json Document attribute expected from the configured properties
3. GetDynamoDB throws a NullPointerException with no helpful explanation. Logged text was:

> 2016-03-08 10:48:40,855 ERROR [Timer-Driven Process Thread-5] o.a.n.p.aws.dynamodb.GetDynamoDB GetDynamoDB[id=86f82c25-0021-4228-a5a6-df7d11c2370e] Could not process flowFiles due to exception : null

I expect this scenario to be fairly normal with the Json Document attribute design since DynamoDB has no schema enforcement. I recommend changing the code to either throw a more helpful exception message (see sample), or to silently accept this by passing empty FlowFile content.
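Either remedy suggested above can be sketched in one guard; the helper and its parameters are hypothetical, not the processor's actual code:

```java
import java.util.Map;

public class JsonDocumentGuard {

    // Hypothetical helper illustrating the two suggested fixes: when the
    // configured Json Document attribute is absent from the returned item,
    // either pass empty FlowFile content (lenient) or fail with a message
    // that names the missing attribute, instead of raising a bare NPE.
    static String readJsonDocument(Map<String, Object> item, String attributeName,
                                   boolean lenient) {
        Object value = item.get(attributeName);
        if (value == null) {
            if (lenient) {
                return "";  // empty FlowFile content
            }
            throw new IllegalStateException("DynamoDB item has no attribute '"
                    + attributeName + "'; attributes present: " + item.keySet());
        }
        return value.toString();
    }
}
```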
[GitHub] nifi pull request: Nifi 1516 - AWS DynamoDB Get/Put/Delete Process...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/224#issuecomment-193512143

I ran into a NullPointerException testing GetDynamoDB with a single unprocessed item consisting of a hash key, but no range key. The NullPointerException is thrown from AbstractDynamoDBProcessor.getAttributeValue() on line 207 when it tries to use the range key AttributeValue, which is null. I copy/paste/modified one of your tests to illustrate, [308d60cf6167b209d0d222bb6a3e1cff25b1e4cd](https://github.com/jvwing/nifi/commit/308d60cf6167b209d0d222bb6a3e1cff25b1e4cd).
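A null-safe version of that lookup can be sketched with a minimal stand-in for the SDK's AttributeValue type (the real fix would live in AbstractDynamoDBProcessor.getAttributeValue(); this class is a simplified model, not the AWS SDK):

```java
public class AttributeValueSketch {

    // Minimal stand-in for the AWS SDK AttributeValue, holding either a
    // string ("S") or number ("N") value.
    static class AttributeValue {
        String s;
        String n;
        AttributeValue s(String v) { this.s = v; return this; }
        AttributeValue n(String v) { this.n = v; return this; }
    }

    // Null-safe extraction: a hash-only table yields no range key, so the
    // AttributeValue may legitimately be null and should map to null rather
    // than a NullPointerException.
    static String valueOf(AttributeValue av) {
        if (av == null) {
            return null;
        }
        return av.s != null ? av.s : av.n;
    }
}
```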
[GitHub] nifi pull request: Nifi 1516 - AWS DynamoDB Get/Put/Delete Process...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/224#issuecomment-192777256

@mans2singh - Sorry I'm slow getting back to you. I have a few comments about the most recent changes:

# Tests

* The unit tests are much improved. Your mock implementation is better than my suggested approach, and there has been a big improvement in code coverage as a result.
* I think we could use one or two mocked tests for GetDynamoDB.
* Test coverage for unprocessed items is low, given that your design targets the batch get/put/delete APIs and unprocessed items will be expected. Earlier today, I thought that would be easily fixed. It turns out I don't have solid knowledge of what unprocessed responses will really look like in practice, and the AWS SDK is more confusing than helpful.

# AbstractWriteDynamoDBProcessor

AbstractWriteDynamoDBProcessor has two methods - handleUnprocessedPutItems and handleUnprocessedDeleteItems - and two subclasses, PutDynamoDB and DeleteDynamoDB. I know that isn't how things started a couple weeks ago, it used to be shared code, but isn't this a sign that we don't need an AbstractWriteDynamoDBProcessor? Can we just move those methods to their respective processors, or do you anticipate sharing them with additional writing processors?

# Relationships

What types of relationships have you considered besides "success" and "failure"? A malformed input "failure" requires correction before retrying, but unprocessed item failures might simply be retried. Also, GetDynamoDB might have a separate relationship for "not found" as opposed to "failure". On the good side, your inclusion of complete response codes in the output attributes makes it possible to filter out the various failure modes. Additional relationships could be added as features later.

I am still working on setting up some tests for unprocessed items.
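The relationship split discussed above (needs-correction vs. retry-able vs. missing) can be sketched as a routing rule; the inputs and enum names are assumptions for illustration, not the processors' actual API:

```java
public class RouteSketch {

    enum Route { SUCCESS, FAILURE, UNPROCESSED, NOT_FOUND }

    // Hypothetical routing rule following the scheme in the review comment:
    // a client/validation error routes to failure (the FlowFile needs
    // correction before retry); an item DynamoDB left unprocessed (e.g.
    // throttling, batch size limits) routes to unprocessed and is safe to
    // retry as-is; a get that finds nothing routes to not found.
    static Route route(int httpStatus, boolean unprocessed, boolean found) {
        if (httpStatus >= 400) {
            return Route.FAILURE;
        }
        if (unprocessed) {
            return Route.UNPROCESSED;
        }
        if (!found) {
            return Route.NOT_FOUND;
        }
        return Route.SUCCESS;
    }
}
```

Keeping the full response code in the output attributes, as the comment notes, lets a downstream flow make the same distinctions even if only success/failure relationships exist.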
[GitHub] nifi pull request: Nifi 1516 - AWS DynamoDB Get/Put/Delete Process...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/224#issuecomment-189425709

I stand corrected. The non-integration unit tests are importing CREDENTIALS_FILE from ITAbstractDynamoDBTest, although the integration test itself may not be called. But I think the unit tests run by TravisCI should pass. Either the tests requiring a file in the user's home folder are truly integration tests and need to be marked as such, or the file needs to get checked in to git. I believe you have checked in a similar test credentials file while working on the AWSCredentialsProviderControllerService, and that seemed to work just fine.

It's also possible that the unit tests do not need a credentials file at all, since they won't make API calls to AWS. Can't we just remove it from the tests?
[GitHub] nifi pull request: Nifi 1516 - AWS DynamoDB Get/Put/Delete Process...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/224#issuecomment-189379141

I believe the unit tests are failing now, for some combination of the following reasons (see the [TravisCI logs](https://s3.amazonaws.com/archive.travis-ci.org/jobs/111920709/log.txt)):

* ITAbstractDynamoDBTest is no longer marked Ignore
* The file referenced in CREDENTIALS_FILE does not exist
* Json Document is now required
[GitHub] nifi pull request: NIFI-1548 Fixing Controller Service Usage Butto...
GitHub user jvwing opened a pull request: https://github.com/apache/nifi/pull/245

NIFI-1548 Fixing Controller Service Usage Button

Recovering the event handling code for the Controller Service and Reporting Task usage buttons.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jvwing/nifi NIFI-1548-controller-service-usage

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/245.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #245

commit 85b29a26c645bbee3eb734ccb99202b6f02ece3f
Author: James Wing <jvw...@gmail.com>
Date: 2016-02-22T23:49:03Z

NIFI-1548 Fixing Controller Service Usage Button
[GitHub] nifi pull request: NIFI-786 AWS credential refactoring and enhance...
GitHub user jvwing opened a pull request: https://github.com/apache/nifi/pull/244

NIFI-786 AWS credential refactoring and enhancements

Changes to AWS credential handling:

* Refactoring the creation of AWS Credentials into a factory class for use by AWSCredentialsProviderControllerService (included) and future shared use by processors or other components.
* Centralized the PropertyDescriptors used for AWS Credential configuration to standardize behavior and improve documentation.
* Improved self-documentation by making Default Credentials an explicit and visible option, while preserving the behavior of using it implicitly if no other credential type is configured.
  * In this commit, the explicit Use Default Credential option defaults to "false", to maintain backward compatibility upgrading to this implementation.
* Credential enhancements
  * New credential option - Named Profile
  * New credential option - Anonymous
  * Added External ID as an optional parameter for Assume Role

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jvwing/nifi NIFI-786-refactor-aws-credentials

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/244.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #244

commit 164ebcd9c89a28ae6c1b40fb009044bc04de3195
Author: James Wing <jvw...@gmail.com>
Date: 2016-02-22T18:14:47Z

NIFI-786 AWS credential refactoring and enhancements
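The factory idea described in the PR can be sketched as a strategy selection; the property names, selection order, and interface here are illustrative assumptions, not the actual descriptors in the PR:

```java
import java.util.Map;

public class CredentialsSelectionSketch {

    // Tiny stand-in for an AWS credentials strategy.
    interface CredentialsStrategy {
        String name();
    }

    // Hypothetical selection mirroring the options listed in the PR:
    // explicit anonymous, named profile, static keys, and finally the
    // implicit default-credentials fallback when nothing else matches.
    static CredentialsStrategy select(Map<String, String> properties) {
        if ("true".equals(properties.get("Use Anonymous Credentials"))) {
            return () -> "anonymous";
        }
        if (properties.containsKey("Profile Name")) {
            return () -> "named profile";
        }
        if (properties.containsKey("Access Key")) {
            return () -> "static keys";
        }
        return () -> "default credentials chain";  // implicit fallback
    }
}
```

Centralizing the selection this way is what lets the controller service and individual processors share one set of PropertyDescriptors and one behavior.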
[GitHub] nifi pull request: Updating RPM build to fix bootstrap dependencie...
Github user jvwing commented on the pull request: https://github.com/apache/nifi/pull/196#issuecomment-177621908

We no longer need this PR.
[GitHub] nifi pull request: Updating RPM build to fix bootstrap dependencie...
GitHub user jvwing opened a pull request: https://github.com/apache/nifi/pull/196

Updating RPM build to fix bootstrap dependencies

This is a potential fix to NIFI-1454 "Built RPMs Do Not Result in Working NiFi Installation". The fix involves adding all dependencies to `lib/bootstrap` by default, then excluding the entire laundry list of NAR file dependencies. This is ugly, with the following pros and cons:

**Pros**:
- It results in a working install

**Cons**:
- Unnecessary duplicate transitive dependencies are left in both `lib` and `lib/bootstrap` as a result of using two `` lists
- Excluding the detailed list of NAR files implies an increase in future nifi-assembly POM maintenance as new modules are added
- Failure to maintain the detailed exclude list will result in more unnecessary duplicate files (but not breaking)

I haven't figured out a more elegant solution, but I believe this is an incremental improvement.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jvwing/nifi nifi-1454-rpm-build

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/196.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #196

commit 54f044b0d97d04621504d13be6e5a7270e5bcf0d
Author: James Wing <jvw...@gmail.com>
Date: 2016-01-31T00:39:21Z

Updating RPM build to fix bootstrap dependencies
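The approach described in the PR can be sketched as a maven-assembly-plugin descriptor fragment; the module names and exact element values below are illustrative assumptions, not the actual nifi-assembly configuration:

```xml
<!-- Hypothetical sketch: pull all dependencies into lib/bootstrap,
     then exclude each NAR artifact by hand. The exclude list must be
     kept in sync as new NAR modules are added. -->
<dependencySets>
  <dependencySet>
    <outputDirectory>lib/bootstrap</outputDirectory>
    <useTransitiveDependencies>true</useTransitiveDependencies>
    <excludes>
      <exclude>org.apache.nifi:nifi-framework-nar</exclude>
      <exclude>org.apache.nifi:nifi-provenance-repository-nar</exclude>
      <!-- one exclude per NAR module, continued for the whole list -->
    </excludes>
  </dependencySet>
</dependencySets>
```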
[GitHub] nifi pull request: NIFI-1283 Fixing ControllerStatusReportingTask ...
GitHub user jvwing opened a pull request: https://github.com/apache/nifi/pull/166

NIFI-1283 Fixing ControllerStatusReportingTask logger name

ControllerStatusReportingTask was using an abbreviated class name to prefix its loggers, "ControllerStatusReportingTask", instead of the fully-qualified name "org.apache.nifi.controller.ControllerStatusReportingTask", as specified in the documentation and standard across reporting tasks in the same package.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jvwing/nifi nifi-1283

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/166.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #166

commit c526656a228389aa972ce7ebc1df037333000516
Author: James Wing <jvw...@gmail.com>
Date: 2016-01-11T21:39:15Z

NIFI-1283 Fixing ControllerStatusReportingTask loggers to use fully-qualified class name
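Why the name matters: logging configuration matches rules against the logger's name, so an abbreviated name escapes any rule keyed on the package hierarchy. A minimal illustration using JDK logging (NiFi itself uses slf4j/logback; the JDK API is used here only so the sketch is self-contained):

```java
import java.util.logging.Logger;

public class LoggerNameSketch {

    // Abbreviated name: a config rule for "org.apache.nifi.controller"
    // never matches this logger.
    static final Logger ABBREVIATED =
            Logger.getLogger("ControllerStatusReportingTask");

    // Fully-qualified name, as the fix uses: inherits any configuration
    // applied to the org.apache.nifi.controller package.
    static final Logger FULLY_QUALIFIED =
            Logger.getLogger("org.apache.nifi.controller.ControllerStatusReportingTask");
}
```

In practice the fix amounts to constructing the logger from the class (`LoggerFactory.getLogger(TheClass.class)`) rather than from a hand-written short string.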