[jira] [Updated] (NIFI-2610) TestProcessorLifecycle class causes brittle builds and appears to be an integration test
[ https://issues.apache.org/jira/browse/NIFI-2610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-2610:
------------------------------
    Description:

The tests in TestProcessorLifecycle appear to attempt to replicate various threading scenarios. Such tests are notoriously difficult to get right, and indeed the build is brittle as a result. These tests are likely valuable and should be improved, but it appears they should be considered integration tests.

Tests run: 16, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 42.708 sec <<< FAILURE! - in org.apache.nifi.controller.scheduling.TestProcessorLifecycle
validateSuccessfullAndOrderlyShutdown(org.apache.nifi.controller.scheduling.TestProcessorLifecycle)  Time elapsed: 6.313 sec  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<2>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:834)
        at org.junit.Assert.assertEquals(Assert.java:645)
        at org.junit.Assert.assertEquals(Assert.java:631)
        at org.apache.nifi.controller.scheduling.TestProcessorLifecycle.validateSuccessfullAndOrderlyShutdown(TestProcessorLifecycle.java:224)

TestStandardProcessScheduler also causes build problems and seems to be of a similar style:

Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 21.447 sec <<< FAILURE! - in org.apache.nifi.controller.scheduling.TestStandardProcessScheduler
validateEnabledDisableMultiThread(org.apache.nifi.controller.scheduling.TestStandardProcessScheduler)  Time elapsed: 5.667 sec  <<< FAILURE!
java.lang.AssertionError: null
        at org.junit.Assert.fail(Assert.java:86)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at org.junit.Assert.assertTrue(Assert.java:52)
        at org.apache.nifi.controller.scheduling.TestStandardProcessScheduler.validateEnabledDisableMultiThread(TestStandardProcessScheduler.java:373)

Brittle tests like this put the build process at risk, which harms the review cycle and complicates release voting.
was:
The tests in TestProcessorLifecycle appear to be attempting to replicate various threading scenarios. Such tests are notoriously difficult to get right, and indeed the build is brittle as a result. These tests are likely valuable and should be improved, but it appears they should be considered integration tests.

Tests run: 16, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 42.708 sec <<< FAILURE! - in org.apache.nifi.controller.scheduling.TestProcessorLifecycle
validateSuccessfullAndOrderlyShutdown(org.apache.nifi.controller.scheduling.TestProcessorLifecycle)  Time elapsed: 6.313 sec  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<2>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:834)
        at org.junit.Assert.assertEquals(Assert.java:645)
        at org.junit.Assert.assertEquals(Assert.java:631)
        at org.apache.nifi.controller.scheduling.TestProcessorLifecycle.validateSuccessfullAndOrderlyShutdown(TestProcessorLifecycle.java:224)

Brittle tests like this risk the build process, which harms the review cycle and complicates release voting.

> TestProcessorLifecycle class causes brittle builds and appears to be an
> integration test
>
>                 Key: NIFI-2610
>                 URL: https://issues.apache.org/jira/browse/NIFI-2610
>             Project: Apache NiFi
>          Issue Type: Bug
>            Reporter: Joseph Witt
>             Fix For: 1.0.0
[jira] [Updated] (NIFI-2610) TestProcessorLifecycle and TestStandardProcessScheduler classes cause brittle builds and appear to be integration tests
[ https://issues.apache.org/jira/browse/NIFI-2610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-2610:
------------------------------
    Summary: TestProcessorLifecycle and TestStandardProcessScheduler classes cause brittle builds and appear to be integration tests  (was: TestProcessorLifecycle class causes brittle builds and appears to be an integration test)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (NIFI-2610) TestProcessorLifecycle class causes brittle builds and appears to be an integration test
Joseph Witt created NIFI-2610:
---------------------------------

             Summary: TestProcessorLifecycle class causes brittle builds and appears to be an integration test
                 Key: NIFI-2610
                 URL: https://issues.apache.org/jira/browse/NIFI-2610
             Project: Apache NiFi
          Issue Type: Bug
            Reporter: Joseph Witt
             Fix For: 1.0.0
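The failure mode reported above (a count assertion coming up short, e.g. `expected:<3> but was:<2>`) is the classic symptom of sleep-based synchronization racing worker threads. A minimal, hypothetical sketch of the latch-based alternative (illustrative names only, not NiFi's actual test code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchedShutdownSketch {

    // Runs `tasks` workers and returns how many completed, waiting on a
    // latch instead of a fixed sleep so the count is deterministic.
    static int runWorkers(int tasks) throws InterruptedException {
        AtomicInteger completed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(tasks);
        ExecutorService pool = Executors.newFixedThreadPool(tasks);
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                completed.incrementAndGet(); // simulated lifecycle step
                done.countDown();
            });
        }
        // A Thread.sleep(...) here would race the workers; awaiting the
        // latch (with a timeout) is what makes the assertion stable.
        if (!done.await(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("workers did not finish in time");
        }
        pool.shutdown();
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorkers(3)); // prints 3
    }
}
```

The timeout on `await` bounds the runtime without introducing a race, which is why latch-style coordination is usually what turns a brittle threading test into a stable one.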
[jira] [Commented] (NIFI-1831) Allow encrypted passwords in configuration files
[ https://issues.apache.org/jira/browse/NIFI-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428998#comment-15428998 ]

ASF GitHub Bot commented on NIFI-1831:
--------------------------------------

Github user alopresto commented on the issue:

    https://github.com/apache/nifi/pull/834

    @joewitt @mattyb149 @bbende @brosander As of right now, this PR (commit `ffab01d`) is rebased against the latest master. The tool is found in `nifi-toolkit/nifi-toolkit-assembly/target/nifi-toolkit-1.0.0-SNAPSHOT-bin/nifi-toolkit-1.0.0-SNAPSHOT/`.

    Running a command like the ones below will read an existing _nifi.properties_ file, encrypt all sensitive non-empty values using the provided _key_, populate those values (and the associated protection schemes -- `x.y.z.protected=aes/gcm/256`) into the new _nifi-encrypted.properties_ file, and persist the key in _bootstrap.conf_.

    * `$ ./bin/encrypt-config.sh -h` -- prints a usage message
    * `$ ./bin/encrypt-config.sh -b path/to/bootstrap.conf -n path/to/nifi.properties -o path/to/nifi-encrypted.properties -p thisIsABadPropertiesPassword` -- normal use as described above
    * `$ ./bin/encrypt-config.sh -b path/to/bootstrap.conf -n path/to/nifi.properties -o path/to/nifi-encrypted.properties -k 0123456789ABCDEFFEDCBA98765432100123456789ABCDEFFEDCBA9876543210` -- normal use with a raw hex key instead of a password
    * `$ ./bin/encrypt-config.sh -b path/to/bootstrap.conf -n path/to/nifi.properties -o path/to/nifi-encrypted.properties` -- normal use, but prompts for the key in a secure console read

    By default, it considers *sensitive* properties to be anything that would be a password or key:

    * `nifi.sensitive.props.key`
    * `nifi.security.keystorePasswd`
    * `nifi.security.keyPasswd`
    * `nifi.security.truststorePasswd`

    You can mark additional keys as *sensitive* by including them in a comma- or semicolon-delimited string as follows (do this by hand in the input _nifi.properties_ before running the tool):

    `nifi.sensitive.props.additional.keys=nifi.ui.banner.text`

    Example: *before* -- `~/Workspace/scratch/encrypted-configs/nifi.properties`

    ```
    nifi.ui.banner.text=This is the banner text
    ...
    # security properties #
    nifi.sensitive.props.key=
    nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
    nifi.sensitive.props.provider=BC
    nifi.sensitive.props.additional.keys=nifi.ui.banner.text
    nifi.security.keystore=keystore.jks
    nifi.security.keystoreType=jks
    nifi.security.keystorePasswd=thisIsABadKeystorePassword
    nifi.security.keyPasswd=thisIsABadKeyPassword
    nifi.security.truststore=truststore.jks
    nifi.security.truststoreType=jks
    nifi.security.truststorePasswd=thisIsABadTruststorePassword
    nifi.security.needClientAuth=
    nifi.security.user.authorizer=file-provider
    nifi.security.user.login.identity.provider=
    nifi.security.ocsp.responder.url=
    nifi.security.ocsp.responder.certificate=
    ...
    ```

    *run tool* --

    ```
    hw12203:...assembly/target/nifi-toolkit-1.0.0-SNAPSHOT-bin/nifi-toolkit-1.0.0-SNAPSHOT (NIFI-1831) alopresto 167s @ 15:49:34
    $ ./bin/encrypt-config.sh -b ~/Workspace/scratch/encrypted-configs/bootstrap.conf -n ~/Workspace/scratch/encrypted-configs/nifi.properties -o ~/Workspace/scratch/encrypted-configs/nifi-encrypted.properties -p thisIsABadPropertiesPassword
    2016-08-19 15:57:48,097 INFO [main] o.a.nifi.properties.ConfigEncryptionTool Invoked ConfigEncryptionTool with args [-b,/Users/alopresto/Workspace/scratch/encrypted-configs/bootstrap.conf,-n,/Users/alopresto/Workspace/scratch/encrypted-configs/nifi.properties,-o,/Users/alopresto/Workspace/scratch/encrypted-configs/nifi-encrypted.properties,-p,thisIsABadPropertiesPassword]
    2016-08-19 15:57:48,794 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader Loaded 112 properties from /Users/alopresto/Workspace/scratch/encrypted-configs/nifi.properties
    2016-08-19 15:57:48,796 INFO [main] o.a.n.properties.ProtectedNiFiProperties Loaded 112 properties (including 0 protection schemes) into ProtectedNiFiProperties
    2016-08-19 15:57:48,800 INFO [main] o.a.nifi.properties.ConfigEncryptionTool Loaded NiFiProperties instance with 112 properties
    2016-08-19 15:57:48,805 INFO [main] o.a.n.properties.ProtectedNiFiProperties Loaded 112 properties (including 0 protection schemes) into ProtectedNiFiProperties
    2016-08-19 15:57:49,149 INFO [main] o.a.n.p.AESSensitivePropertyProvider AES Sensitive Property Provider encrypted a sensitive value successfully
    2016-08-19 15:57:49,151 INFO [main] o.a.nifi.properties.ConfigEncryptionTool Protected nifi.ui.banner.text with aes/gcm/256 -> 2ZJZaFqqXl62HB5w||I57IDLE7hYJf2vJmrkC29ZjDztRJT00CVV1QkDiGte4VIfUB+n2X
    2016-08-19 15:57:49,152 INFO [main]
    ```
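For illustration, the `aes/gcm/256` protection scheme and the `iv||ciphertext` value shape visible in the log above can be sketched with the JDK's own cipher API. This is a hypothetical sketch, not NiFi's actual `AESSensitivePropertyProvider`; `protectValue` and the random-key handling are assumptions (the real tool derives the key from `-p`/`-k`):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

public class PropertyEncryptSketch {

    // Hypothetical helper: encrypt one property value with AES/GCM (256-bit
    // key) and return it in the iv||ciphertext shape seen in the log output.
    static String protectValue(byte[] key, String plaintext) throws Exception {
        byte[] iv = new byte[12]; // 96-bit GCM nonce, fresh per value
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(128, iv)); // 128-bit auth tag
        byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(iv)
                + "||" + Base64.getEncoder().encodeToString(ct);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[32]; // 256-bit key (illustrative; not how the tool sources it)
        new SecureRandom().nextBytes(key);
        String protectedValue = protectValue(key, "thisIsABadKeystorePassword");
        System.out.println("nifi.security.keystorePasswd=" + protectedValue);
        System.out.println("nifi.security.keystorePasswd.protected=aes/gcm/256");
    }
}
```

Carrying the nonce alongside the ciphertext is what lets the value be decrypted later with only the key from _bootstrap.conf_.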
[GitHub] nifi issue #834: NIFI-1831 Implemented encrypted configuration capabilities
Github user alopresto commented on the issue:

    https://github.com/apache/nifi/pull/834
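The default sensitive-key list and the comma/semicolon-delimited `nifi.sensitive.props.additional.keys` behavior described in the comment above could be modeled roughly as follows. This is a sketch; `sensitiveKeys` is a hypothetical helper, not the tool's real API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SensitiveKeySketch {

    // The four default sensitive keys listed in the comment above.
    static final List<String> DEFAULT_SENSITIVE = Arrays.asList(
            "nifi.sensitive.props.key",
            "nifi.security.keystorePasswd",
            "nifi.security.keyPasswd",
            "nifi.security.truststorePasswd");

    // Hypothetical helper: merge the defaults with the comma- or
    // semicolon-delimited nifi.sensitive.props.additional.keys value.
    static List<String> sensitiveKeys(String additional) {
        List<String> keys = new ArrayList<>(DEFAULT_SENSITIVE);
        if (additional != null && !additional.isEmpty()) {
            for (String k : additional.split("[,;]")) {
                keys.add(k.trim());
            }
        }
        return keys;
    }

    public static void main(String[] args) {
        List<String> keys = sensitiveKeys("nifi.ui.banner.text; nifi.custom.secret");
        System.out.println(keys.contains("nifi.ui.banner.text")); // prints true
        System.out.println(keys.size());                          // prints 6
    }
}
```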
[jira] [Commented] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428857#comment-15428857 ]

Joe Skora commented on NIFI-2411:
---------------------------------

Actually, I tested it with 6GB input files and it worked great, with and without Remove All Content. It's the way the Mock framework reads the entire file into memory using a {{byte[]}} that limits it to 2G (Integer.MAX_VALUE, actually). If you have any thoughts on how to test in a streaming way, I'm glad to look into that, but I couldn't find any examples.

> ModifyBytes should use long instead of int for offsets.
> -------------------------------------------------------
>
>                 Key: NIFI-2411
>                 URL: https://issues.apache.org/jira/browse/NIFI-2411
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 1.0.0, 0.7.0
>            Reporter: Joe Skora
>            Assignee: Joseph Witt
>              Labels: easyfix
>             Fix For: 1.0.0, 0.8.0
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> ModifyBytes.onTrigger() uses a Java 32-bit {{int}} value for byte offsets, limiting it to 2 gigabytes; switching to {{long}} values will allow it to handle up to Long.MAX_VALUE bytes (roughly 8 exbibytes).
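The 2G ceiling mentioned above falls directly out of Java integer arithmetic: an `int` byte offset silently truncates for any content larger than `Integer.MAX_VALUE` bytes, which is what moving to `long` fixes. A small sketch:

```java
public class OffsetOverflowSketch {
    public static void main(String[] args) {
        long fileSize = 6L * 1024 * 1024 * 1024; // a 6 GB flowfile, like the test above

        // Storing the offset in a 32-bit int keeps only the low 32 bits,
        // so anything past Integer.MAX_VALUE (~2 GiB) is silently corrupted:
        int badOffset = (int) fileSize;
        System.out.println(badOffset);  // prints -2147483648

        // A 64-bit long holds the full offset:
        long goodOffset = fileSize;
        System.out.println(goodOffset); // prints 6442450944
    }
}
```

The same truncation is why the mock framework's `byte[]` content buffer caps out at `Integer.MAX_VALUE` bytes: Java array indices are `int`s, so no larger backing array can even be allocated.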
[jira] [Updated] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-2411:
------------------------------
    Fix Version/s: 0.8.0
                   1.0.0
[jira] [Commented] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428845#comment-15428845 ]

Joseph Witt commented on NIFI-2411:
-----------------------------------

assigned to myself to review. [~jskora] I'm comfortable with this change not requiring unit test alterations frankly. I surely don't want you making 2GB+ content to test it :-)
[jira] [Assigned] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt reassigned NIFI-2411:
---------------------------------
    Assignee: Joseph Witt  (was: Joe Skora)
[jira] [Updated] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joe Skora updated NIFI-2411:
----------------------------
    Status: Patch Available  (was: Open)

[~joewitt], thanks, yes they are ready to go. 0.x is [PR #903|https://github.com/apache/nifi/pull/903] and 1.x is [PR #904|https://github.com/apache/nifi/pull/904]. I would like to have added unit tests, but the Mock* framework uses {{byte[]}} to hold the data, limiting it to precisely the file-size limit this change is trying to eliminate. I looked at enhancing the Mock* classes, but that would actually be a bigger change than this was.
[jira] [Commented] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428733#comment-15428733 ]

Joseph Witt commented on NIFI-2411:
-----------------------------------

[~jskora] I just assumed - but is this all set for review? I should have waited until it said 'Patch Available' - if it is ready please click 'Submit Patch' to signal that. My bad
[jira] [Updated] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-2411:
------------------------------
    Assignee: Joe Skora  (was: Joseph Witt)
[jira] [Updated] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-2411:
------------------------------
    Fix Version/s: (was: 0.8.0)
                   (was: 1.0.0)
[jira] [Commented] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428731#comment-15428731 ]

Joseph Witt commented on NIFI-2411:
-----------------------------------

will review
[jira] [Updated] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-2411:
------------------------------
    Component/s: (was: Core Framework)
                 Extensions
[jira] [Assigned] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt reassigned NIFI-2411:
---------------------------------
    Assignee: Joseph Witt  (was: Joe Skora)
[jira] [Updated] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-2411:
------------------------------
    Fix Version/s: 0.8.0
                   1.0.0
[GitHub] nifi pull request #904: NIFI-2411 ModifyBytes should use long instead of int...
GitHub user jskora opened a pull request:

    https://github.com/apache/nifi/pull/904

    NIFI-2411 ModifyBytes should use long instead of int for offsets (1.x)

    * Update to support offsets larger than 2 gigabytes.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jskora/nifi NIFI-2411-1.x

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi/pull/904.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #904

----
commit f47d04e5cf3620de833d8c53161214564750dacf
Author: Joe Skora
Date:   2016-08-19T19:49:11Z

    Update to support offsets larger than 2 gigabytes.

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.
---
[jira] [Commented] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428727#comment-15428727 ]

ASF GitHub Bot commented on NIFI-2411:
--------------------------------------

GitHub user jskora opened a pull request: https://github.com/apache/nifi/pull/904 - NIFI-2411 ModifyBytes should use long instead of int for offsets (1.x)
[GitHub] nifi pull request #903: NIFI-2411 ModifyBytes should use long instead of int...
GitHub user jskora opened a pull request: https://github.com/apache/nifi/pull/903 NIFI-2411 ModifyBytes should use long instead of int for offsets (0.x) * Update to support offsets larger than 2 gigabyte. You can merge this pull request into a Git repository by running: $ git pull https://github.com/jskora/nifi NIFI-2411-0.x Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/903.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #903 commit a0e1efd7a3ac1b96dbe63e954dff977084d7c1f9 Author: Joe Skora Date: 2016-08-19T19:06:23Z * Update to support offsets larger than 2 gigabyte.
[jira] [Commented] (NIFI-2411) ModifyBytes should use long instead of int for offsets.
[ https://issues.apache.org/jira/browse/NIFI-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428726#comment-15428726 ] ASF GitHub Bot commented on NIFI-2411: -- GitHub user jskora opened a pull request: https://github.com/apache/nifi/pull/903 NIFI-2411 ModifyBytes should use long instead of int for offsets (0.x) * Update to support offsets larger than 2 gigabyte. You can merge this pull request into a Git repository by running: $ git pull https://github.com/jskora/nifi NIFI-2411-0.x Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/903.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #903 commit a0e1efd7a3ac1b96dbe63e954dff977084d7c1f9 Author: Joe SkoraDate: 2016-08-19T19:06:23Z * Update to support offsets larger than 2 gigabyte. > ModifyBytes should use long instead of int for offsets. > --- > > Key: NIFI-2411 > URL: https://issues.apache.org/jira/browse/NIFI-2411 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0, 0.7.0 >Reporter: Joe Skora >Assignee: Joe Skora > Labels: easyfix > Original Estimate: 2h > Remaining Estimate: 2h > > ModifyBytes.onTrigger() uses Java 32 bit {{int}} value for byte offsets > limiting it to 2 Gigabytes, switching to {{long}} values will allow it to > handle up to 15 Exabytes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (NIFI-2609) Issue determining appropriate site to site URL
[ https://issues.apache.org/jira/browse/NIFI-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-2609: - Assignee: Mark Payne Status: Patch Available (was: Open) > Issue determining appropriate site to site URL > -- > > Key: NIFI-2609 > URL: https://issues.apache.org/jira/browse/NIFI-2609 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Matt Gilman >Assignee: Mark Payne > Fix For: 1.0.0 > > Attachments: Screen Shot 2016-08-19 at 10.15.49 AM.png > > > NiFi should be more lenient in terms of what URL is entered when creating a > RemoteProcessGroup. The user should be able to enter either > http://: or http://:/nifi without issue. Currently > the former leads to a very confusing error. See attached screenshot.
[jira] [Commented] (NIFI-2609) Issue determining appropriate site to site URL
[ https://issues.apache.org/jira/browse/NIFI-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428703#comment-15428703 ] ASF GitHub Bot commented on NIFI-2609: -- GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/902 NIFI-2609: Ensure that we handle URIs for Remote Process Groups that … …do not have a path of /nifi or /nifi/ You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi NIFI-2609 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/902.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #902 commit 58311040cbc1518264a7b53173e0e52d1e6d25f3 Author: Mark PayneDate: 2016-08-19T19:34:15Z NIFI-2609: Ensure that we handle URIs for Remote Process Groups that do not have a path of /nifi or /nifi/ > Issue determining appropriate site to site URL > -- > > Key: NIFI-2609 > URL: https://issues.apache.org/jira/browse/NIFI-2609 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Matt Gilman > Fix For: 1.0.0 > > Attachments: Screen Shot 2016-08-19 at 10.15.49 AM.png > > > NiFi should be more lenient in terms of what URL is entered when creating a > RemoteProcessGroup. The user should be able to enter either > http://: or http://:/nifi without issue. Currently > the former leads to a very confusing error. See attached screenshot. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi pull request #902: NIFI-2609: Ensure that we handle URIs for Remote Pro...
GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/902 NIFI-2609: Ensure that we handle URIs for Remote Process Groups that … …do not have a path of /nifi or /nifi/ You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi NIFI-2609 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/902.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #902 commit 58311040cbc1518264a7b53173e0e52d1e6d25f3 Author: Mark Payne Date: 2016-08-19T19:34:15Z NIFI-2609: Ensure that we handle URIs for Remote Process Groups that do not have a path of /nifi or /nifi/
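The kind of URL leniency NIFI-2609 asks for can be sketched with the standard `java.net.URI` API. This is a hypothetical illustration, not the code from PR #902: it accepts either `http://host:port` or `http://host:port/nifi` (with or without a trailing slash) and resolves both to the same base URL.

```java
import java.net.URI;

// Hypothetical sketch of NIFI-2609-style URL normalization (not NiFi's actual code):
// strip an optional "/nifi" path so both accepted input forms map to one base URI.
public class SiteToSiteUrlDemo {
    static String normalize(String input) {
        URI uri = URI.create(input.trim());
        String path = uri.getPath();
        // Treat "/nifi" and "/nifi/" as equivalent to no path at all.
        if (path != null && (path.equals("/nifi") || path.equals("/nifi/"))) {
            path = "";
        }
        String portPart = uri.getPort() == -1 ? "" : ":" + uri.getPort();
        return uri.getScheme() + "://" + uri.getHost() + portPart + path;
    }

    public static void main(String[] args) {
        System.out.println(normalize("http://host:8080"));       // http://host:8080
        System.out.println(normalize("http://host:8080/nifi"));  // http://host:8080
        System.out.println(normalize("http://host:8080/nifi/")); // http://host:8080
    }
}
```

With normalization like this, the confusing error in the attached screenshot would not occur for either input form.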
[GitHub] nifi pull request #850: NIFI-2547: Add DeleteHDFS Processor
Github user rickysaltzer commented on a diff in the pull request: https://github.com/apache/nifi/pull/850#discussion_r75538007 --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/test/java/org/apache/nifi/processors/hadoop/TestDeleteHDFS.java --- @@ -0,0 +1,187 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.hadoop; + +import static org.junit.Assert.assertEquals; +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +import java.io.IOException; +import java.util.List; +import java.util.Map; + +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.hadoop.KerberosProperties; +import org.apache.nifi.util.MockFlowFile; +import org.apache.nifi.util.NiFiProperties; +import org.apache.nifi.util.TestRunner; +import org.apache.nifi.util.TestRunners; +import org.junit.Before; +import org.junit.Test; + +import com.google.common.collect.Maps; + +public class TestDeleteHDFS { +private NiFiProperties mockNiFiProperties; +private FileSystem mockFileSystem; +private KerberosProperties kerberosProperties; + +@Before +public void setup() throws Exception { +mockNiFiProperties = mock(NiFiProperties.class); + when(mockNiFiProperties.getKerberosConfigurationFile()).thenReturn(null); +kerberosProperties = KerberosProperties.create(mockNiFiProperties); +mockFileSystem = mock(FileSystem.class); --- End diff -- I could rewrite it to use the local fs, but I was just going off how the other tests behaved. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (NIFI-2547) Add DeleteHDFS Processor
[ https://issues.apache.org/jira/browse/NIFI-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428687#comment-15428687 ] ASF GitHub Bot commented on NIFI-2547: -- Github user rickysaltzer commented on a diff in the pull request: https://github.com/apache/nifi/pull/850#discussion_r75538007 --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/test/java/org/apache/nifi/processors/hadoop/TestDeleteHDFS.java --- @@ -0,0 +1,187 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.hadoop; + +import static org.junit.Assert.assertEquals; +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +import java.io.IOException; +import java.util.List; +import java.util.Map; + +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.hadoop.KerberosProperties; +import org.apache.nifi.util.MockFlowFile; +import org.apache.nifi.util.NiFiProperties; +import org.apache.nifi.util.TestRunner; +import org.apache.nifi.util.TestRunners; +import org.junit.Before; +import org.junit.Test; + +import com.google.common.collect.Maps; + +public class TestDeleteHDFS { +private NiFiProperties mockNiFiProperties; +private FileSystem mockFileSystem; +private KerberosProperties kerberosProperties; + +@Before +public void setup() throws Exception { +mockNiFiProperties = mock(NiFiProperties.class); + when(mockNiFiProperties.getKerberosConfigurationFile()).thenReturn(null); +kerberosProperties = KerberosProperties.create(mockNiFiProperties); +mockFileSystem = mock(FileSystem.class); --- End diff -- I could rewrite it to use the local fs, but I was just going off how the other tests behaved. > Add DeleteHDFS Processor > - > > Key: NIFI-2547 > URL: https://issues.apache.org/jira/browse/NIFI-2547 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Ricky Saltzer >Assignee: Ricky Saltzer > > There are times where a user may want to remove a file or directory from > HDFS. The reasons for this vary, but to provide some context, I currently > have a pipeline where I need to periodically delete files that my NiFi > pipeline is producing. In my case, it's a "Delete files after they are 7 days > old". 
> Currently, I have to use the {{ExecuteStreamCommand}} processor and manually > call {{hdfs dfs -rm}}, which is awful when dealing with a large amount of > files. For one, an entire JVM is spun up for each delete, and two, when > deleting directories with thousands of files, it can sometimes cause the > command to hang indefinitely. > With that being said, I am proposing we add a {{DeleteHDFS}} processor which > meets the following criteria. > * Can delete both directories and files > * Can delete directories recursively > * Supports the dynamic expression language > * Supports using glob paths (e.g. /data/for/2017/08/*) > * Capable of being a downstream processor as well as a standalone processor
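The glob-plus-recursive-delete behavior requested above can be sketched against the local filesystem with `java.nio.file`. This is an analogue for illustration only, not the DeleteHDFS implementation: on HDFS the equivalent calls would be `FileSystem.globStatus` and `FileSystem.delete(path, recursive)`.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Comparator;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Local-filesystem analogue of the requested DeleteHDFS behavior (demo code):
// expand a glob under a directory, then delete each match, recursing into
// directories when asked, much as FileSystem.delete(path, true) does on HDFS.
public class GlobDeleteDemo {
    static int deleteGlob(Path dir, String glob, boolean recursive) throws IOException {
        int deleted = 0;
        try (DirectoryStream<Path> matches = Files.newDirectoryStream(dir, glob)) {
            for (Path match : matches) {
                if (Files.isDirectory(match) && recursive) {
                    // Depth-first, deepest entries first, so children go before parents.
                    try (Stream<Path> walk = Files.walk(match)) {
                        for (Path p : walk.sorted(Comparator.reverseOrder())
                                          .collect(Collectors.toList())) {
                            Files.delete(p);
                            deleted++;
                        }
                    }
                } else if (!Files.isDirectory(match)) {
                    Files.delete(match);
                    deleted++;
                }
            }
        }
        return deleted;
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("globdemo");
        Files.createFile(root.resolve("data-2017-08-01.log"));
        Files.createFile(root.resolve("data-2017-08-02.log"));
        Files.createFile(root.resolve("keep.txt"));
        System.out.println(deleteGlob(root, "data-*.log", true)); // 2
    }
}
```

Unlike shelling out to `hdfs dfs -rm` via {{ExecuteStreamCommand}}, an in-process approach like this pays no per-delete JVM startup cost.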
[jira] [Commented] (NIFI-2547) Add DeleteHDFS Processor
[ https://issues.apache.org/jira/browse/NIFI-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428685#comment-15428685 ] ASF GitHub Bot commented on NIFI-2547: -- Github user rickysaltzer commented on a diff in the pull request: https://github.com/apache/nifi/pull/850#discussion_r75537912 --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/DeleteHDFS.java --- @@ -0,0 +1,161 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.hadoop; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; + +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.TriggerWhenEmpty; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; + +import com.google.common.collect.Lists; +import com.google.common.collect.Maps; + +@TriggerWhenEmpty +@InputRequirement(InputRequirement.Requirement.INPUT_ALLOWED) +@Tags({ "hadoop", "HDFS", "delete", "remove", "filesystem" }) +@CapabilityDescription("Deletes a file from HDFS. The file can be provided as an attribute from an incoming FlowFile, " ++ "or a statically set file that is periodically removed. If this processor has an incoming connection, it" ++ "will ignore running on a periodic basis and instead rely on incoming FlowFiles to trigger a delete. 
" ++ "Optionally, you may use a wildcard character to match multiple files or directories.") +public class DeleteHDFS extends AbstractHadoopProcessor { +public static final Relationship REL_SUCCESS = new Relationship.Builder() +.name("success") +.description("FlowFiles will be routed here if the delete command was successful") +.build(); + +public static final Relationship REL_FAILURE = new Relationship.Builder() +.name("failure") +.description("FlowFiles will be routed here if the delete command was unsuccessful") +.build(); + +public static final PropertyDescriptor FILE_OR_DIRECTORY = new PropertyDescriptor.Builder() +.name("File or Directory") +.description("The HDFS file or directory to delete. A wildcard expression may be used to only delete certain files") +.required(true) +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +.expressionLanguageSupported(true) +.build(); + +public static final PropertyDescriptor RECURSIVE = new PropertyDescriptor.Builder() +.name("Recursive") +.description("Remove contents of a non-empty directory recursively") +.allowableValues("true", "false") +.required(true) +.defaultValue("true") +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +.build(); + +private static final Set<Relationship> relationships; + +static { +final Set<Relationship> relationshipSet = new HashSet<>(); +relationshipSet.add(REL_SUCCESS); +relationshipSet.add(REL_FAILURE); +relationships = Collections.unmodifiableSet(relationshipSet);
[GitHub] nifi pull request #850: NIFI-2547: Add DeleteHDFS Processor
Github user rickysaltzer commented on a diff in the pull request: https://github.com/apache/nifi/pull/850#discussion_r75537912 --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/DeleteHDFS.java --- @@ -0,0 +1,161 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.hadoop; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; + +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.TriggerWhenEmpty; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; + +import com.google.common.collect.Lists; +import com.google.common.collect.Maps; + +@TriggerWhenEmpty +@InputRequirement(InputRequirement.Requirement.INPUT_ALLOWED) +@Tags({ "hadoop", "HDFS", "delete", "remove", "filesystem" }) +@CapabilityDescription("Deletes a file from HDFS. The file can be provided as an attribute from an incoming FlowFile, " ++ "or a statically set file that is periodically removed. If this processor has an incoming connection, it" ++ "will ignore running on a periodic basis and instead rely on incoming FlowFiles to trigger a delete. 
" ++ "Optionally, you may specify use a wildcard character to match multiple files or directories.") +public class DeleteHDFS extends AbstractHadoopProcessor { +public static final Relationship REL_SUCCESS = new Relationship.Builder() +.name("success") +.description("FlowFiles will be routed here if the delete command was successful") +.build(); + +public static final Relationship REL_FAILURE = new Relationship.Builder() +.name("failure") +.description("FlowFiles will be routed here if the delete command was unsuccessful") +.build(); + +public static final PropertyDescriptor FILE_OR_DIRECTORY = new PropertyDescriptor.Builder() +.name("File or Directory") +.description("The HDFS file or directory to delete. A wildcard expression may be used to only delete certain files") +.required(true) +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +.expressionLanguageSupported(true) +.build(); + +public static final PropertyDescriptor RECURSIVE = new PropertyDescriptor.Builder() +.name("Recursive") +.description("Remove contents of a non-empty directory recursively") +.allowableValues("true", "false") +.required(true) +.defaultValue("true") +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +.build(); + +private static final Set relationships; + +static { +final Set relationshipSet = new HashSet<>(); +relationshipSet.add(REL_SUCCESS); +relationshipSet.add(REL_FAILURE); +relationships = Collections.unmodifiableSet(relationshipSet); +} + +@Override +protected List getSupportedPropertyDescriptors() { +List props = new ArrayList<>(properties); +props.add(FILE_OR_DIRECTORY); +props.add(RECURSIVE);
[jira] [Commented] (NIFI-2609) Issue determining appropriate site to site URL
[ https://issues.apache.org/jira/browse/NIFI-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428636#comment-15428636 ] Matt Gilman commented on NIFI-2609: --- There's a chance that is what the response says. > Issue determining appropriate site to site URL > -- > > Key: NIFI-2609 > URL: https://issues.apache.org/jira/browse/NIFI-2609 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Matt Gilman > Fix For: 1.0.0 > > Attachments: Screen Shot 2016-08-19 at 10.15.49 AM.png > > > NiFi should be more lenient in terms of what URL is entered when creating a > RemoteProcessGroup. The user should be able to enter either > http://: or http://:/nifi without issue. Currently > the former leads to a very confusing error. See attached screenshot.
[jira] [Commented] (NIFI-1867) improve ModifyBytes to make it easy to remove all flowfile content
[ https://issues.apache.org/jira/browse/NIFI-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428615#comment-15428615 ] Joseph Witt commented on NIFI-1867: --- removing 0.7.1. It was not pushed there nor should an improvement be on there. 0.x and master are the correct branches and do appear to have been used. > improve ModifyBytes to make it easy to remove all flowfile content > -- > > Key: NIFI-1867 > URL: https://issues.apache.org/jira/browse/NIFI-1867 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.6.1 >Reporter: Ben Icore >Assignee: Joe Skora > Fix For: 1.0.0, 0.8.0 > > > update ModifyBytes processor to include a "Remove all content" property. > this property should default to false so existing functionality is not > changed
[jira] [Updated] (NIFI-1867) improve ModifyBytes to make it easy to remove all flowfile content
[ https://issues.apache.org/jira/browse/NIFI-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph Witt updated NIFI-1867: -- Fix Version/s: (was: 0.7.1) > improve ModifyBytes to make it easy to remove all flowfile content > -- > > Key: NIFI-1867 > URL: https://issues.apache.org/jira/browse/NIFI-1867 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.6.1 >Reporter: Ben Icore >Assignee: Joe Skora > Fix For: 1.0.0, 0.8.0 > > > update ModifyBytes processor to include a "Remove all content" property. > this property should default to false so existing functionality is not > changed
[jira] [Commented] (NIFI-2609) Issue determining appropriate site to site URL
[ https://issues.apache.org/jira/browse/NIFI-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428624#comment-15428624 ] Joseph Witt commented on NIFI-2609: --- Should it say "You must have mistyped" ? ;-) > Issue determining appropriate site to site URL > -- > > Key: NIFI-2609 > URL: https://issues.apache.org/jira/browse/NIFI-2609 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Matt Gilman > Fix For: 1.0.0 > > Attachments: Screen Shot 2016-08-19 at 10.15.49 AM.png > > > NiFi should be more lenient in terms of what URL is entered when creating a > RemoteProcessGroup. The user should be able to enter either > http://: or http://:/nifi without issue. Currently > the former leads to a very confusing error. See attached screenshot.
[jira] [Updated] (NIFI-2609) Issue determining appropriate site to site URL
[ https://issues.apache.org/jira/browse/NIFI-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman updated NIFI-2609: -- Attachment: Screen Shot 2016-08-19 at 10.15.49 AM.png > Issue determining appropriate site to site URL > -- > > Key: NIFI-2609 > URL: https://issues.apache.org/jira/browse/NIFI-2609 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Matt Gilman > Fix For: 1.0.0 > > Attachments: Screen Shot 2016-08-19 at 10.15.49 AM.png > > > NiFi should be more lenient in terms of what URL is entered when creating a > RemoteProcessGroup. The user should be able to enter either > http://: or http://:/nifi without issue. Currently > the former leads to a very confusing error. See attached screenshot.
[jira] [Updated] (NIFI-2609) Issue determining appropriate site to site URL
[ https://issues.apache.org/jira/browse/NIFI-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman updated NIFI-2609: -- Summary: Issue determining appropriate site to site URL (was: Issue determine appropriate site to site URL) > Issue determining appropriate site to site URL > -- > > Key: NIFI-2609 > URL: https://issues.apache.org/jira/browse/NIFI-2609 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Matt Gilman > Fix For: 1.0.0 > > > NiFi should be more lenient in terms of what URL is entered when creating a > RemoteProcessGroup. The user should be able to enter either > http://: or http://:/nifi without issue. Currently > the former leads to a very confusing error. See attached screenshot.
[jira] [Created] (NIFI-2609) Issue determine appropriate site to site URL
Matt Gilman created NIFI-2609: - Summary: Issue determine appropriate site to site URL Key: NIFI-2609 URL: https://issues.apache.org/jira/browse/NIFI-2609 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Matt Gilman Fix For: 1.0.0 NiFi should be more lenient in terms of what URL is entered when creating a RemoteProcessGroup. The user should be able to enter either http://: or http://:/nifi without issue. Currently the former leads to a very confusing error. See attached screenshot.
[jira] [Commented] (NIFI-1867) improve ModifyBytes to make it easy to remove all flowfile content
[ https://issues.apache.org/jira/browse/NIFI-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428609#comment-15428609 ] ASF GitHub Bot commented on NIFI-1867: -- Github user jskora closed the pull request at: https://github.com/apache/nifi/pull/886 > improve ModifyBytes to make it easy to remove all flowfile content > -- > > Key: NIFI-1867 > URL: https://issues.apache.org/jira/browse/NIFI-1867 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.6.1 >Reporter: Ben Icore >Assignee: Joe Skora > Fix For: 1.0.0, 0.8.0, 0.7.1 > > > update ModifyBytes processor to include a "Remove all content" property. > this property should default to false so existing functionality is not > changed
[jira] [Commented] (NIFI-1867) improve ModifyBytes to make it easy to remove all flowfile content
[ https://issues.apache.org/jira/browse/NIFI-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428608#comment-15428608 ] ASF GitHub Bot commented on NIFI-1867: -- Github user jskora commented on the issue: https://github.com/apache/nifi/pull/886 Closed by 0.x branch commit f89bc9efd8b8458b5bfd6a1b4045ce8230117ff4. Thanks @pvillard31 > improve ModifyBytes to make it easy to remove all flowfile content > -- > > Key: NIFI-1867 > URL: https://issues.apache.org/jira/browse/NIFI-1867 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.6.1 >Reporter: Ben Icore >Assignee: Joe Skora > Fix For: 1.0.0, 0.8.0, 0.7.1 > > > update ModifyBytes processor to include a "Remove all content" property. > this property should default to false so existing functionality is not > changed
[GitHub] nifi issue #886: NIFI-1867 Improve ModifyBytes to make it easy to remove all...
Github user jskora commented on the issue: https://github.com/apache/nifi/pull/886 Closed by 0.x branch commit f89bc9efd8b8458b5bfd6a1b4045ce8230117ff4. Thanks @pvillard31
[GitHub] nifi pull request #886: NIFI-1867 Improve ModifyBytes to make it easy to rem...
Github user jskora closed the pull request at: https://github.com/apache/nifi/pull/886
[jira] [Resolved] (NIFI-1867) improve ModifyBytes to make it easy to remove all flowfile content
[ https://issues.apache.org/jira/browse/NIFI-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard resolved NIFI-1867. -- Resolution: Fixed Fix Version/s: 0.7.1 0.8.0 1.0.0 > improve ModifyBytes to make it easy to remove all flowfile content > -- > > Key: NIFI-1867 > URL: https://issues.apache.org/jira/browse/NIFI-1867 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.6.1 >Reporter: Ben Icore >Assignee: Joe Skora > Fix For: 1.0.0, 0.8.0, 0.7.1 > > > update ModifyBytes processor to include a "Remove all content" property. > this property should default to false so existing functionality is not > changed
[GitHub] nifi issue #857: NIFI-2567: Site-to-Site to send large data via HTTPS
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/857 @ijokarumawak unfortunately, I did forget to put the "This closes #857" in the commit message, so please close this PR when you get a chance.
[jira] [Commented] (NIFI-2567) HTTP Site-to-Site can't send data larger than about 7KB via HTTPS
[ https://issues.apache.org/jira/browse/NIFI-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428593#comment-15428593 ] ASF GitHub Bot commented on NIFI-2567: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/857 @ijokarumawak unfortunately, I did forget to put the "This closes #857" in the commit message, so please close this PR when you get a chance. > HTTP Site-to-Site can't send data larger than about 7KB via HTTPS > - > > Key: NIFI-2567 > URL: https://issues.apache.org/jira/browse/NIFI-2567 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Blocker > Fix For: 1.0.0 > > > HTTP Site-to-Site fails to send data bigger than about 7KB through HTTPS. > Getting data via HTTPS works. It can send large data with HTTP.
[GitHub] nifi issue #886: NIFI-1867 Improve ModifyBytes to make it easy to remove all...
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/886 +1, checked improvement, full build w/ contrib-check, LGTM. Merged in 0.x. @jskora, could you manually close this PR? Thanks!
[jira] [Commented] (NIFI-2567) HTTP Site-to-Site can't send data larger than about 7KB via HTTPS
[ https://issues.apache.org/jira/browse/NIFI-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428590#comment-15428590 ] ASF GitHub Bot commented on NIFI-2567: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/857 @ijokarumawak this looks good! Tested by having data generated and pushed back to own 3-node cluster. Once received the data went to an Output Port so that I could pull it back through the same Remote Process Group. The generated data came in 4 sizes: 0 bytes, 1 KB, 1 MB, 50 MB. Was able to push (and pull back) around 1 million FlowFiles and several GB in 5 mins. Great work! +1 merged to master. > HTTP Site-to-Site can't send data larger than about 7KB via HTTPS > - > > Key: NIFI-2567 > URL: https://issues.apache.org/jira/browse/NIFI-2567 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Blocker > Fix For: 1.0.0 > > > HTTP Site-to-Site fails to send data bigger than about 7KB through HTTPS. > Getting data via HTTPS works. It can send large data with HTTP.
[GitHub] nifi issue #857: NIFI-2567: Site-to-Site to send large data via HTTPS
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/857 @ijokarumawak this looks good! Tested by having data generated and pushed back to own 3-node cluster. Once received the data went to an Output Port so that I could pull it back through the same Remote Process Group. The generated data came in 4 sizes: 0 bytes, 1 KB, 1 MB, 50 MB. Was able to push (and pull back) around 1 million FlowFiles and several GB in 5 mins. Great work! +1 merged to master.
[jira] [Commented] (NIFI-2567) HTTP Site-to-Site can't send data larger than about 7KB via HTTPS
[ https://issues.apache.org/jira/browse/NIFI-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428586#comment-15428586 ] ASF subversion and git services commented on NIFI-2567: --- Commit a919844461d63a26fa6c1d8c7daa447cd5ef912e in nifi's branch refs/heads/master from [~ijokarumawak] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=a919844 ] NIFI-2567: Site-to-Site to send large data via HTTPS - It couldn't send data larger than about 7KB due to the mis-use of httpasyncclient library - Updated httpasyncclient from 4.1.1 to 4.1.2 - Let httpasyncclient framework to call produceContent multiple times as it gets ready to send more data via SSL session - Added HTTPS test cases to TestHttpClient, which failed without this fix > HTTP Site-to-Site can't send data larger than about 7KB via HTTPS > - > > Key: NIFI-2567 > URL: https://issues.apache.org/jira/browse/NIFI-2567 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Blocker > Fix For: 1.0.0 > > > HTTP Site-to-Site fails to send data bigger than about 7KB through HTTPS. > Getting data via HTTPS works. It can send large data with HTTP.
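The commit message above says the fix lets the httpasyncclient framework call produceContent multiple times as the SSL session becomes ready for more data. As a hedged illustration of that pattern (this is not the actual NiFi patch; java.nio's WritableByteChannel stands in here for httpasyncclient's ContentEncoder), a producer that tolerates partial writes keeps its position in a buffer and simply writes whatever the sink will accept on each call:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

// Sketch only: 'encoder' stands in for an async framework's content sink,
// which (like an SSL session) may accept fewer bytes than offered per call.
class ChunkedProducer {
    private final ByteBuffer remaining;

    ChunkedProducer(byte[] data) {
        this.remaining = ByteBuffer.wrap(data);
    }

    // Safe to call repeatedly; each call writes only as much as the sink
    // accepts, and the ByteBuffer position tracks what is left to send.
    int produceContent(WritableByteChannel encoder) {
        try {
            return encoder.write(remaining); // may be a partial write
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    boolean isDone() {
        return !remaining.hasRemaining();
    }
}
```

The framework keeps invoking produceContent until isDone() reports the buffer is drained, which is what a one-shot producer (the pre-fix behavior described above) gets wrong when the first write is partial.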
[jira] [Updated] (NIFI-2608) Thread-safety issue with ConsumeKafka
[ https://issues.apache.org/jira/browse/NIFI-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oleg Zhurakousky updated NIFI-2608: --- Status: Patch Available (was: Open) > Thread-safety issue with ConsumeKafka > - > > Key: NIFI-2608 > URL: https://issues.apache.org/jira/browse/NIFI-2608 > Project: Apache NiFi > Issue Type: Bug >Reporter: Oleg Zhurakousky >Assignee: Oleg Zhurakousky > Fix For: 1.0.0 > > > KafkaConsumer went from thread-safe in 0.8 to not-thread-safe in 0.9 which was > overlooked while implementing new ConsumeKafka processor which relied on 0.9 > Client API.
[jira] [Commented] (NIFI-1867) improve ModifyBytes to make it easy to remove all flowfile content
[ https://issues.apache.org/jira/browse/NIFI-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428563#comment-15428563 ] ASF GitHub Bot commented on NIFI-1867: -- Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/890 +1, checked improvement, full build w/ contrib-check, LGTM. Merged in master, will take care of 0.x patch. > improve ModifyBytes to make it easy to remove all flowfile content > -- > > Key: NIFI-1867 > URL: https://issues.apache.org/jira/browse/NIFI-1867 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.6.1 >Reporter: Ben Icore >Assignee: Joe Skora > > update ModifyBytes processor to include a "Remove all content" property. > this property should default to false so existing functionality is not > changed
[GitHub] nifi issue #890: NIFI-1867 Improve ModifyBytes to make it easy to remove all...
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/890 +1, checked improvement, full build w/ contrib-check, LGTM. Merged in master, will take care of 0.x patch.
[jira] [Commented] (NIFI-1867) improve ModifyBytes to make it easy to remove all flowfile content
[ https://issues.apache.org/jira/browse/NIFI-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428561#comment-15428561 ] ASF GitHub Bot commented on NIFI-1867: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/890 > improve ModifyBytes to make it easy to remove all flowfile content > -- > > Key: NIFI-1867 > URL: https://issues.apache.org/jira/browse/NIFI-1867 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.6.1 >Reporter: Ben Icore >Assignee: Joe Skora > > update ModifyBytes processor to include a "Remove all content" property. > this property should default to false so existing functionality is not > changed
[GitHub] nifi pull request #890: NIFI-1867 Improve ModifyBytes to make it easy to rem...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/890
[GitHub] nifi pull request #901: Fixed multi-threading issue causing java.util.Concur...
GitHub user olegz opened a pull request: https://github.com/apache/nifi/pull/901 Fixed multi-threading issue causing java.util.ConcurrentModificationE… …xception You can merge this pull request into a Git repository by running: $ git pull https://github.com/olegz/nifi NIFI-2608 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/901.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #901 commit c94c7920dba2f6866c25e2c05cd463a5bf345c58 Author: Oleg Zhurakousky Date: 2016-08-19T15:30:29Z Fixed multi-threading issue causing java.util.ConcurrentModificationException
[jira] [Commented] (NIFI-2608) Thread-safety issue with ConsumeKafka
[ https://issues.apache.org/jira/browse/NIFI-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428549#comment-15428549 ] Oleg Zhurakousky commented on NIFI-2608: Below is the stack trace you may get when configuring multiple concurrent tasks in ConsumeKafka
{code}
2016-08-04 22:51:51,379 ERROR [Timer-Driven Process Thread-9] o.a.n.p.kafka.pubsub.ConsumeKafka ConsumeKafka[id=a2b8149c-c7b7-49d4-b65a-8fd5c174158b] ConsumeKafka[id=a2b8149c-c7b7-49d4-b65a-8fd5c174158b] failed to process due to java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access; rolling back session: java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
2016-08-04 22:51:51,381 ERROR [Timer-Driven Process Thread-9] o.a.n.p.kafka.pubsub.ConsumeKafka java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
    at org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:1324) ~[kafka-clients-0.9.0.1.jar:na]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:844) ~[kafka-clients-0.9.0.1.jar:na]
    at org.apache.nifi.processors.kafka.pubsub.ConsumeKafka.rendezvousWithKafka(ConsumeKafka.java:159) ~[nifi-kafka-pubsub-processors-0.7.0.jar:0.7.0]
    at org.apache.nifi.processors.kafka.pubsub.AbstractKafkaProcessor.onTrigger(AbstractKafkaProcessor.java:192) ~[nifi-kafka-pubsub-processors-0.7.0.jar:0.7.0]
    at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1054) [nifi-framework-core-0.7.0.jar:0.7.0]
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-0.7.0.jar:0.7.0]
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-0.7.0.jar:0.7.0]
    at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:127) [nifi-framework-core-0.7.0.jar:0.7.0]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_80]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [na:1.7.0_80]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_80]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.7.0_80]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_80]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_80]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
{code}
As you can see from the stack trace, the problem happens when we invoke poll(..) on KafkaConsumer (a Kafka class), which changed from being thread-safe in 0.8 to non-thread-safe in 0.9. Further reading into the API, I see the following:
{code}
 * Multi-threaded Processing
 *
 * The Kafka consumer is NOT thread-safe. All network I/O happens in the thread of the application
 * making the call. It is the responsibility of the user to ensure that multi-threaded access
 * is properly synchronized. Un-synchronized access will result in {@link ConcurrentModificationException}.
 *
{code}
Further analysis shows that KafkaConsumer.poll() must be synchronized, so that its internal acquire() and release() happen within a single guarded call. After that you have the ConsumerRecords and can continue with multi-threaded processing. In the future we may consider implementing a consumer-per-thread model.
> Thread-safety issue with ConsumeKafka > - > > Key: NIFI-2608 > URL: https://issues.apache.org/jira/browse/NIFI-2608 > Project: Apache NiFi > Issue Type: Bug >Reporter: Oleg Zhurakousky >Assignee: Oleg Zhurakousky > Fix For: 1.0.0 > > > KafkaConsumer went from thread-safe in 0.8 to not-thread-safe in 0.9 which was > overlooked while implementing new ConsumeKafka processor which relied on 0.9 > Client API.
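The pattern described in the comment above (serialize only the poll() call, then hand the returned records off for concurrent processing) can be sketched as follows. This is a minimal illustration, not the actual NIFI-2608 patch; FakeConsumer is a hypothetical stand-in for the non-thread-safe 0.9 KafkaConsumer:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for kafka-clients 0.9's KafkaConsumer, which is
// NOT safe for multi-threaded access.
class FakeConsumer {
    private int offset = 0;
    List<String> poll() {
        List<String> records = new ArrayList<>();
        records.add("record-" + offset++);
        return records;
    }
}

// Only the poll() call is serialized; the records it returns can then be
// processed by any number of concurrent tasks.
class GuardedConsumer {
    private final FakeConsumer consumer = new FakeConsumer();
    private final Object pollGuard = new Object();

    List<String> poll() {
        synchronized (pollGuard) { // one task inside poll() at a time
            return consumer.poll();
        }
    }
}
```

This keeps the critical section as small as possible: the lock is held only for the duration of the unsafe call, which avoids the ConcurrentModificationException without serializing the downstream processing of each record batch.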
[jira] [Commented] (NIFI-2567) HTTP Site-to-Site can't send data larger than about 7KB via HTTPS
[ https://issues.apache.org/jira/browse/NIFI-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428546#comment-15428546 ] ASF GitHub Bot commented on NIFI-2567: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/857 Reviewing... > HTTP Site-to-Site can't send data larger than about 7KB via HTTPS > - > > Key: NIFI-2567 > URL: https://issues.apache.org/jira/browse/NIFI-2567 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Blocker > Fix For: 1.0.0 > > > HTTP Site-to-Site fails to send data bigger than about 7KB through HTTPS. > Getting data via HTTPS works. It can send large data with HTTP.
[GitHub] nifi issue #857: NIFI-2567: Site-to-Site to send large data via HTTPS
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/857 Reviewing...
[jira] [Created] (NIFI-2608) Thread-safety issue with ConsumeKafka
Oleg Zhurakousky created NIFI-2608: -- Summary: Thread-safety issue with ConsumeKafka Key: NIFI-2608 URL: https://issues.apache.org/jira/browse/NIFI-2608 Project: Apache NiFi Issue Type: Bug Reporter: Oleg Zhurakousky Assignee: Oleg Zhurakousky KafkaConsumer went from thread-safe in 0.8 to not-thread-safe in 0.9 which was overlooked while implementing new ConsumeKafka processor which relied on 0.9 Client API.
[jira] [Updated] (NIFI-2608) Thread-safety issue with ConsumeKafka
[ https://issues.apache.org/jira/browse/NIFI-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oleg Zhurakousky updated NIFI-2608: --- Fix Version/s: 1.0.0 > Thread-safety issue with ConsumeKafka > - > > Key: NIFI-2608 > URL: https://issues.apache.org/jira/browse/NIFI-2608 > Project: Apache NiFi > Issue Type: Bug >Reporter: Oleg Zhurakousky >Assignee: Oleg Zhurakousky > Fix For: 1.0.0 > > > KafkaConsumer went from thread-safe in 0.8 to not-thread-safe in 0.9 which was > overlooked while implementing new ConsumeKafka processor which relied on 0.9 > Client API.
[jira] [Updated] (NIFI-2605) On restart of all nodes in nifi cluster one of the nodes failed to join the cluster with fingerprint mismatch
[ https://issues.apache.org/jira/browse/NIFI-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-2605: - Status: Patch Available (was: Open) > On restart of all nodes in nifi cluster one of the nodes failed to join the > cluster with fingerprint mismatch > - > > Key: NIFI-2605 > URL: https://issues.apache.org/jira/browse/NIFI-2605 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Arpit Gupta >Assignee: Mark Payne >Priority: Blocker > Fix For: 1.0.0 > > > Following stack trace was present in the node that did not connect > {code} > 2016-08-18 12:04:55,628 INFO [Process Cluster Protocol Request-1] > o.a.n.c.p.impl.SocketProtocolListener Finished processing request > ea80ad62-585c-4460-9ee9-93cc12c8db54 (type=NODE_STATUS_CHANGE, length=1052 > bytes) from host in 61 millis > 2016-08-18 12:04:55,806 ERROR [main] o.a.nifi.controller.StandardFlowService > Failed to load flow from cluster due to: > org.apache.nifi.controller.UninheritableFlowException: Failed to connect node > to cluster because local flow is different than cluster flow. > org.apache.nifi.controller.UninheritableFlowException: Failed to connect node > to cluster because local flow is different than cluster flow. 
> at > org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:866) > ~[nifi-framework-core-1.0.0.jar:1.0.0] > at > org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:492) > ~[nifi-framework-core-1.0.0.jar:1.0.0] > at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:746) > [nifi-jetty-1.0.0.jar:1.0.0] > at org.apache.nifi.NiFi.<init>(NiFi.java:137) > [nifi-runtime-1.0.0.jar:1.0.0] > at org.apache.nifi.NiFi.main(NiFi.java:227) > [nifi-runtime-1.0.0.jar:1.0.0] > Caused by: org.apache.nifi.controller.UninheritableFlowException: Proposed > configuration is not inheritable by the flow controller because of flow > differences: Found difference in Flows: > Local Fingerprint: > 7c84501d-d10c-407c-b9f3-1d80e38fe36a9d7d39c0-0156-1000--c6ce3a7d9d7d3cd1-0156-1000-- > Cluster Fingerprint: 9d89d844-0156-1000-e4bc-8ae5e0566749 > {code}
[jira] [Commented] (NIFI-2605) On restart of all nodes in nifi cluster one of the nodes failed to join the cluster with fingerprint mismatch
[ https://issues.apache.org/jira/browse/NIFI-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428522#comment-15428522 ] ASF GitHub Bot commented on NIFI-2605: -- GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/900 NIFI-2605: Fixing a regression bug where nodes would potentially be e… …lected leader for Cluster Coordinator role when they do not have the correct flow You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi NIFI-2605 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/900.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #900 commit d05fa581595aff6d1529da2716cada114ad9 Author: Mark Payne Date: 2016-08-19T17:41:58Z NIFI-2605: Fixing a regression bug where nodes would potentially be elected leader for Cluster Coordinator role when they do not have the correct flow > On restart of all nodes in nifi cluster one of the nodes failed to join the > cluster with fingerprint mismatch > - > > Key: NIFI-2605 > URL: https://issues.apache.org/jira/browse/NIFI-2605 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Arpit Gupta >Assignee: Mark Payne >Priority: Blocker > Fix For: 1.0.0 > > > Following stack trace was present in the node that did not connect > {code} > 2016-08-18 12:04:55,628 INFO [Process Cluster Protocol Request-1] > o.a.n.c.p.impl.SocketProtocolListener Finished processing request > ea80ad62-585c-4460-9ee9-93cc12c8db54 (type=NODE_STATUS_CHANGE, length=1052 > bytes) from host in 61 millis > 2016-08-18 12:04:55,806 ERROR [main] o.a.nifi.controller.StandardFlowService > Failed to load flow from cluster due to: > org.apache.nifi.controller.UninheritableFlowException: Failed to connect node > to cluster because local flow is different than cluster 
flow. > org.apache.nifi.controller.UninheritableFlowException: Failed to connect node > to cluster because local flow is different than cluster flow. > at > org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:866) > ~[nifi-framework-core-1.0.0.jar:1.0.0] > at > org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:492) > ~[nifi-framework-core-1.0.0.jar:1.0.0] > at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:746) > [nifi-jetty-1.0.0.jar:1.0.0] > at org.apache.nifi.NiFi.<init>(NiFi.java:137) > [nifi-runtime-1.0.0.jar:1.0.0] > at org.apache.nifi.NiFi.main(NiFi.java:227) > [nifi-runtime-1.0.0.jar:1.0.0] > Caused by: org.apache.nifi.controller.UninheritableFlowException: Proposed > configuration is not inheritable by the flow controller because of flow > differences: Found difference in Flows: > Local Fingerprint: > 7c84501d-d10c-407c-b9f3-1d80e38fe36a9d7d39c0-0156-1000--c6ce3a7d9d7d3cd1-0156-1000-- > Cluster Fingerprint: 9d89d844-0156-1000-e4bc-8ae5e0566749 > {code}
[GitHub] nifi pull request #900: NIFI-2605: Fixing a regression bug where nodes would...
GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/900 NIFI-2605: Fixing a regression bug where nodes would potentially be e… …lected leader for Cluster Coordinator role when they do not have the correct flow You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi NIFI-2605 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/900.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #900 commit d05fa581595aff6d1529da2716cada114ad9 Author: Mark Payne Date: 2016-08-19T17:41:58Z NIFI-2605: Fixing a regression bug where nodes would potentially be elected leader for Cluster Coordinator role when they do not have the correct flow
[jira] [Updated] (NIFI-2581) Context menus and tooltips getting hidden after automatic refresh
[ https://issues.apache.org/jira/browse/NIFI-2581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-2581: - Resolution: Fixed Status: Resolved (was: Patch Available) > Context menus and tooltips getting hidden after automatic refresh > - > > Key: NIFI-2581 > URL: https://issues.apache.org/jira/browse/NIFI-2581 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.0.0 >Reporter: Jeff Storck >Assignee: Matt Gilman >Priority: Minor > Fix For: 1.0.0 > > > When the NiFi UI's automatic refresh occurs, if there are context menus > (right-click) or tooltips (for validation and bulletins) visible, they get > hidden when the refresh completes.
[jira] [Commented] (NIFI-2581) Context menus and tooltips getting hidden after automatic refresh
[ https://issues.apache.org/jira/browse/NIFI-2581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428506#comment-15428506 ] ASF subversion and git services commented on NIFI-2581: --- Commit 3378426f3520cf66ec0525382a3596ce25915a15 in nifi's branch refs/heads/master from [~mcgilman] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=3378426 ] NIFI-2581: Keeping context menu and tooltips open when refreshing the canvas. This closes #899. > Context menus and tooltips getting hidden after automatic refresh > - > > Key: NIFI-2581 > URL: https://issues.apache.org/jira/browse/NIFI-2581 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.0.0 >Reporter: Jeff Storck >Assignee: Matt Gilman >Priority: Minor > Fix For: 1.0.0 > > > When the NiFi UI's automatic refresh occurs, if there are context menus > (right-click) or tooltips (for validation and bulletins) visible, they get > hidden when the refresh completes.
[GitHub] nifi issue #899: Keeping context menu and tooltips open when refreshing the ...
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/899 Thanks for the review @jtstorck +1 as well, merged into master
[GitHub] nifi pull request #899: Keeping context menu and tooltips open when refreshi...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/899
[jira] [Commented] (NIFI-2581) Context menus and tooltips getting hidden after automatic refresh
[ https://issues.apache.org/jira/browse/NIFI-2581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428507#comment-15428507 ] ASF GitHub Bot commented on NIFI-2581: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/899 > Context menus and tooltips getting hidden after automatic refresh > - > > Key: NIFI-2581 > URL: https://issues.apache.org/jira/browse/NIFI-2581 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.0.0 >Reporter: Jeff Storck >Assignee: Matt Gilman >Priority: Minor > Fix For: 1.0.0 > > > When the NiFi UI's automatic refresh occurs, if there are context menus > (right-click) or tooltips (for validation and bulletins) visible, they get > hidden when the refresh completes.
[jira] [Created] (NIFI-2607) All List (HDFS, File, S3, SFTP) processors should add fragment attributes
Jeff Storck created NIFI-2607: - Summary: All List (HDFS, File, S3, SFTP) processors should add fragment attributes Key: NIFI-2607 URL: https://issues.apache.org/jira/browse/NIFI-2607 Project: Apache NiFi Issue Type: Improvement Reporter: Jeff Storck Priority: Minor It would be beneficial to be able to use the fragment.identifier, fragment.index, and fragment.count attributes when processing a "batch" of flowfiles from the List* processors.
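As a rough sketch of what NIFI-2607 asks for (a hypothetical helper, not NiFi code; a plain Map stands in here for a FlowFile's attribute map), each FlowFile in one listing batch would share a single fragment.identifier and carry its own index and the batch size:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Hypothetical illustration of the requested fragment attributes; the Maps
// stand in for FlowFile attribute maps produced by a List* processor.
class FragmentTagger {
    static List<Map<String, String>> tagBatch(List<String> listings) {
        String fragmentId = UUID.randomUUID().toString(); // shared by the batch
        List<Map<String, String>> batch = new ArrayList<>();
        for (int i = 0; i < listings.size(); i++) {
            Map<String, String> attrs = new HashMap<>();
            attrs.put("filename", listings.get(i));
            attrs.put("fragment.identifier", fragmentId);      // groups the batch
            attrs.put("fragment.index", String.valueOf(i));    // position in batch
            attrs.put("fragment.count", String.valueOf(listings.size()));
            batch.add(attrs);
        }
        return batch;
    }
}
```

With those attributes in place, a downstream processor can tell when it has seen every FlowFile from one listing pass, which is the "batch" semantics the ticket describes.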
[GitHub] nifi issue #899: Keeping context menu and tooltips open when refreshing the ...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/899 +1 on this PR. Tested auto refresh while right-click context menus and processor bulletins were being displayed, and they remained visible during and after the refresh.
[jira] [Updated] (NIFI-2605) On restart of all nodes in nifi cluster one of the nodes failed to join the cluster with fingerprint mismatch
[ https://issues.apache.org/jira/browse/NIFI-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-2605: - Fix Version/s: 1.0.0 > On restart of all nodes in nifi cluster one of the nodes failed to join the > cluster with fingerprint mismatch > - > > Key: NIFI-2605 > URL: https://issues.apache.org/jira/browse/NIFI-2605 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Arpit Gupta >Assignee: Mark Payne >Priority: Blocker > Fix For: 1.0.0 > > > Following stack trace was present in the node that did not connect > {code} > 2016-08-18 12:04:55,628 INFO [Process Cluster Protocol Request-1] > o.a.n.c.p.impl.SocketProtocolListener Finished processing request > ea80ad62-585c-4460-9ee9-93cc12c8db54 (type=NODE_STATUS_CHANGE, length=1052 > bytes) from host in 61 millis > 2016-08-18 12:04:55,806 ERROR [main] o.a.nifi.controller.StandardFlowService > Failed to load flow from cluster due to: > org.apache.nifi.controller.UninheritableFlowException: Failed to connect node > to cluster because local flow is different than cluster flow. > org.apache.nifi.controller.UninheritableFlowException: Failed to connect node > to cluster because local flow is different than cluster flow. 
> at > org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:866) > ~[nifi-framework-core-1.0.0.jar:1.0.0] > at > org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:492) > ~[nifi-framework-core-1.0.0.jar:1.0.0] > at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:746) > [nifi-jetty-1.0.0.jar:1.0.0] > at org.apache.nifi.NiFi.<init>(NiFi.java:137) > [nifi-runtime-1.0.0.jar:1.0.0] > at org.apache.nifi.NiFi.main(NiFi.java:227) > [nifi-runtime-1.0.0.jar:1.0.0] > Caused by: org.apache.nifi.controller.UninheritableFlowException: Proposed > configuration is not inheritable by the flow controller because of flow > differences: Found difference in Flows: > Local Fingerprint: > 7c84501d-d10c-407c-b9f3-1d80e38fe36a9d7d39c0-0156-1000--c6ce3a7d9d7d3cd1-0156-1000-- > Cluster Fingerprint: 9d89d844-0156-1000-e4bc-8ae5e0566749 > {code}
[jira] [Commented] (NIFI-2605) On restart of all nodes in nifi cluster one of the nodes failed to join the cluster with fingerprint mismatch
[ https://issues.apache.org/jira/browse/NIFI-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428404#comment-15428404 ] Mark Payne commented on NIFI-2605: -- D'oh! Looks like this is a regression that was introduced during a recent refactoring of the Leader Election Manager. Will address. Thanks for reporting this, [~arpitgupta]! > On restart of all nodes in nifi cluster one of the nodes failed to join the > cluster with fingerprint mismatch > - > > Key: NIFI-2605 > URL: https://issues.apache.org/jira/browse/NIFI-2605 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Arpit Gupta >Assignee: Mark Payne >Priority: Blocker > Fix For: 1.0.0 > > > Following stack trace was present in the node that did not connect > {code} > 2016-08-18 12:04:55,628 INFO [Process Cluster Protocol Request-1] > o.a.n.c.p.impl.SocketProtocolListener Finished processing request > ea80ad62-585c-4460-9ee9-93cc12c8db54 (type=NODE_STATUS_CHANGE, length=1052 > bytes) from host in 61 millis > 2016-08-18 12:04:55,806 ERROR [main] o.a.nifi.controller.StandardFlowService > Failed to load flow from cluster due to: > org.apache.nifi.controller.UninheritableFlowException: Failed to connect node > to cluster because local flow is different than cluster flow. > org.apache.nifi.controller.UninheritableFlowException: Failed to connect node > to cluster because local flow is different than cluster flow. 
> at > org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:866) > ~[nifi-framework-core-1.0.0.jar:1.0.0] > at > org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:492) > ~[nifi-framework-core-1.0.0.jar:1.0.0] > at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:746) > [nifi-jetty-1.0.0.jar:1.0.0] > at org.apache.nifi.NiFi.(NiFi.java:137) > [nifi-runtime-1.0.0.jar:1.0.0] > at org.apache.nifi.NiFi.main(NiFi.java:227) > [nifi-runtime-1.0.0.jar:1.0.0] > Caused by: org.apache.nifi.controller.UninheritableFlowException: Proposed > configuration is not inheritable by the flow controller because of flow > differences: Found difference in Flows: > Local Fingerprint: > 7c84501d-d10c-407c-b9f3-1d80e38fe36a9d7d39c0-0156-1000--c6ce3a7d9d7d3cd1-0156-1000-- > Cluster Fingerprint: 9d89d844-0156-1000-e4bc-8ae5e0566749 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (NIFI-2605) On restart of all nodes in nifi cluster one of the nodes failed to join the cluster with fingerprint mismatch
[ https://issues.apache.org/jira/browse/NIFI-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne reassigned NIFI-2605: Assignee: Mark Payne -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (NIFI-2606) FingerprintFactory and default property values
Matt Gilman created NIFI-2606: - Summary: FingerprintFactory and default property values Key: NIFI-2606 URL: https://issues.apache.org/jira/browse/NIFI-2606 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Matt Gilman Assignee: Matt Gilman Fix For: 1.0.0 When a node joins a cluster, it checks the proposed flow (of the cluster) against its local flow. The proposed flow will include default property values if a property is not explicitly set. This is accounted for when comparing Processors, but not for Controller Services and Reporting Tasks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
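The default-value mismatch NIFI-2606 describes can be sketched as follows. This is a hedged illustration with hypothetical helper names, not the actual FingerprintFactory code: before comparing two components, substitute the descriptor's default for any property that is not explicitly set, so that an unset property and a property explicitly set to its default fingerprint identically.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (names are not from NiFi's codebase): normalize
// property maps against their defaults before comparing fingerprints.
public class FingerprintDefaults {

    // Substitute the default value for every property with no explicit value.
    static Map<String, String> normalize(Map<String, String> explicit,
                                         Map<String, String> defaults) {
        Map<String, String> normalized = new HashMap<>(defaults);
        normalized.putAll(explicit); // explicit values win over defaults
        return normalized;
    }

    // Two components fingerprint identically iff their normalized maps match.
    static boolean sameFingerprint(Map<String, String> local,
                                   Map<String, String> proposed,
                                   Map<String, String> defaults) {
        return normalize(local, defaults).equals(normalize(proposed, defaults));
    }

    public static void main(String[] args) {
        Map<String, String> defaults = Map.of("Batch Size", "100");
        Map<String, String> localUnset = Map.of();                        // left unset locally
        Map<String, String> proposedExplicit = Map.of("Batch Size", "100"); // cluster sent the default
        System.out.println(sameFingerprint(localUnset, proposedExplicit, defaults)); // prints: true
    }
}
```

Without the normalization step, the cluster's explicit default and the node's unset property would hash differently, producing exactly the spurious fingerprint mismatch the ticket reports for Controller Services and Reporting Tasks.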
[jira] [Created] (NIFI-2605) On restart of all nodes in nifi cluster one of the nodes failed to join the cluster with fingerprint mismatch
Arpit Gupta created NIFI-2605: - Summary: On restart of all nodes in nifi cluster one of the nodes failed to join the cluster with fingerprint mismatch Key: NIFI-2605 URL: https://issues.apache.org/jira/browse/NIFI-2605 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.0.0 Reporter: Arpit Gupta Priority: Blocker The following stack trace was present on the node that did not connect: {code} 2016-08-18 12:04:55,628 INFO [Process Cluster Protocol Request-1] o.a.n.c.p.impl.SocketProtocolListener Finished processing request ea80ad62-585c-4460-9ee9-93cc12c8db54 (type=NODE_STATUS_CHANGE, length=1052 bytes) from host in 61 millis 2016-08-18 12:04:55,806 ERROR [main] o.a.nifi.controller.StandardFlowService Failed to load flow from cluster due to: org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow. org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow. at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:866) ~[nifi-framework-core-1.0.0.jar:1.0.0] at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:492) ~[nifi-framework-core-1.0.0.jar:1.0.0] at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:746) [nifi-jetty-1.0.0.jar:1.0.0] at org.apache.nifi.NiFi.<init>(NiFi.java:137) [nifi-runtime-1.0.0.jar:1.0.0] at org.apache.nifi.NiFi.main(NiFi.java:227) [nifi-runtime-1.0.0.jar:1.0.0] Caused by: org.apache.nifi.controller.UninheritableFlowException: Proposed configuration is not inheritable by the flow controller because of flow differences: Found difference in Flows: Local Fingerprint: 7c84501d-d10c-407c-b9f3-1d80e38fe36a9d7d39c0-0156-1000--c6ce3a7d9d7d3cd1-0156-1000-- Cluster Fingerprint: 9d89d844-0156-1000-e4bc-8ae5e0566749 {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
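For context, the check behind the stack trace above can be sketched roughly as follows. This is a simplified illustration with hypothetical names, not the actual StandardFlowService logic: on connection, the node compares its local flow fingerprint with the cluster's, and refuses to inherit the proposed flow when they differ.

```java
// Hypothetical sketch of the inheritability check (names are illustrative,
// not NiFi's real implementation): joining fails when fingerprints differ.
public class FlowInheritability {

    static class UninheritableFlowException extends RuntimeException {
        UninheritableFlowException(String msg) { super(msg); }
    }

    // Throws when the proposed (cluster) flow cannot replace the local flow.
    static void checkInheritable(String localFingerprint, String clusterFingerprint) {
        if (!localFingerprint.equals(clusterFingerprint)) {
            throw new UninheritableFlowException(
                "Proposed configuration is not inheritable by the flow controller "
                + "because of flow differences: Local Fingerprint: " + localFingerprint
                + " Cluster Fingerprint: " + clusterFingerprint);
        }
    }
}
```

Because the comparison is an exact string match, any spurious difference in how the fingerprint is generated (such as the default-value handling tracked in NIFI-2606, or the regression noted above) is enough to block a node from rejoining.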
[GitHub] nifi issue #883: NIFI-2591 - PutSQL has no handling for Binary data types
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/883 That sounds good. Should we move the discussion into an email thread on the users or dev list? --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (NIFI-2591) PutSQL has no handling for Binary data types
[ https://issues.apache.org/jira/browse/NIFI-2591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428325#comment-15428325 ] ASF GitHub Bot commented on NIFI-2591: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/883 That sounds good. We should move the discussion into an email thread on the users or dev list? > PutSQL has no handling for Binary data types > > > Key: NIFI-2591 > URL: https://issues.apache.org/jira/browse/NIFI-2591 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Peter Wicks > > PutSQL does not call out binary types for any special treatment, so they end > up being routed through stmt.setObject. > The problem is that upstream processors have formatted the binary data as a > string and the JDBC driver doesn't know what to do with a string going into a > binary field. > Investigation into the AvroToJSON processor shows that if users are trying to > load data exported from a source system as Avro Binary that Avro encodes the > binary data into ASCII (One byte per character). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
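The fix direction discussed in NIFI-2591 could look roughly like the sketch below. The helper names are hypothetical and the decoding scheme (Base64) is an assumption about how the upstream string was produced; the point is that binary column types should be routed through setBytes() with explicitly decoded bytes, rather than falling through to setObject() with a string the driver cannot coerce.

```java
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;
import java.util.Base64;

// Hedged sketch, not NiFi's actual PutSQL implementation: special-case the
// JDBC binary column types so a string payload is decoded to raw bytes and
// bound with setBytes() instead of being passed to setObject().
public class BinaryParamSetter {

    // Decode a Base64-encoded string payload back into raw bytes
    // (assumes upstream encoded the binary data as Base64).
    static byte[] decodeBase64(String encoded) {
        return Base64.getDecoder().decode(encoded);
    }

    // True for the JDBC binary column types that need setBytes().
    static boolean isBinaryType(int sqlType) {
        return sqlType == Types.BINARY
            || sqlType == Types.VARBINARY
            || sqlType == Types.LONGVARBINARY;
    }

    // Bind parameter i: decode-and-setBytes for binary columns,
    // plain setObject with the target type for everything else.
    static void setParameter(PreparedStatement stmt, int i, String value, int sqlType)
            throws SQLException {
        if (isBinaryType(sqlType)) {
            stmt.setBytes(i, decodeBase64(value));
        } else {
            stmt.setObject(i, value, sqlType);
        }
    }
}
```

As the ticket notes, the encoding used upstream (e.g. the one-byte-per-character output observed from Avro) would need to be agreed on first, which is presumably why the discussion is moving to the mailing list.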
[jira] [Commented] (NIFI-2562) PutHDFS writes corrupted data in the transparent disk encryption zone
[ https://issues.apache.org/jira/browse/NIFI-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428313#comment-15428313 ] Vik commented on NIFI-2562: --- I tried 2.6.0-cdh5.8.0. It threw the same error, with exactly the same corrupted content. > PutHDFS writes corrupted data in the transparent disk encryption zone > - > > Key: NIFI-2562 > URL: https://issues.apache.org/jira/browse/NIFI-2562 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 0.6.0 >Reporter: Vik >Priority: Blocker > Labels: encryption, security > Attachments: HdfsCorrupted.jpg, NiFi-PutHDFS.jpg > > > Problem 1: UnknownHostException > When NiFi was trying to ingest files into the HDFS encryption zone, it was > throwing UnknownHostException. > Reason: In Hadoop configuration files, like core-site.xml and hdfs-site.xml, > KMS hosts were mentioned in the following format "h...@xxx1.int..com; > xxx2.int..com:16000". > Since NiFi was using old Hadoop libraries (2.6.2), it could not resolve the two > hosts. Instead it treated the two hosts as a single host and started > throwing UnknownHostException. > We tried a couple of different fixes for this. > Fix 1: Changing configuration files from having a property like: >hadoop.security.key.provider.path > kms://h...@.int..com; > .int..com:16000/kms > to: >hadoop.security.key.provider.path > kms://h...@.int..com:16000/kms > > Fix 2: Building NiFi nar files with the Hadoop version installed in our > system (2.6.0-cdh5.7.0). > Steps followed: > a) Changed the NiFi pom file Hadoop version from 2.6.2 to 2.6.0-cdh5.7.0. > b) Run mvn clean package -DskipTests > c) Copy the following nar files to /opt/nifi-dev/lib > ./nifi-nar-bundles/nifi-hadoop-bundle/nifi-hadoop-nar/target/nifi-hadoop-nar-1.0.0-SNAPSHOT.nar > ./nifi-nar-bundles/nifi-hadoop-libraries-bundle/nifi-hadoop-libraries-nar/target/nifi-hadoop-libraries-nar-1.0.0-SNAPSHOT.nar > ./nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-nar/target/nifi-hbase-nar-1.0.0-SNAPSHOT.nar > ./nifi-nar-bundles/nifi-standard-services/nifi-http-context-map-bundle/nifi-http-context-map-nar/target/nifi-http-context-map-nar-1.0.0-SNAPSHOT.nar > d) Restart NiFi with bin/nifi.sh restart > These fixes resolved the UnknownHostException for us, but we ran into Problem > 2 mentioned below. > Problem 2: Ingesting corrupted data into the HDFS encryption zone > After resolving the UnknownHostException, NiFi was able to ingest files into > the encryption zone, but the content of the file is corrupted. > Approaches: > Tried to simulate the error with a sample Java program which uses similar logic and > the same library, but it ingested files into the encryption zone without any > problem. > Checked NiFi log files to find the cause; found NiFi is making HTTP requests > to KMS to decrypt keys, but could not proceed further as there is no error. > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (NIFI-2601) Splash screen spinner size is too big
[ https://issues.apache.org/jira/browse/NIFI-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman resolved NIFI-2601. --- Resolution: Fixed > Splash screen spinner size is too big > - > > Key: NIFI-2601 > URL: https://issues.apache.org/jira/browse/NIFI-2601 > Project: Apache NiFi > Issue Type: Bug >Reporter: Scott Aslan >Assignee: Scott Aslan >Priority: Critical > Fix For: 1.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-2601) Splash screen spinner size is too big
[ https://issues.apache.org/jira/browse/NIFI-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428272#comment-15428272 ] ASF GitHub Bot commented on NIFI-2601: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/897 Thanks @scottyaslan. This has been merged to master > Splash screen spinner size is too big > - > > Key: NIFI-2601 > URL: https://issues.apache.org/jira/browse/NIFI-2601 > Project: Apache NiFi > Issue Type: Bug >Reporter: Scott Aslan >Assignee: Scott Aslan >Priority: Critical > Fix For: 1.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-2601) Splash screen spinner size is too big
[ https://issues.apache.org/jira/browse/NIFI-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428269#comment-15428269 ] ASF subversion and git services commented on NIFI-2601: --- Commit a181c7b9d70a224e2aae1af830ddb38adbb39a24 in nifi's branch refs/heads/master from [~scottyaslan] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=a181c7b ] [NIFI-2601] Update bower.json to use specific versions. This closes #897 > Splash screen spinner size is too big > - > > Key: NIFI-2601 > URL: https://issues.apache.org/jira/browse/NIFI-2601 > Project: Apache NiFi > Issue Type: Bug >Reporter: Scott Aslan >Assignee: Scott Aslan >Priority: Critical > Fix For: 1.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-2601) Splash screen spinner size is too big
[ https://issues.apache.org/jira/browse/NIFI-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428273#comment-15428273 ] ASF GitHub Bot commented on NIFI-2601: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/897 > Splash screen spinner size is too big > - > > Key: NIFI-2601 > URL: https://issues.apache.org/jira/browse/NIFI-2601 > Project: Apache NiFi > Issue Type: Bug >Reporter: Scott Aslan >Assignee: Scott Aslan >Priority: Critical > Fix For: 1.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi pull request #897: [NIFI-2601] Update bower.json to use specific versio...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/897
[GitHub] nifi pull request #899: Keeping context menu and tooltips open when refreshi...
GitHub user mcgilman opened a pull request: https://github.com/apache/nifi/pull/899 Keeping context menu and tooltips open when refreshing the canvas NIFI-2581: - Keeping context menu and tooltips open when refreshing the canvas. You can merge this pull request into a Git repository by running: $ git pull https://github.com/mcgilman/nifi NIFI-2581 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/899.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #899 commit 4561415c1cb92fdcbc3d4db5290039b23f88f949 Author: Matt Gilman Date: 2016-08-19T14:19:24Z NIFI-2581: - Keeping context menu and tooltips open when refreshing the canvas.
[jira] [Commented] (NIFI-2581) Context menus and tooltips getting hidden after automatic refresh
[ https://issues.apache.org/jira/browse/NIFI-2581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428252#comment-15428252 ] ASF GitHub Bot commented on NIFI-2581: -- GitHub user mcgilman opened a pull request: https://github.com/apache/nifi/pull/899 Keeping context menu and tooltips open when refreshing the canvas NIFI-2581: - Keeping context menu and tooltips open when refreshing the canvas. You can merge this pull request into a Git repository by running: $ git pull https://github.com/mcgilman/nifi NIFI-2581 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/899.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #899 commit 4561415c1cb92fdcbc3d4db5290039b23f88f949 Author: Matt GilmanDate: 2016-08-19T14:19:24Z NIFI-2581: - Keeping context menu and tooltips open when refreshing the canvas. > Context menus and tooltips getting hidden after automatic refresh > - > > Key: NIFI-2581 > URL: https://issues.apache.org/jira/browse/NIFI-2581 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.0.0 >Reporter: Jeff Storck >Assignee: Matt Gilman >Priority: Minor > Fix For: 1.0.0 > > > When the NiFi UI's automatic refresh occurs, if there are context menus > (right-click) or tooltips (for validation and bulletins) visible, they get > hidden when the refresh completes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-2562) PutHDFS writes corrupted data in the transparent disk encryption zone
[ https://issues.apache.org/jira/browse/NIFI-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428242#comment-15428242 ] Joseph Witt commented on NIFI-2562: --- 2.6.0-cdh5.8.0. Could you try that? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-2562) PutHDFS writes corrupted data in the transparent disk encryption zone
[ https://issues.apache.org/jira/browse/NIFI-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428229#comment-15428229 ] Vik commented on NIFI-2562: --- Yes, we are observing it at the end of the message. We don't have any other HDFS client versions. It works fine for non-TDE scenarios in our case and it only fails for TDE scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (NIFI-2562) PutHDFS writes corrupted data in the transparent disk encryption zone
[ https://issues.apache.org/jira/browse/NIFI-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428209#comment-15428209 ] Vik edited comment on NIFI-2562 at 8/19/16 1:51 PM: Here are the screenshots you asked for. Browsing through the data-provenance tab, we could see that within NiFi, up through PutHDFS, the data can be viewed without any corruption (image 1 is proof of that). The latter image shows the data in HDFS, which is corrupted. So we can infer that after the data flows through PutHDFS, and before it is ingested into HDFS, something is fishy, and we can't figure out what it is. At least, not yet. https://issues.apache.org/jira/secure/attachment/12824563/NiFi-PutHDFS.jpg https://issues.apache.org/jira/secure/attachment/12824564/HdfsCorrupted.jpg was (Author: allnamesaretaken): Here are the screenshots you asked for. Browsing through the data-provenance tab, we could see that within NiFi, up through PutHDFS, the data can be viewed without any corruption (image 1 is proof of that). The latter image shows the data in HDFS, which is corrupted. So we can infer that after the data flows through PutHDFS, and before it is ingested into HDFS, something is fishy, and we can't figure out what it is. At least, not yet. "!HdfsCorrupted.jpg" "!NiFi-PutHDFS.jpg" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (NIFI-2562) PutHDFS writes corrupted data in the transparent disk encryption zone
[ https://issues.apache.org/jira/browse/NIFI-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428209#comment-15428209 ] Vik edited comment on NIFI-2562 at 8/19/16 1:50 PM: Here are the screenshots you asked for. Browsing through the data-provenance tab, we could see that within NiFi, up through PutHDFS, the data can be viewed without any corruption (image 1 is proof of that). The latter image shows the data in HDFS, which is corrupted. So we can infer that after the data flows through PutHDFS, and before it is ingested into HDFS, something is fishy, and we can't figure out what it is. At least, not yet. "!HdfsCorrupted.jpg" "!NiFi-PutHDFS.jpg" was (Author: allnamesaretaken): Here are the screenshots you asked for. Browsing through the data-provenance tab, we could see that within NiFi, up through PutHDFS, the data can be viewed without any corruption (image 1 is proof of that). The latter image shows the data in HDFS, which is corrupted. So we can infer that after the data flows through PutHDFS, and before it is ingested into HDFS, something is fishy, and we can't figure out what it is. At least, not yet. !NiFi-PutHDFS.jpg|thumbnail, width=800px! !HdfsCorrupted.jpg|thumbnail, width=800px! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-2562) PutHDFS writes corrupted data in the transparent disk encryption zone
[ https://issues.apache.org/jira/browse/NIFI-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428221#comment-15428221 ] Joseph Witt commented on NIFI-2562: --- Ok, thanks for the screenshots. Is the corruption you're observing consistently at the end of the message? Are there other HDFS client versions for you to try? The NiFi portion of interacting with the client is quite straightforward, and of course we've not seen this happen in non-TDE scenarios, so it seems unlikely, at least for now, that it is a NiFi client implementation issue.
> PutHDFS writes corrupted data in the transparent disk encryption zone
> -
>
> Key: NIFI-2562
> URL: https://issues.apache.org/jira/browse/NIFI-2562
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Affects Versions: 0.6.0
> Reporter: Vik
> Priority: Blocker
> Labels: encryption, security
> Attachments: HdfsCorrupted.jpg, NiFi-PutHDFS.jpg
>
> Problem 1: UnknownHostException
> When NiFi tried to ingest files into an HDFS encryption zone, it threw UnknownHostException.
> Reason: In Hadoop configuration files such as core-site.xml and hdfs-site.xml, the KMS hosts were listed in the format "h...@xxx1.int..com;xxx2.int..com:16000". Because NiFi was using older Hadoop libraries (2.6.2), it could not resolve the two hosts; it treated them as a single host and threw UnknownHostException.
> We tried a couple of different fixes for this.
> Fix 1: Changing the configuration property from:
> hadoop.security.key.provider.path = kms://h...@.int..com;.int..com:16000/kms
> to:
> hadoop.security.key.provider.path = kms://h...@.int..com:16000/kms
> Fix 2: Building the NiFi NAR files against the Hadoop version installed on our system (2.6.0-cdh5.7.0).
> Steps followed:
> a) Changed the Hadoop version in the NiFi pom file from 2.6.2 to 2.6.0-cdh5.7.0.
> b) Ran mvn clean package -DskipTests
> c) Copied the following NAR files to /opt/nifi-dev/lib:
> ./nifi-nar-bundles/nifi-hadoop-bundle/nifi-hadoop-nar/target/nifi-hadoop-nar-1.0.0-SNAPSHOT.nar
> ./nifi-nar-bundles/nifi-hadoop-libraries-bundle/nifi-hadoop-libraries-nar/target/nifi-hadoop-libraries-nar-1.0.0-SNAPSHOT.nar
> ./nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-nar/target/nifi-hbase-nar-1.0.0-SNAPSHOT.nar
> ./nifi-nar-bundles/nifi-standard-services/nifi-http-context-map-bundle/nifi-http-context-map-nar/target/nifi-http-context-map-nar-1.0.0-SNAPSHOT.nar
> d) Restarted NiFi with bin/nifi.sh restart
> These fixes resolved the UnknownHostException for us, but we then ran into Problem 2 below.
> Problem 2: Ingesting corrupted data into the HDFS encryption zone
> After resolving the UnknownHostException, NiFi was able to ingest files into the encryption zone, but the content of the files is corrupted.
> Approaches:
> Tried to simulate the error with a sample Java program that uses similar logic and the same library; it ingested files into the encryption zone without any problem.
> Checked the NiFi log files for the cause; found that NiFi makes HTTP requests to the KMS to decrypt keys, but could not proceed further as there is no error.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
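For reference, Fix 1 above amounts to collapsing the semicolon-separated KMS host list into a single host in the Hadoop client configuration. A sketch of what the core-site.xml property change looks like (host names are elided here exactly as in the report):

```xml
<!-- Before: two KMS hosts separated by ";" - the older Hadoop client
     libraries (2.6.2) parse this as one host name and throw
     UnknownHostException when resolving it -->
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://h...@.int..com;.int..com:16000/kms</value>
</property>

<!-- After (Fix 1): a single KMS host that the 2.6.2 client can resolve -->
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://h...@.int..com:16000/kms</value>
</property>
```

The trade-off of Fix 1 is losing KMS high availability on the client side, which is why Fix 2 (rebuilding the NARs against the cluster's own Hadoop version) was also tried.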
[jira] [Commented] (NIFI-2562) PutHDFS writes corrupted data in the transparent disk encryption zone
[ https://issues.apache.org/jira/browse/NIFI-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428209#comment-15428209 ] Vik commented on NIFI-2562: --- Here are the screenshots you asked for. Browsing through the data-provenance tab, we could see that within NiFi and PutHDFS the data can be viewed without any corruption (the first image is proof of that). The second image shows the data in HDFS, which is corrupted. So we can infer that somewhere after the data flows through PutHDFS, and before it is ingested into HDFS, something goes wrong, and we can't figure out what it is. At least, not yet. !NiFi-PutHDFS.jpg|thumbnail, width=800px! !HdfsCorrupted.jpg|thumbnail, width=800px!
> PutHDFS writes corrupted data in the transparent disk encryption zone
> -
>
> Key: NIFI-2562
> URL: https://issues.apache.org/jira/browse/NIFI-2562
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi issue #655: DataDog support added
Github user JPercivall commented on the issue: https://github.com/apache/nifi/pull/655 @Ramizjon I am currently traveling, but yes, I will perform basic functional testing when I am able. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Updated] (NIFI-2562) PutHDFS writes corrupted data in the transparent disk encryption zone
[ https://issues.apache.org/jira/browse/NIFI-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vik updated NIFI-2562: -- Attachment: HdfsCorrupted.jpg NiFi-PutHDFS.jpg
> PutHDFS writes corrupted data in the transparent disk encryption zone
> -
>
> Key: NIFI-2562
> URL: https://issues.apache.org/jira/browse/NIFI-2562
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-1867) improve ModifyBytes to make it easy to remove all flowfile content
[ https://issues.apache.org/jira/browse/NIFI-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428177#comment-15428177 ] ASF GitHub Bot commented on NIFI-1867: -- Github user jskora commented on the issue: https://github.com/apache/nifi/pull/886 @pvillard31, I committed the .allowableValues change to this and the 1.x pull https://github.com/apache/nifi/pull/890.
> improve ModifyBytes to make it easy to remove all flowfile content
> --
>
> Key: NIFI-1867
> URL: https://issues.apache.org/jira/browse/NIFI-1867
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Affects Versions: 0.6.1
> Reporter: Ben Icore
> Assignee: Joe Skora
>
> Update the ModifyBytes processor to include a "Remove all content" property. This property should default to false so existing functionality is not changed.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi issue #886: NIFI-1867 Improve ModifyBytes to make it easy to remove all...
Github user jskora commented on the issue: https://github.com/apache/nifi/pull/886 @pvillard31, I committed the .allowableValues change to this and the 1.x pull https://github.com/apache/nifi/pull/890.
[jira] [Created] (NIFI-2604) JDBC Connection Pool support for lib directory and expression language
Joseph Witt created NIFI-2604: - Summary: JDBC Connection Pool support for lib directory and expression language Key: NIFI-2604 URL: https://issues.apache.org/jira/browse/NIFI-2604 Project: Apache NiFi Issue Type: Improvement Reporter: Joseph Witt It would be ideal if the JDBC Connection Service supported specifying a directory instead of particular driver jars. It would also be helpful if it accepted expression language statements so that it could refer to a location that is based on variable registry values so it is more portable between environments. This stems from a user list thread titled "adding dependencies like jdbc drivers to the build" on Aug 18 2016 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
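The directory-based idea above could be sketched roughly as follows. This is a hypothetical helper, not NiFi code — the class name, method names, and the notion of resolving the directory path from expression language beforehand are all illustrative assumptions. It collects every jar directly under a configured directory into a URLClassLoader and instantiates a driver from it. Note that DriverManager ignores drivers loaded by child classloaders, so a caller would invoke driver.connect(url, props) directly rather than going through DriverManager.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Driver;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (not NiFi code): load JDBC driver jars from a
// directory rather than pointing at individual jar files.
class DriverDirLoader {

    // Collect URLs for every .jar file directly under the given directory.
    static URL[] jarUrls(Path dir) throws Exception {
        List<URL> urls = new ArrayList<>();
        try (DirectoryStream<Path> jars = Files.newDirectoryStream(dir, "*.jar")) {
            for (Path jar : jars) {
                urls.add(jar.toUri().toURL());
            }
        }
        return urls.toArray(new URL[0]);
    }

    // Load the named Driver class from the jars found in the directory.
    static Driver loadDriver(Path dir, String driverClassName) throws Exception {
        URLClassLoader loader =
                new URLClassLoader(jarUrls(dir), DriverDirLoader.class.getClassLoader());
        return (Driver) Class.forName(driverClassName, true, loader)
                .getDeclaredConstructor().newInstance();
    }
}
```

With expression language support as proposed, the directory argument would be resolved from variable registry values before being handed to a loader like this, which is what makes the flow portable between environments.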
[jira] [Updated] (NIFI-2567) HTTP Site-to-Site can't send data larger than about 7KB via HTTPS
[ https://issues.apache.org/jira/browse/NIFI-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende updated NIFI-2567: -- Fix Version/s: 1.0.0 > HTTP Site-to-Site can't send data larger than about 7KB via HTTPS > - > > Key: NIFI-2567 > URL: https://issues.apache.org/jira/browse/NIFI-2567 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Blocker > Fix For: 1.0.0 > > > HTTP Site-to-Site fails to send data bigger than about 7KB through HTTPS. > Getting data via HTTPS works. It can send large data with HTTP. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (NIFI-2488) Flow history does not distinguish between "No value previously set" and unauthorized
[ https://issues.apache.org/jira/browse/NIFI-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman updated NIFI-2488: -- Resolution: Fixed Status: Resolved (was: Patch Available)
> Flow history does not distinguish between "No value previously set" and unauthorized
> 
>
> Key: NIFI-2488
> URL: https://issues.apache.org/jira/browse/NIFI-2488
> Project: Apache NiFi
> Issue Type: Bug
> Reporter: Joseph Percivall
> Assignee: Jeff Storck
> Fix For: 1.0.0
>
> Attachments: Screen Shot 2016-08-04 at 5.39.26 PM.png
>
> When viewing the flow history, events that a user is not authorized to see still populate the list, but instead of showing "unauthorized" for values they cannot see, they show "No value previously set". Flow history should do a better job of distinguishing between the two cases.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-2601) Splash screen spinner size is too big
[ https://issues.apache.org/jira/browse/NIFI-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428130#comment-15428130 ] ASF GitHub Bot commented on NIFI-2601: -- Github user mcgilman commented on a diff in the pull request: https://github.com/apache/nifi/pull/897#discussion_r75473550 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/frontend/bower.json --- @@ -10,9 +10,12 @@ ], "dependencies": { "font-awesome": "fontawesome#^4.6.1", --- End diff -- Can also rely on a specific version of fontawesome? > Splash screen spinner size is too big > - > > Key: NIFI-2601 > URL: https://issues.apache.org/jira/browse/NIFI-2601 > Project: Apache NiFi > Issue Type: Bug >Reporter: Scott Aslan >Assignee: Scott Aslan >Priority: Critical > Fix For: 1.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi pull request #897: [NIFI-2601] Update bower.json to use specific versio...
Github user mcgilman commented on a diff in the pull request: https://github.com/apache/nifi/pull/897#discussion_r75473550 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/frontend/bower.json --- @@ -10,9 +10,12 @@ ], "dependencies": { "font-awesome": "fontawesome#^4.6.1", --- End diff -- Can also rely on a specific version of fontawesome?
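The pinning being asked about comes down to dropping the caret from the bower.json version spec: `^4.6.1` accepts any compatible 4.x release, while a bare version pins an exact release. A minimal sketch of the dependencies block (the pinned version number here is illustrative, not the value chosen in the PR):

```json
{
  "dependencies": {
    "font-awesome": "fontawesome#4.6.1"
  }
}
```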
[GitHub] nifi issue #883: NIFI-2591 - PutSQL has no handling for Binary data types
Github user patricker commented on the issue: https://github.com/apache/nifi/pull/883 I was thinking about this and had a new idea for a solution. What if I added code that tried to read a new, optional attribute of the form `sql.args.N.format`? For the moment it would be just for binary data, with values like 'hex' or 'ascii' (something like that). But it could be expanded down the road to support things like a more flexible version of my Timestamp PR, so users could optionally provide their own Timestamp format.
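As a rough sketch of the idea proposed above — not actual PutSQL code; the helper, its name, and the defaulting behavior are all assumptions, with only the `sql.args.N.format` attribute and the 'hex'/'ascii' values coming from the comment — the processor could decode the argument's string value into bytes according to the optional format attribute before binding it:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the proposed sql.args.N.format handling for binary
// types: decode the argument's string value into bytes according to the
// optional format ("hex" or "ascii"), defaulting here to ascii.
class BinaryArgDecoder {

    static byte[] decode(String value, String format) {
        if ("hex".equalsIgnoreCase(format)) {
            int len = value.length();
            if (len % 2 != 0) {
                throw new IllegalArgumentException(
                        "hex value must have an even number of digits");
            }
            byte[] out = new byte[len / 2];
            for (int i = 0; i < len; i += 2) {
                // Parse each pair of hex digits into one byte.
                out[i / 2] = (byte) Integer.parseInt(value.substring(i, i + 2), 16);
            }
            return out;
        }
        // "ascii" (or no format attribute): use the raw bytes of the string.
        return value.getBytes(StandardCharsets.US_ASCII);
    }
}
```

A PreparedStatement would then receive the result via something like ps.setBytes(n, BinaryArgDecoder.decode(value, format)), leaving the existing non-binary argument handling untouched.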
[GitHub] nifi issue #897: [NIFI-2601] Update bower.json to use specific versions
Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/897 Reviewing...
[jira] [Commented] (NIFI-2601) Splash screen spinner size is too big
[ https://issues.apache.org/jira/browse/NIFI-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15428126#comment-15428126 ] ASF GitHub Bot commented on NIFI-2601: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/897 Reviewing... > Splash screen spinner size is too big > - > > Key: NIFI-2601 > URL: https://issues.apache.org/jira/browse/NIFI-2601 > Project: Apache NiFi > Issue Type: Bug >Reporter: Scott Aslan >Assignee: Scott Aslan >Priority: Critical > Fix For: 1.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)