[jira] [Updated] (NIFI-9241) Review CORS Security Configuration

2021-09-30 Thread Nathan Gough (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Gough updated NIFI-9241:
---
Fix Version/s: 1.15.0

> Review CORS Security Configuration
> --
>
> Key: NIFI-9241
> URL: https://issues.apache.org/jira/browse/NIFI-9241
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI, Security
>Affects Versions: 1.8.0, 1.14.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 1.15.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The NiFi Web Security Configuration includes a custom CORS Configuration 
> Source that disallows HTTP POST requests for Template Uploads. This works as 
> expected with direct access to the NiFi UI, but causes issues when attempting 
> to upload a template to NiFi through a reverse proxy.
> When a web browser sends a template upload request that includes an 
> unexpected {{Origin}} header, the Spring CORS Filter returns HTTP 403 
> Forbidden with a response body containing the message {{Invalid CORS 
> Request}}.  NIFI-6080 describes a workaround that involves setting a 
> different {{Origin}} header.  The current approach as implemented in 
> NIFI-5595 should be evaluated for potential improvements to avoid this 
> behavior when running NiFi with a reverse proxy.
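As a rough illustration of the failure described above, the decision the CORS filter makes can be sketched in plain Java. This is a hedged sketch of the general pattern only, not NiFi's or Spring's actual code; the method name and the allowed-origin set are illustrative assumptions.

```java
import java.util.Set;

// Sketch of the kind of check that yields the 403 "Invalid CORS Request"
// described above: a template-upload POST whose Origin header is not in the
// allowed set is rejected. Behind a reverse proxy the browser-supplied Origin
// may differ from the origin NiFi expects, so a legitimate request fails.
public class CorsCheckSketch {

    // Returns true when the request would pass the filter.
    static boolean isAllowed(String method, String origin, Set<String> allowedOrigins) {
        if (origin == null) {
            return true; // no Origin header: no CORS check applies
        }
        if (!"POST".equals(method)) {
            return true; // only upload POSTs are restricted in this sketch
        }
        return allowedOrigins.contains(origin);
    }
}
```

With an allowed origin of `https://nifi.internal:8443`, a POST arriving with the proxy's origin would be rejected, matching the behavior reported in NIFI-6080.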



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-9241) Review CORS Security Configuration

2021-09-30 Thread Nathan Gough (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Gough updated NIFI-9241:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Review CORS Security Configuration
> --
>
> Key: NIFI-9241
> URL: https://issues.apache.org/jira/browse/NIFI-9241
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI, Security
>Affects Versions: 1.8.0, 1.14.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The NiFi Web Security Configuration includes a custom CORS Configuration 
> Source that disallows HTTP POST requests for Template Uploads. This works as 
> expected with direct access to the NiFi UI, but causes issues when attempting 
> to upload a template to NiFi through a reverse proxy.
> When a web browser sends a template upload request that includes an 
> unexpected {{Origin}} header, the Spring CORS Filter returns HTTP 403 
> Forbidden with a response body containing the message {{Invalid CORS 
> Request}}.  NIFI-6080 describes a workaround that involves setting a 
> different {{Origin}} header.  The current approach as implemented in 
> NIFI-5595 should be evaluated for potential improvements to avoid this 
> behavior when running NiFi with a reverse proxy.





[GitHub] [nifi] Lehel44 commented on a change in pull request #5423: NIFI-9260 Making the 'write and rename' behaviour optional for PutHDFS

2021-09-30 Thread GitBox


Lehel44 commented on a change in pull request #5423:
URL: https://github.com/apache/nifi/pull/5423#discussion_r719866135



##
File path: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/test/java/org/apache/nifi/processors/hadoop/PutHDFSTest.java
##
@@ -189,35 +193,76 @@ public void testValidators() {
 
     @Test
     public void testPutFile() throws IOException {
-        PutHDFS proc = new TestablePutHDFS(kerberosProperties, mockFileSystem);
-        TestRunner runner = TestRunners.newTestRunner(proc);
-        runner.setProperty(PutHDFS.DIRECTORY, "target/test-classes");
+        // given
+        final FileSystem spyFileSystem = Mockito.spy(mockFileSystem);
+        final PutHDFS proc = new TestablePutHDFS(kerberosProperties, spyFileSystem);
+        final TestRunner runner = TestRunners.newTestRunner(proc);
+        runner.setProperty(PutHDFS.DIRECTORY, TARGET_DIRECTORY);
         runner.setProperty(PutHDFS.CONFLICT_RESOLUTION, "replace");
-        try (FileInputStream fis = new FileInputStream("src/test/resources/testdata/randombytes-1")) {
-            Map<String, String> attributes = new HashMap<>();
-            attributes.put(CoreAttributes.FILENAME.key(), "randombytes-1");
+
+        // when
+        try (final FileInputStream fis = new FileInputStream(SOURCE_DIRECTORY + "/" + FILE_NAME)) {
+            final Map<String, String> attributes = new HashMap<>();
+            attributes.put(CoreAttributes.FILENAME.key(), FILE_NAME);
             runner.enqueue(fis, attributes);
             runner.run();
         }
 
-        List<MockFlowFile> failedFlowFiles = runner
-                .getFlowFilesForRelationship(new Relationship.Builder().name("failure").build());
+        // then
+        final List<MockFlowFile> failedFlowFiles = runner.getFlowFilesForRelationship(PutHDFS.REL_FAILURE);
         assertTrue(failedFlowFiles.isEmpty());
 
-        List<MockFlowFile> flowFiles = runner.getFlowFilesForRelationship(PutHDFS.REL_SUCCESS);
+        final List<MockFlowFile> flowFiles = runner.getFlowFilesForRelationship(PutHDFS.REL_SUCCESS);
         assertEquals(1, flowFiles.size());
-        MockFlowFile flowFile = flowFiles.get(0);
-        assertTrue(mockFileSystem.exists(new Path("target/test-classes/randombytes-1")));
-        assertEquals("randombytes-1", flowFile.getAttribute(CoreAttributes.FILENAME.key()));
-        assertEquals("target/test-classes", flowFile.getAttribute(PutHDFS.ABSOLUTE_HDFS_PATH_ATTRIBUTE));
+
+        final MockFlowFile flowFile = flowFiles.get(0);
+        assertTrue(spyFileSystem.exists(new Path(TARGET_DIRECTORY + "/" + FILE_NAME)));
+        assertEquals(FILE_NAME, flowFile.getAttribute(CoreAttributes.FILENAME.key()));
+        assertEquals(TARGET_DIRECTORY, flowFile.getAttribute(PutHDFS.ABSOLUTE_HDFS_PATH_ATTRIBUTE));
         assertEquals("true", flowFile.getAttribute(PutHDFS.TARGET_HDFS_DIR_CREATED_ATTRIBUTE));
 
        final List<ProvenanceEventRecord> provenanceEvents = runner.getProvenanceEvents();
        assertEquals(1, provenanceEvents.size());
        final ProvenanceEventRecord sendEvent = provenanceEvents.get(0);
        assertEquals(ProvenanceEventType.SEND, sendEvent.getEventType());
        // If it runs with a real HDFS, the protocol will be "hdfs://", but with a local filesystem, just assert the filename.
-        assertTrue(sendEvent.getTransitUri().endsWith("target/test-classes/randombytes-1"));
+        assertTrue(sendEvent.getTransitUri().endsWith(TARGET_DIRECTORY + "/" + FILE_NAME));
+
+        Mockito.verify(spyFileSystem, Mockito.times(1)).rename(Mockito.any(Path.class), Mockito.any(Path.class));
+    }
+
+    @Test
+    public void testPutFileWithSimpleWrite() throws IOException {
+        // given
+        final FileSystem spyFileSystem = Mockito.spy(mockFileSystem);
+        final PutHDFS proc = new TestablePutHDFS(kerberosProperties, spyFileSystem);
+        final TestRunner runner = TestRunners.newTestRunner(proc);
+        runner.setProperty(PutHDFS.DIRECTORY, TARGET_DIRECTORY);
+        runner.setProperty(PutHDFS.CONFLICT_RESOLUTION, "replace");
+        runner.setProperty(PutHDFS.WRITING_STRATEGY, PutHDFS.SIMPLE_WRITE);
+
+        // when
+        try (final FileInputStream fis = new FileInputStream(SOURCE_DIRECTORY + "/" + FILE_NAME)) {
+            final Map<String, String> attributes = new HashMap<>();
+            attributes.put(CoreAttributes.FILENAME.key(), FILE_NAME);

Review comment:
   ```suggestion
   final Map<String, String> attributes = Collections.singletonMap(CoreAttributes.FILENAME.key(), FILE_NAME);
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] Lehel44 commented on a change in pull request #5423: NIFI-9260 Making the 'write and rename' behaviour optional for PutHDFS

2021-09-30 Thread GitBox


Lehel44 commented on a change in pull request #5423:
URL: https://github.com/apache/nifi/pull/5423#discussion_r719865861



##
File path: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/test/java/org/apache/nifi/processors/hadoop/PutHDFSTest.java
##
@@ -189,35 +193,76 @@ public void testValidators() {
 
     @Test
     public void testPutFile() throws IOException {
-        PutHDFS proc = new TestablePutHDFS(kerberosProperties, mockFileSystem);
-        TestRunner runner = TestRunners.newTestRunner(proc);
-        runner.setProperty(PutHDFS.DIRECTORY, "target/test-classes");
+        // given
+        final FileSystem spyFileSystem = Mockito.spy(mockFileSystem);
+        final PutHDFS proc = new TestablePutHDFS(kerberosProperties, spyFileSystem);
+        final TestRunner runner = TestRunners.newTestRunner(proc);
+        runner.setProperty(PutHDFS.DIRECTORY, TARGET_DIRECTORY);
         runner.setProperty(PutHDFS.CONFLICT_RESOLUTION, "replace");
-        try (FileInputStream fis = new FileInputStream("src/test/resources/testdata/randombytes-1")) {
-            Map<String, String> attributes = new HashMap<>();
-            attributes.put(CoreAttributes.FILENAME.key(), "randombytes-1");
+
+        // when
+        try (final FileInputStream fis = new FileInputStream(SOURCE_DIRECTORY + "/" + FILE_NAME)) {
+            final Map<String, String> attributes = new HashMap<>();
+            attributes.put(CoreAttributes.FILENAME.key(), FILE_NAME);
             runner.enqueue(fis, attributes);
             runner.run();
         }
 
-        List<MockFlowFile> failedFlowFiles = runner
-                .getFlowFilesForRelationship(new Relationship.Builder().name("failure").build());
+        // then
+        final List<MockFlowFile> failedFlowFiles = runner.getFlowFilesForRelationship(PutHDFS.REL_FAILURE);
        assertTrue(failedFlowFiles.isEmpty());
 
-        List<MockFlowFile> flowFiles = runner.getFlowFilesForRelationship(PutHDFS.REL_SUCCESS);
+        final List<MockFlowFile> flowFiles = runner.getFlowFilesForRelationship(PutHDFS.REL_SUCCESS);
        assertEquals(1, flowFiles.size());
-        MockFlowFile flowFile = flowFiles.get(0);
-        assertTrue(mockFileSystem.exists(new Path("target/test-classes/randombytes-1")));
-        assertEquals("randombytes-1", flowFile.getAttribute(CoreAttributes.FILENAME.key()));
-        assertEquals("target/test-classes", flowFile.getAttribute(PutHDFS.ABSOLUTE_HDFS_PATH_ATTRIBUTE));
+
+        final MockFlowFile flowFile = flowFiles.get(0);
+        assertTrue(spyFileSystem.exists(new Path(TARGET_DIRECTORY + "/" + FILE_NAME)));
+        assertEquals(FILE_NAME, flowFile.getAttribute(CoreAttributes.FILENAME.key()));
+        assertEquals(TARGET_DIRECTORY, flowFile.getAttribute(PutHDFS.ABSOLUTE_HDFS_PATH_ATTRIBUTE));
        assertEquals("true", flowFile.getAttribute(PutHDFS.TARGET_HDFS_DIR_CREATED_ATTRIBUTE));
 
        final List<ProvenanceEventRecord> provenanceEvents = runner.getProvenanceEvents();
        assertEquals(1, provenanceEvents.size());
        final ProvenanceEventRecord sendEvent = provenanceEvents.get(0);
        assertEquals(ProvenanceEventType.SEND, sendEvent.getEventType());
        // If it runs with a real HDFS, the protocol will be "hdfs://", but with a local filesystem, just assert the filename.
-        assertTrue(sendEvent.getTransitUri().endsWith("target/test-classes/randombytes-1"));
+        assertTrue(sendEvent.getTransitUri().endsWith(TARGET_DIRECTORY + "/" + FILE_NAME));
+
+        Mockito.verify(spyFileSystem, Mockito.times(1)).rename(Mockito.any(Path.class), Mockito.any(Path.class));
+    }
+
+    @Test
+    public void testPutFileWithSimpleWrite() throws IOException {
+        // given
+        final FileSystem spyFileSystem = Mockito.spy(mockFileSystem);
+        final PutHDFS proc = new TestablePutHDFS(kerberosProperties, spyFileSystem);
+        final TestRunner runner = TestRunners.newTestRunner(proc);
+        runner.setProperty(PutHDFS.DIRECTORY, TARGET_DIRECTORY);
+        runner.setProperty(PutHDFS.CONFLICT_RESOLUTION, "replace");

Review comment:
   ```suggestion
   runner.setProperty(PutHDFS.CONFLICT_RESOLUTION, PutHDFS.REPLACE_RESOLUTION);
   ```








[GitHub] [nifi] Lehel44 commented on a change in pull request #5423: NIFI-9260 Making the 'write and rename' behaviour optional for PutHDFS

2021-09-30 Thread GitBox


Lehel44 commented on a change in pull request #5423:
URL: https://github.com/apache/nifi/pull/5423#discussion_r719864892



##
File path: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/test/java/org/apache/nifi/processors/hadoop/PutHDFSTest.java
##
@@ -189,35 +193,76 @@ public void testValidators() {
 
     @Test
     public void testPutFile() throws IOException {
-        PutHDFS proc = new TestablePutHDFS(kerberosProperties, mockFileSystem);
-        TestRunner runner = TestRunners.newTestRunner(proc);
-        runner.setProperty(PutHDFS.DIRECTORY, "target/test-classes");
+        // given
+        final FileSystem spyFileSystem = Mockito.spy(mockFileSystem);
+        final PutHDFS proc = new TestablePutHDFS(kerberosProperties, spyFileSystem);
+        final TestRunner runner = TestRunners.newTestRunner(proc);
+        runner.setProperty(PutHDFS.DIRECTORY, TARGET_DIRECTORY);
         runner.setProperty(PutHDFS.CONFLICT_RESOLUTION, "replace");
-        try (FileInputStream fis = new FileInputStream("src/test/resources/testdata/randombytes-1")) {
-            Map<String, String> attributes = new HashMap<>();
-            attributes.put(CoreAttributes.FILENAME.key(), "randombytes-1");
+
+        // when
+        try (final FileInputStream fis = new FileInputStream(SOURCE_DIRECTORY + "/" + FILE_NAME)) {
+            final Map<String, String> attributes = new HashMap<>();
+            attributes.put(CoreAttributes.FILENAME.key(), FILE_NAME);

Review comment:
   ```suggestion
   final Map<String, String> attributes = Collections.singletonMap(CoreAttributes.FILENAME.key(), FILE_NAME);
   ```








[GitHub] [nifi] Lehel44 commented on a change in pull request #5423: NIFI-9260 Making the 'write and rename' behaviour optional for PutHDFS

2021-09-30 Thread GitBox


Lehel44 commented on a change in pull request #5423:
URL: https://github.com/apache/nifi/pull/5423#discussion_r719863600



##
File path: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java
##
@@ -365,9 +388,12 @@ public void process(InputStream in) throws IOException {
                             final long millis = stopWatch.getDuration(TimeUnit.MILLISECONDS);
                             tempDotCopyFile = tempCopyFile;
 
-                            if (!conflictResponse.equals(APPEND_RESOLUTION)
-                                    || (conflictResponse.equals(APPEND_RESOLUTION) && !destinationExists)) {
+                            if  (
+                                writingStrategy.equals(WRITE_AND_RENAME)
+                                && (!conflictResponse.equals(APPEND_RESOLUTION) || (conflictResponse.equals(APPEND_RESOLUTION) && !destinationExists))

Review comment:
   This can be simplified to:
   ```suggestion
   writingStrategy.equals(WRITE_AND_RENAME) && (!conflictResponse.equals(APPEND_RESOLUTION) || !destinationExists)
   ```
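The suggested simplification can be checked exhaustively. Writing A for `conflictResponse.equals(APPEND_RESOLUTION)` and B for `destinationExists`, the claim is that `!A || (A && !B)` is equivalent to `!A || !B`. A throwaway sketch (not part of the PR) confirming it:

```java
// Exhaustive equivalence check over all boolean inputs for the reviewer's
// proposed simplification: !A || (A && !B)  ==  !A || !B.
public class ConditionEquivalence {

    // Original sub-condition from the diff under review.
    static boolean original(boolean a, boolean b) {
        return !a || (a && !b);
    }

    // Simplified form proposed in the review comment.
    static boolean simplified(boolean a, boolean b) {
        return !a || !b;
    }

    public static void main(String[] args) {
        for (boolean a : new boolean[]{false, true}) {
            for (boolean b : new boolean[]{false, true}) {
                if (original(a, b) != simplified(a, b)) {
                    throw new AssertionError("not equivalent at a=" + a + ", b=" + b);
                }
            }
        }
        System.out.println("equivalent for all inputs");
    }
}
```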








[jira] [Commented] (NIFI-9241) Review CORS Security Configuration

2021-09-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17423058#comment-17423058
 ] 

ASF subversion and git services commented on NIFI-9241:
---

Commit e16a6c2b89879034be65cca56b33724914b54033 in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=e16a6c2 ]

NIFI-9241 Refactored CSRF mitigation using random Request-Token

- Replaced use of Authorization header with custom Request-Token header for 
CSRF mitigation
- Added Request-Token cookie for CSRF mitigation
- Replaced session storage of JWT with expiration in seconds
- Removed and disabled CORS configuration
- Disabled HTTP OPTIONS method
- Refactored HTTP Proxy URI construction using RequestUriBuilder

Signed-off-by: Nathan Gough 

This closes #5417.
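The Request-Token change described above follows the general double-submit pattern: the server issues a random token as a cookie, the client echoes it back in a custom header, and the two must match. A minimal sketch of that comparison, with illustrative names rather than NiFi's actual classes:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of double-submit token validation: the value of the Request-Token
// header must match the value of the Request-Token cookie. An attacker's
// cross-site request can force the cookie to be sent but cannot read it,
// so it cannot supply a matching header.
public class RequestTokenCheckSketch {

    static boolean isTokenValid(String headerToken, String cookieToken) {
        if (headerToken == null || cookieToken == null) {
            return false; // both values are required
        }
        // Constant-time comparison avoids leaking token contents via timing.
        return MessageDigest.isEqual(
                headerToken.getBytes(StandardCharsets.UTF_8),
                cookieToken.getBytes(StandardCharsets.UTF_8));
    }
}
```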


> Review CORS Security Configuration
> --
>
> Key: NIFI-9241
> URL: https://issues.apache.org/jira/browse/NIFI-9241
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI, Security
>Affects Versions: 1.8.0, 1.14.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The NiFi Web Security Configuration includes a custom CORS Configuration 
> Source that disallows HTTP POST requests for Template Uploads. This works as 
> expected with direct access to the NiFi UI, but causes issues when attempting 
> to upload a template to NiFi through a reverse proxy.
> When a web browser sends a template upload request that includes an 
> unexpected {{Origin}} header, the Spring CORS Filter returns HTTP 403 
> Forbidden with a response body containing the message {{Invalid CORS 
> Request}}.  NIFI-6080 describes a workaround that involves setting a 
> different {{Origin}} header.  The current approach as implemented in 
> NIFI-5595 should be evaluated for potential improvements to avoid this 
> behavior when running NiFi with a reverse proxy.





[GitHub] [nifi] Lehel44 commented on a change in pull request #5423: NIFI-9260 Making the 'write and rename' behaviour optional for PutHDFS

2021-09-30 Thread GitBox


Lehel44 commented on a change in pull request #5423:
URL: https://github.com/apache/nifi/pull/5423#discussion_r719863380



##
File path: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java
##
@@ -365,9 +388,12 @@ public void process(InputStream in) throws IOException {
                             final long millis = stopWatch.getDuration(TimeUnit.MILLISECONDS);
                             tempDotCopyFile = tempCopyFile;
 
-                            if (!conflictResponse.equals(APPEND_RESOLUTION)
-                                    || (conflictResponse.equals(APPEND_RESOLUTION) && !destinationExists)) {
+                            if  (
+                                writingStrategy.equals(WRITE_AND_RENAME)
Review comment:
   This can be simplified to:
   ```suggestion
   writingStrategy.equals(WRITE_AND_RENAME) && (!conflictResponse.equals(APPEND_RESOLUTION) || !destinationExists)
   ```
   








[GitHub] [nifi] thenatog closed pull request #5417: NIFI-9241 Refactor CSRF mitigation using random Request-Token

2021-09-30 Thread GitBox


thenatog closed pull request #5417:
URL: https://github.com/apache/nifi/pull/5417


   






[GitHub] [nifi] Lehel44 commented on a change in pull request #5423: NIFI-9260 Making the 'write and rename' behaviour optional for PutHDFS

2021-09-30 Thread GitBox


Lehel44 commented on a change in pull request #5423:
URL: https://github.com/apache/nifi/pull/5423#discussion_r719858254



##
File path: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java
##
@@ -269,6 +287,11 @@ public Object run() {
                 final Path tempCopyFile = new Path(dirPath, "." + filename);
                 final Path copyFile = new Path(dirPath, filename);
 
+                // Depending on the writing strategy, we might need a temporary file
+                final Path actualCopyFile = (writingStrategy.equals(WRITE_AND_RENAME))

Review comment:
   Since temporary files are now only created if the WRITE AND RENAME 
strategy is chosen, I believe the comment on 335 is worth extending.








[GitHub] [nifi] github-actions[bot] commented on pull request #5038: NIFI-8498: Optional removal of fields with UpdateRecord

2021-09-30 Thread GitBox


github-actions[bot] commented on pull request #5038:
URL: https://github.com/apache/nifi/pull/5038#issuecomment-931791857


   We're marking this PR as stale due to lack of updates in the past few 
months. If after another couple of weeks the stale label has not been removed, 
this PR will be closed. This stale marker and eventual auto-close do not 
indicate a judgement of the PR, just a lack of reviewer bandwidth, and help us 
keep the PR queue more manageable. If you would like this PR re-opened, you can 
do so and a committer can remove the stale tag. Or you can open a new PR. Try 
to help review other PRs to increase PR review bandwidth, which in turn helps 
yours.






[GitHub] [nifi] Lehel44 commented on a change in pull request #5423: NIFI-9260 Making the 'write and rename' behaviour optional for PutHDFS

2021-09-30 Thread GitBox


Lehel44 commented on a change in pull request #5423:
URL: https://github.com/apache/nifi/pull/5423#discussion_r719843019



##
File path: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java
##
@@ -129,6 +137,15 @@
             .allowableValues(REPLACE_RESOLUTION_AV, IGNORE_RESOLUTION_AV, FAIL_RESOLUTION_AV, APPEND_RESOLUTION_AV)
             .build();
 
+    protected static final PropertyDescriptor WRITING_STRATEGY = new PropertyDescriptor.Builder()
+            .name("writing-strategy")
+            .displayName("Writing Strategy")
+            .description("Defines the approach for writing the FlowFile data.")

Review comment:
   Minor: I'd advise using "method" rather than "approach" here because it 
refers to the process.








[GitHub] [nifi] Lehel44 commented on a change in pull request #5423: NIFI-9260 Making the 'write and rename' behaviour optional for PutHDFS

2021-09-30 Thread GitBox


Lehel44 commented on a change in pull request #5423:
URL: https://github.com/apache/nifi/pull/5423#discussion_r719841369



##
File path: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java
##
@@ -121,6 +124,11 @@
     protected static final AllowableValue APPEND_RESOLUTION_AV = new AllowableValue(APPEND_RESOLUTION, APPEND_RESOLUTION,
             "Appends to the existing file if any, creates a new file otherwise.");
 
+    protected static final AllowableValue WRITE_AND_RENAME_AV = new AllowableValue(WRITE_AND_RENAME, "Write and rename",
+            "The processor writes FlowFile data into a temporary file and renames it after completion. This might prevent other processes from reading half-written files.");

Review comment:
   Do you think "partially written" would make more sense here?
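The write-and-rename strategy under discussion can be sketched with plain `java.nio` (the PR itself uses the HDFS `FileSystem` API; this is only an illustration of the pattern): data goes into a hidden dot-prefixed temporary file and is renamed to its final name only once the write completes, so readers never see a partially written file.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative write-and-rename: write to ".<filename>" first, then atomically
// move it to "<filename>". Helper names are assumptions, not NiFi code.
public class WriteAndRenameSketch {

    static void write(Path directory, String filename, byte[] data) {
        try {
            final Path tempFile = directory.resolve("." + filename);
            final Path finalFile = directory.resolve(filename);
            Files.write(tempFile, data);
            // The "rename after completion" step; atomic within one filesystem.
            Files.move(tempFile, finalFile, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Writes sample bytes into a fresh temp directory and returns the final path.
    static Path demo() {
        try {
            final Path dir = Files.createTempDirectory("write-rename-demo");
            write(dir, "sample.bin", new byte[]{1, 2, 3});
            return dir.resolve("sample.bin");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The simple-write strategy added by the PR skips the temporary file, which matters for destinations where renames are expensive or unsupported.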








[GitHub] [nifi] thenatog commented on pull request #5417: NIFI-9241 Refactor CSRF mitigation using random Request-Token

2021-09-30 Thread GitBox


thenatog commented on pull request #5417:
URL: https://github.com/apache/nifi/pull/5417#issuecomment-931776405


   I tested this out with a standalone node using X509 and LDAP with an nginx 
reverse proxy. I also tested on a secure cluster with X509 and LDAP without the 
reverse proxy. Everything appears to be working.






[GitHub] [nifi] markap14 commented on pull request #5391: NIFI-9174: Adding AWS SecretsManager ParamValueProvider for Stateless

2021-09-30 Thread GitBox


markap14 commented on pull request #5391:
URL: https://github.com/apache/nifi/pull/5391#issuecomment-931741211


   @gresockj @exceptionfactory thanks for the feedback. Yes, I agree there 
could be some benefit to also allowing for a 'plaintext' approach, but I also 
think we should choose an approach to start and say this is how it works. If a 
need arises to allow for plaintext, we can look into it then. I do not think we 
should attempt to parse JSON and if it fails, fallback to 'plaintext', though. 
Instead, it makes sense to me to either have two separate Provider 
implementations or to have a configuration option in the Provider that says how 
to handle the data. A really good case to consider is Google Cloud credentials: 
you generally will be given a JSON document that contains a Base64-encoded 
certificate. So we have a secret JSON document, which would cause a lot of 
confusion if we attempted to split it into separate key/value pairs.






[jira] [Created] (NIFI-9263) Make HashiCorpVault Param Value Provider consistent

2021-09-30 Thread Joe Gresock (Jira)
Joe Gresock created NIFI-9263:
-

 Summary: Make HashiCorpVault Param Value Provider consistent
 Key: NIFI-9263
 URL: https://issues.apache.org/jira/browse/NIFI-9263
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Joe Gresock
Assignee: Joe Gresock
 Fix For: 1.15.0


The HashiCorpVaultParameterValueProvider for Stateless NiFi should be updated 
to be in line with the expected behavior of ParameterValueProviders:
* It should return null if the secret or value could not be found
* It should use a default context if a matching parameter could not be found in 
a specific context
* It should map each Parameter Context to a single Vault secret, with a 
key/value pair for each set of parameters
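The three expected behaviors can be sketched with the secret store modeled as a Map of context name to key/value pairs standing in for Vault secrets. Names here are illustrative assumptions, not the actual HashiCorpVaultParameterValueProvider API:

```java
import java.util.Map;

// Sketch of the lookup contract described in the bullets above: one secret per
// Parameter Context, null when nothing is found, and a default-context fallback.
public class VaultLookupSketch {

    static final String DEFAULT_CONTEXT = "Default";

    static String getParameterValue(Map<String, Map<String, String>> secrets,
                                    String contextName, String parameterName) {
        // Each Parameter Context maps to a single secret.
        final Map<String, String> secret = secrets.get(contextName);
        if (secret != null && secret.containsKey(parameterName)) {
            return secret.get(parameterName);
        }
        // Fall back to a default context when the specific one has no match.
        final Map<String, String> fallback = secrets.get(DEFAULT_CONTEXT);
        if (fallback != null && fallback.containsKey(parameterName)) {
            return fallback.get(parameterName);
        }
        // Return null when the secret or value cannot be found.
        return null;
    }
}
```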



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] gresockj commented on pull request #5391: NIFI-9174: Adding AWS SecretsManager ParamValueProvider for Stateless

2021-09-30 Thread GitBox


gresockj commented on pull request #5391:
URL: https://github.com/apache/nifi/pull/5391#issuecomment-931597824


   > > Thoughts on this approach?
   > 
   > @markap14 Thanks for describing some options for implementation, 
particularly in relation to the AWS Secrets Manager web user interface. PR 
#5410 for NIFI-9221 provides similar capabilities for NiFi Sensitive 
Properties. Using a plain string is easier to handle in code since it avoids 
the need for JSON parsing, but it also looks like the AWS Secrets Manager UI 
encourages using a JSON object as the default representation for generic secret 
values.
   > 
   > Going with the JSON object approach provides the ability to store multiple 
keys and values in a single Secret as you described, which could be useful. On 
the other hand, requiring a JSON object representation would break use cases 
where the Secret is a simple string. Without getting too complicated, a 
potential hybrid approach might be to attempt JSON parsing, and otherwise 
return the plain string, at least in the case of the NiFi Sensitive Property 
Provider for NIFI-9221.
   > 
   > Either way, it would be helpful to have a consistent approach, even though 
these are different use cases.
   
   An additional benefit of the JSON approach is that it would store fewer 
secrets (less cost).  In the case of the AWS Secrets Manager Sensitive Property 
Provider, in order to stay consistent we could map the 
`ProtectedPropertyContext.contextName` to the Secret name, and 
`ProtectedPropertyContext.propertyName` to the key within the secret.
   
   As for allowing Parameter Contexts to be arbitrarily mapped to different 
Secret names, @markap14, I'm going to suggest we go for simplicity here and 
simply enforce that a Parameter Context represents exactly one Secret, and so 
its name would become the Secret name.
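To make the proposed mapping concrete: one Parameter Context's parameters become a single Secret whose value is one JSON object, with the context name as the Secret name. The JSON here is hand-rolled for brevity and illustration only; a real implementation would use a JSON library with proper escaping.

```java
import java.util.Map;
import java.util.StringJoiner;

// Illustrative only: serializes a context's parameters into the single JSON
// object that would be stored as the Secret value, so fewer secrets are needed.
public class ContextSecretSketch {

    static String toSecretJson(Map<String, String> parameters) {
        final StringJoiner json = new StringJoiner(",", "{", "}");
        parameters.forEach((key, value) -> json.add("\"" + key + "\":\"" + value + "\""));
        return json.toString();
    }
}
```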






[GitHub] [nifi] exceptionfactory commented on pull request #5391: NIFI-9174: Adding AWS SecretsManager ParamValueProvider for Stateless

2021-09-30 Thread GitBox


exceptionfactory commented on pull request #5391:
URL: https://github.com/apache/nifi/pull/5391#issuecomment-931591215


   > Thoughts on this approach?
   
   @markap14 Thanks for describing some options for implementation, 
particularly in relation to the AWS Secrets Manager web user interface.  PR 
#5410 for NIFI-9221 provides similar capabilities for NiFi Sensitive 
Properties.  Using a plain string is easier to handle in code since it avoids 
the need for JSON parsing, but it also looks like the AWS Secrets Manager UI 
encourages using a JSON object as the default representation for generic secret 
values.
   
   Going with the JSON object approach provides the ability to store multiple 
keys and values in a single Secret as you described, which could be useful. On 
the other hand, requiring a JSON object representation would break use cases 
where the Secret is a simple string.  Without getting too complicated, a 
potential hybrid approach might be to attempt JSON parsing, and otherwise 
return the plain string, at least in the case of the NiFi Sensitive Property 
Provider for NIFI-9221.
   
   Either way, it would be helpful to have a consistent approach, even though 
these are different use cases.
   






[jira] [Updated] (NIFI-9223) ListenSyslog cannot listen on 0.0.0.0

2021-09-30 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-9223:
---
Status: Patch Available  (was: Open)

Thanks for reporting this issue [~bcharron], I have submitted GitHub Pull 
Request 5426 to restore the expected default behavior of {{ListenSyslog}}.

> ListenSyslog cannot listen on 0.0.0.0
> -
>
> Key: NIFI-9223
> URL: https://issues.apache.org/jira/browse/NIFI-9223
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2, 1.14.0
>Reporter: Benjamin Charron
>Assignee: David Handermann
>Priority: Major
>  Labels: ListenSyslog, Netty
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As of 1.13.2, ListenSyslog now defaults to listening on 127.0.0.1. It 
> previously listened on 0.0.0.0.
> It is no longer possible to listen on 0.0.0.0 because the processor only 
> supports specifying a network interface, via "Local Network Interface", 
> rather than a network address.
> I think the default should be reverted to 0.0.0.0, as it caught us off-guard 
> during upgrade:
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenSyslog.java#L197



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] exceptionfactory opened a new pull request #5426: NIFI-9223 Correct ListenSyslog with default address of 0.0.0.0

2021-09-30 Thread GitBox


exceptionfactory opened a new pull request #5426:
URL: https://github.com/apache/nifi/pull/5426


    Description of PR
   
   NIFI-9223 Corrects the behavior of `ListenSyslog` to listen on `0.0.0.0` 
when the `Local Network Interface` property is not configured. This was an 
unexpected change in the default behavior released in NiFi 1.13.2 as part of PR 
#5044 for NIFI-8462.
   
   Changes include modifying the `NettyEventServerFactory` and sub-classes to 
accept a nullable `InetAddress` argument instead of a `String` for the 
listening address.
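   
   As a rough illustration of the fix (hypothetical helper, not the actual `NettyEventServerFactory` code): with a nullable `InetAddress`, a null value can map to the wildcard address so the server listens on all interfaces.
   
   ```java
   import java.net.InetAddress;
   import java.net.InetSocketAddress;

   public class ListenAddressResolver {
       // Returns the socket address to bind: the configured address when present,
       // otherwise the wildcard address (0.0.0.0) so all interfaces are used.
       public static InetSocketAddress getBindAddress(final InetAddress configured, final int port) {
           return configured == null
                   ? new InetSocketAddress(port)          // wildcard: 0.0.0.0
                   : new InetSocketAddress(configured, port);
       }

       public static void main(final String[] args) {
           assert getBindAddress(null, 514).getAddress().isAnyLocalAddress();
       }
   }
   ```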
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [X] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [X] Have you written or updated unit tests to verify your changes?
   - [X] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[jira] [Commented] (NIFI-9056) Content Repository Filling Up

2021-09-30 Thread Andrew Heys (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17422956#comment-17422956
 ] 

Andrew Heys commented on NIFI-9056:
---

Just to follow-up, I was able to stand up the test environment which captured 
the issue with content claims not being removed. It was indeed stemming from a 
custom processor using the clone() followed by a write(). After deploying a 
code change, I can see that it is no longer an issue. Thank you for the help 
with this.

> Content Repository Filling Up
> -
>
> Key: NIFI-9056
> URL: https://issues.apache.org/jira/browse/NIFI-9056
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.13.2
>Reporter: Andrew Heys
>Priority: Critical
> Fix For: 1.15.0
>
>
> We have a clustered nifi setup that has recently been upgraded to 1.13.2 from 
> 1.11.4. After upgrading, one of the issues we have run into is that the 
> Content Repository will fill up to the 
> nifi.content.repository.archive.backpressure.percentage mark and lock the 
> processing & canvas. The only solution is to restart nifi at this point. We 
> have the following properties set:
> nifi.content.repository.archive.backpressure.percentage=95%
> nifi.content.repository.archive.max.usage.percentage=25%
> nifi.content.repository.archive.max.retention.period=2 hours
> The max usage property seems to be completely ignored. Monitoring the nifi 
> cluster disk % for the content repository shows that it slowly fills up over time 
> and never decreases. If we pause the input to the entire nifi flow and let all 
> the processing clear out with 0 flowfiles remaining on the canvas for 15+ 
> minutes, the content repository disk usage does not decrease. Currently, our 
> only solution is to restart nifi on a daily cron schedule. After restarting 
> the nifi, it will clear out the 80+ GB of the content repository and usage 
> falls down to 0%. 
>  
> There seems to be an issue removing the older content claims in 1.13.2.
> Thanks!





[GitHub] [nifi] exceptionfactory commented on a change in pull request #5412: NIFI-9239: Updated Consume/Publish Kafka processors to support Exactl…

2021-09-30 Thread GitBox


exceptionfactory commented on a change in pull request #5412:
URL: https://github.com/apache/nifi/pull/5412#discussion_r719626419



##
File path: 
nifi-nar-bundles/nifi-stateless-processor-bundle/nifi-stateless-processor-tests/pom.xml
##
@@ -0,0 +1,174 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- Apache License 2.0 header -->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>nifi-stateless-processor-bundle</artifactId>
+        <groupId>org.apache.nifi</groupId>
+        <version>1.15.0-SNAPSHOT</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>nifi-stateless-processor-tests</artifactId>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <version>1.15.0-SNAPSHOT</version>
+            <scope>compile</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-framework-api</artifactId>
+            <version>1.15.0-SNAPSHOT</version>
+            <scope>compile</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-server-api</artifactId>
+            <version>1.15.0-SNAPSHOT</version>
+            <scope>compile</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-runtime</artifactId>
+            <version>1.15.0-SNAPSHOT</version>
+            <scope>compile</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-nar-utils</artifactId>
+            <version>1.15.0-SNAPSHOT</version>
+            <scope>compile</scope>
+        </dependency>
+    </dependencies>
+</project>

Review comment:
   Sorry for the confusion, I mixed up the context of this Maven 
configuration.








[GitHub] [nifi] markap14 commented on a change in pull request #5412: NIFI-9239: Updated Consume/Publish Kafka processors to support Exactl…

2021-09-30 Thread GitBox


markap14 commented on a change in pull request #5412:
URL: https://github.com/apache/nifi/pull/5412#discussion_r719603059



##
File path: 
nifi-nar-bundles/nifi-stateless-processor-bundle/nifi-stateless-processor-tests/pom.xml
##
@@ -0,0 +1,174 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- Apache License 2.0 header -->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>nifi-stateless-processor-bundle</artifactId>
+        <groupId>org.apache.nifi</groupId>
+        <version>1.15.0-SNAPSHOT</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>nifi-stateless-processor-tests</artifactId>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <version>1.15.0-SNAPSHOT</version>
+            <scope>compile</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-framework-api</artifactId>
+            <version>1.15.0-SNAPSHOT</version>
+            <scope>compile</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-server-api</artifactId>
+            <version>1.15.0-SNAPSHOT</version>
+            <scope>compile</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-runtime</artifactId>
+            <version>1.15.0-SNAPSHOT</version>
+            <scope>compile</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-nar-utils</artifactId>
+            <version>1.15.0-SNAPSHOT</version>
+            <scope>compile</scope>
+        </dependency>
+    </dependencies>
+</project>

Review comment:
   We don't use `ObjectMapper` here, in the processor-tests. I did add a 
dependency on `jackson-databind` in the processor module.








[GitHub] [nifi] exceptionfactory commented on a change in pull request #5412: NIFI-9239: Updated Consume/Publish Kafka processors to support Exactl…

2021-09-30 Thread GitBox


exceptionfactory commented on a change in pull request #5412:
URL: https://github.com/apache/nifi/pull/5412#discussion_r719601928



##
File path: 
nifi-nar-bundles/nifi-stateless-processor-bundle/nifi-stateless-processor/src/main/java/org/apache/nifi/processors/stateless/retrieval/CachingDataflowRetrieval.java
##
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.stateless.retrieval;
+
+import com.fasterxml.jackson.databind.DeserializationFeature;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processors.stateless.ExecuteStateless;
+import org.apache.nifi.registry.flow.VersionedFlowSnapshot;
+
+import java.io.File;
+import java.io.IOException;
+
+public class CachingDataflowRetrieval implements DataflowRetrieval {
+private final String processorId;
+private final ComponentLog logger;
+private final DataflowRetrieval delegate;
+private final ObjectMapper objectMapper;
+
+
+public CachingDataflowRetrieval(final String processorId, final 
ComponentLog logger, final DataflowRetrieval delegate) {
+this.processorId = processorId;
+this.logger = logger;
+this.delegate = delegate;
+
+objectMapper = new ObjectMapper();
+
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, 
false);
+objectMapper.setAnnotationIntrospector(new 
JaxbAnnotationIntrospector(objectMapper.getTypeFactory()));
+}
+
+@Override
+public VersionedFlowSnapshot retrieveDataflowContents(final ProcessContext 
context) throws IOException {

Review comment:
   Thanks for confirming.








[GitHub] [nifi] markap14 commented on a change in pull request #5412: NIFI-9239: Updated Consume/Publish Kafka processors to support Exactl…

2021-09-30 Thread GitBox


markap14 commented on a change in pull request #5412:
URL: https://github.com/apache/nifi/pull/5412#discussion_r719601610



##
File path: 
nifi-nar-bundles/nifi-stateless-processor-bundle/nifi-stateless-processor/src/main/java/org/apache/nifi/processors/stateless/retrieval/FileSystemDataflowRetrieval.java
##
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.stateless.retrieval;
+
+import com.fasterxml.jackson.databind.DeserializationFeature;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processors.stateless.ExecuteStateless;
+import org.apache.nifi.registry.flow.VersionedFlowSnapshot;
+
+import java.io.IOException;
+import java.io.InputStream;
+
+public class FileSystemDataflowRetrieval implements DataflowRetrieval {
+@Override
+public VersionedFlowSnapshot retrieveDataflowContents(final ProcessContext 
context) throws IOException {
+final ObjectMapper objectMapper = new ObjectMapper();
+
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, 
false);

Review comment:
   It's not needed in this case. It's needed for the caching case because 
when a dataflow is retrieved from registry, there is more information available 
in the dataflow snapshot wrapper element, which is not provided when we 
download a flow locally.
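   
   To illustrate why the caching path needs the lenient setting (illustrative POJO, not the actual `VersionedFlowSnapshot` class): with `FAIL_ON_UNKNOWN_PROPERTIES` disabled, extra wrapper fields from a Registry snapshot are ignored instead of failing deserialization.
   
   ```java
   import com.fasterxml.jackson.databind.DeserializationFeature;
   import com.fasterxml.jackson.databind.ObjectMapper;

   public class LenientSnapshotParsing {
       // Illustrative stand-in for VersionedFlowSnapshot: declares only a subset of fields.
       public static class Flow {
           public String name;
       }

       public static Flow parse(final String json) throws Exception {
           final ObjectMapper mapper = new ObjectMapper();
           // Ignore fields (e.g. Registry wrapper metadata) that the POJO does not declare
           mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
           return mapper.readValue(json, Flow.class);
       }

       public static void main(final String[] args) throws Exception {
           final Flow flow = parse("{\"name\":\"demo\",\"snapshotMetadata\":{\"version\":3}}");
           assert "demo".equals(flow.name);
       }
   }
   ```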








[GitHub] [nifi] markap14 commented on a change in pull request #5412: NIFI-9239: Updated Consume/Publish Kafka processors to support Exactl…

2021-09-30 Thread GitBox


markap14 commented on a change in pull request #5412:
URL: https://github.com/apache/nifi/pull/5412#discussion_r719600576



##
File path: 
nifi-nar-bundles/nifi-stateless-processor-bundle/nifi-stateless-processor/src/main/java/org/apache/nifi/processors/stateless/retrieval/DataflowRetrieval.java
##
@@ -0,0 +1,27 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.stateless.retrieval;
+
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.registry.flow.VersionedFlowSnapshot;
+
+import java.io.IOException;
+
+public interface DataflowRetrieval {

Review comment:
   Yeah I'm ok with `DataflowProvider`








[GitHub] [nifi] markap14 commented on a change in pull request #5412: NIFI-9239: Updated Consume/Publish Kafka processors to support Exactl…

2021-09-30 Thread GitBox


markap14 commented on a change in pull request #5412:
URL: https://github.com/apache/nifi/pull/5412#discussion_r719598198



##
File path: 
nifi-nar-bundles/nifi-stateless-processor-bundle/nifi-stateless-processor/src/main/java/org/apache/nifi/processors/stateless/retrieval/CachingDataflowRetrieval.java
##
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.stateless.retrieval;
+
+import com.fasterxml.jackson.databind.DeserializationFeature;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processors.stateless.ExecuteStateless;
+import org.apache.nifi.registry.flow.VersionedFlowSnapshot;
+
+import java.io.File;
+import java.io.IOException;
+
+public class CachingDataflowRetrieval implements DataflowRetrieval {
+private final String processorId;
+private final ComponentLog logger;
+private final DataflowRetrieval delegate;
+private final ObjectMapper objectMapper;
+
+
+public CachingDataflowRetrieval(final String processorId, final 
ComponentLog logger, final DataflowRetrieval delegate) {
+this.processorId = processorId;
+this.logger = logger;
+this.delegate = delegate;
+
+objectMapper = new ObjectMapper();
+
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, 
false);
+objectMapper.setAnnotationIntrospector(new 
JaxbAnnotationIntrospector(objectMapper.getTypeFactory()));
+}
+
+@Override
+public VersionedFlowSnapshot retrieveDataflowContents(final ProcessContext 
context) throws IOException {
+try {
+final VersionedFlowSnapshot retrieved = 
delegate.retrieveDataflowContents(context);
+cacheFlowSnapshot(context, retrieved);
+return retrieved;
+} catch (final Exception e) {
+final File cacheFile = getFlowCacheFile(context, processorId);
+if (cacheFile.exists()) {
+logger.warn("Failed to retrieve Flow Snapshot from Registry. 
Will restore Flow Snapshot from cached version at {}", 
cacheFile.getAbsolutePath(), e);

Review comment:
   Good catch.








[GitHub] [nifi] markap14 commented on a change in pull request #5412: NIFI-9239: Updated Consume/Publish Kafka processors to support Exactl…

2021-09-30 Thread GitBox


markap14 commented on a change in pull request #5412:
URL: https://github.com/apache/nifi/pull/5412#discussion_r719597117



##
File path: 
nifi-nar-bundles/nifi-stateless-processor-bundle/nifi-stateless-processor/src/main/java/org/apache/nifi/processors/stateless/retrieval/CachingDataflowRetrieval.java
##
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.stateless.retrieval;
+
+import com.fasterxml.jackson.databind.DeserializationFeature;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processors.stateless.ExecuteStateless;
+import org.apache.nifi.registry.flow.VersionedFlowSnapshot;
+
+import java.io.File;
+import java.io.IOException;
+
+public class CachingDataflowRetrieval implements DataflowRetrieval {
+private final String processorId;
+private final ComponentLog logger;
+private final DataflowRetrieval delegate;
+private final ObjectMapper objectMapper;
+
+
+public CachingDataflowRetrieval(final String processorId, final 
ComponentLog logger, final DataflowRetrieval delegate) {
+this.processorId = processorId;
+this.logger = logger;
+this.delegate = delegate;
+
+objectMapper = new ObjectMapper();
+
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, 
false);
+objectMapper.setAnnotationIntrospector(new 
JaxbAnnotationIntrospector(objectMapper.getTypeFactory()));
+}
+
+@Override
+public VersionedFlowSnapshot retrieveDataflowContents(final ProcessContext 
context) throws IOException {

Review comment:
   This isn't a concern. It's fetched only in the @OnScheduled, which is 
single-threaded








[jira] [Resolved] (MINIFICPP-1406) Check for swig dependency for docker integration tests

2021-09-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Gyimesi resolved MINIFICPP-1406.
--
Resolution: Fixed

> Check for swig dependency for docker integration tests
> --
>
> Key: MINIFICPP-1406
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1406
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Gábor Gyimesi
>Priority: Trivial
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Swig is a compile time dependency for m2crypto pip package used in docker 
> integration tests. This dependency is not explicit in the code and can be 
> problematic to find for users of the test framework. It needs to have a more 
> explicit error message if the package is missing and should also be 
> documented.





[jira] [Assigned] (MINIFICPP-1653) Add SQL extension to Linux docker builds

2021-09-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Gyimesi reassigned MINIFICPP-1653:


Assignee: Gábor Gyimesi

> Add SQL extension to Linux docker builds
> 
>
> Key: MINIFICPP-1653
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1653
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Gábor Gyimesi
>Assignee: Gábor Gyimesi
>Priority: Trivial
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[GitHub] [nifi] Lehel44 commented on a change in pull request #5356: NIFI-9183: Add a command-line option to save status history

2021-09-30 Thread GitBox


Lehel44 commented on a change in pull request #5356:
URL: https://github.com/apache/nifi/pull/5356#discussion_r719510005



##
File path: nifi-bootstrap/src/main/java/org/apache/nifi/bootstrap/RunNiFi.java
##
@@ -216,6 +223,46 @@ public static void main(String[] args) throws IOException, 
InterruptedException
 dumpFile = null;
 verbose = false;
 }
+} else if (cmd.equalsIgnoreCase("status-history")) {

Review comment:
   - **TC1.:**
   Fixed the outdated error message.
   - **TC2.:**
   Added the log to the cmdLogger as well.
   - **TC3-4-5.:**
   The statusHistoryDays default value was not assigned. I fixed it, thank you. 
   - **TC6.:** 
   The beginning of the main method does the validation based on the number of 
parameters globally, which I think I can't change.
   ```
   if (args.length < 1 || args.length > 3) {
   printUsage();
   return;
   }
   ```








[GitHub] [nifi] exceptionfactory commented on a change in pull request #5412: NIFI-9239: Updated Consume/Publish Kafka processors to support Exactl…

2021-09-30 Thread GitBox


exceptionfactory commented on a change in pull request #5412:
URL: https://github.com/apache/nifi/pull/5412#discussion_r719488902



##
File path: 
nifi-nar-bundles/nifi-stateless-processor-bundle/nifi-stateless-processor/src/main/java/org/apache/nifi/processors/stateless/retrieval/CachingDataflowRetrieval.java
##
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.stateless.retrieval;
+
+import com.fasterxml.jackson.databind.DeserializationFeature;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processors.stateless.ExecuteStateless;
+import org.apache.nifi.registry.flow.VersionedFlowSnapshot;
+
+import java.io.File;
+import java.io.IOException;
+
+public class CachingDataflowRetrieval implements DataflowRetrieval {
+private final String processorId;
+private final ComponentLog logger;
+private final DataflowRetrieval delegate;
+private final ObjectMapper objectMapper;
+
+
+public CachingDataflowRetrieval(final String processorId, final 
ComponentLog logger, final DataflowRetrieval delegate) {
+this.processorId = processorId;
+this.logger = logger;
+this.delegate = delegate;
+
+objectMapper = new ObjectMapper();
+
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, 
false);
+objectMapper.setAnnotationIntrospector(new 
JaxbAnnotationIntrospector(objectMapper.getTypeFactory()));
+}
+
+@Override
+public VersionedFlowSnapshot retrieveDataflowContents(final ProcessContext 
context) throws IOException {
+try {
+final VersionedFlowSnapshot retrieved = 
delegate.retrieveDataflowContents(context);
+cacheFlowSnapshot(context, retrieved);
+return retrieved;
+} catch (final Exception e) {
+final File cacheFile = getFlowCacheFile(context, processorId);
+if (cacheFile.exists()) {
+logger.warn("Failed to retrieve Flow Snapshot from Registry. 
Will restore Flow Snapshot from cached version at {}", 
cacheFile.getAbsolutePath(), e);

Review comment:
   This message implies retrieval from Registry, but this caching class 
does not appear specific to Registry.

[GitHub] [nifi] markap14 commented on pull request #5391: NIFI-9174: Adding AWS SecretsManager ParamValueProvider for Stateless

2021-09-30 Thread GitBox


markap14 commented on pull request #5391:
URL: https://github.com/apache/nifi/pull/5391#issuecomment-931391414


   Actually @gresockj after playing with the AWS Secrets Manager a bit more, 
I'm wondering if we should actually go a slightly different route here. Rather 
than having the value of a secret be a 'plaintext' value, perhaps it makes more 
sense to use the key/value pairs that the UI is tailored to?
   
   So then, instead of treating each secret as a separate parameter, we would 
treat each AWS Secret like a Parameter Context. And each parameter would then 
map to one of those key/value pairs. Then users can just go into AWS Secrets 
Manager, create a new Secret, and enter all of their parameters that they care 
about. Then the provider would be configured with a mapping of ParameterContext 
Name to Secret Name, with a default Secret Name to be used if there is no 
mapping.
   
   For example, if my flow has ContextA and ContextB, I could configure the 
Provider so that ContextA maps to secret SecretA and ContextB maps to SecretB. 
Or I could configure it so that ContextA maps to SecretA and ContextB maps to 
SecretA. Or configure it so that ContextA maps to SecretA and not provide a 
mapping for ContextB, just specifying SecretA as the default secret name.
   
   Thoughts on this approach?
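
The mapping described above (each Parameter Context backed by one AWS Secret, with a default secret for unmapped contexts, and each parameter stored as a key/value pair inside the secret) can be sketched as plain resolution logic. All class and method names below are hypothetical, and the secret's key/value pairs are passed in directly rather than fetched from AWS:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed design: a Parameter Context maps to one AWS Secret,
// and each parameter maps to one key/value pair inside that secret.
public class ContextToSecretMapping {
    private final Map<String, String> contextToSecretName = new HashMap<>();
    private final String defaultSecretName;

    public ContextToSecretMapping(final String defaultSecretName) {
        this.defaultSecretName = defaultSecretName;
    }

    public void mapContext(final String contextName, final String secretName) {
        contextToSecretName.put(contextName, secretName);
    }

    // Falls back to the default secret name when no explicit mapping exists
    public String resolveSecretName(final String contextName) {
        return contextToSecretName.getOrDefault(contextName, defaultSecretName);
    }

    // In a real provider the secret's key/value pairs would come from AWS
    // Secrets Manager; here the caller supplies them for illustration.
    public String getParameterValue(final String contextName, final String parameterName,
                                    final Map<String, Map<String, String>> secrets) {
        final Map<String, String> secret = secrets.get(resolveSecretName(contextName));
        return secret == null ? null : secret.get(parameterName);
    }
}
```

With this shape, ContextA and ContextB can point at the same secret or different secrets, and a context without a mapping silently resolves to the default secret name.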


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] exceptionfactory commented on pull request #5414: NIFI-9248 Use hive-exec:core instead of the "regular" hive-exec dependency

2021-09-30 Thread GitBox


exceptionfactory commented on pull request #5414:
URL: https://github.com/apache/nifi/pull/5414#issuecomment-931390541


   Thanks for the helpful summary @adenes! I have not been able to perform any 
runtime verification yet, but the approach looks good.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on a change in pull request #5391: NIFI-9174: Adding AWS SecretsManager ParamValueProvider for Stateless

2021-09-30 Thread GitBox


markap14 commented on a change in pull request #5391:
URL: https://github.com/apache/nifi/pull/5391#discussion_r719465065



##
File path: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-parameter-value-providers/src/main/java/org/apache/nifi/stateless/parameter/aws/SecretsManagerParameterValueProvider.java
##
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.stateless.parameter.aws;
+
+import com.amazonaws.auth.AWSStaticCredentialsProvider;
+import com.amazonaws.auth.BasicAWSCredentials;
+import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
+import com.amazonaws.services.secretsmanager.AWSSecretsManager;
+import com.amazonaws.services.secretsmanager.AWSSecretsManagerClientBuilder;
+import com.amazonaws.services.secretsmanager.model.GetSecretValueRequest;
+import com.amazonaws.services.secretsmanager.model.GetSecretValueResult;
+import com.amazonaws.services.secretsmanager.model.ListSecretsRequest;
+import com.amazonaws.services.secretsmanager.model.ListSecretsResult;
+import com.amazonaws.services.secretsmanager.model.SecretListEntry;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stateless.parameter.AbstractParameterValueProvider;
+import org.apache.nifi.stateless.parameter.ParameterValueProvider;
+import org.apache.nifi.stateless.parameter.ParameterValueProviderInitializationContext;
+
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Paths;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Properties;
+import java.util.Set;
+
+/**
+ * Reads secrets from AWS Secrets Manager to provide parameter values. Secrets must be created with a command similar to the following AWS CLI command:
+ * aws secretsmanager create-secret --name "[ParamContextName]/[ParamName]" --secret-string '[ParamValue]'
+ *
+ * A standard configuration for this provider would be:
+ *
+ *  nifi.stateless.parameter.provider.AWSSecretsManager.name=AWS Secrets Manager Value Provider
+ *  nifi.stateless.parameter.provider.AWSSecretsManager.type=org.apache.nifi.stateless.parameter.aws.SecretsManagerParameterValueProvider
+ *  nifi.stateless.parameter.provider.AWSSecretsManager.properties.aws-credentials-file=./conf/bootstrap-aws.conf
+ */
+public class SecretsManagerParameterValueProvider extends AbstractParameterValueProvider implements ParameterValueProvider {
+    private static final String QUALIFIED_SECRET_FORMAT = "%s/%s";
+    private static final String ACCESS_KEY_PROPS_NAME = "aws.access.key.id";
+    private static final String SECRET_KEY_PROPS_NAME = "aws.secret.access.key";
+    private static final String REGION_KEY_PROPS_NAME = "aws.region";
+
+    public static final PropertyDescriptor AWS_CREDENTIALS_FILE = new PropertyDescriptor.Builder()
+            .displayName("AWS Credentials File")
+            .name("aws-credentials-file")
+            .required(false)
+            .defaultValue("./conf/bootstrap-aws.conf")
+            .description("Location of the bootstrap-aws.conf file that configures the AWS credentials.  If not provided, the default AWS credentials will be used.")
+            .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
+            .build();
+
+    private final Set<String> supportedParameterNames = new HashSet<>();
+
+    private List<PropertyDescriptor> descriptors;
+
+    private AWSSecretsManager secretsManager;
+
+    @Override
+    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+        return descriptors;
+    }
+
+    @Override
+    protected void init(final ParameterValueProviderInitializationContext context) {
+        super.init(context);
+
+        this.descriptors = Collections.singletonList(AWS_CREDENTIALS_FILE);
+
+        final String awsCredentialsFilename = context.getProperty(AWS_CREDENTIALS_FILE).getValue();
+        try {
+            this.secretsManager = this.configureClient(awsCredentialsFilename);
+        } catch (final IOException e) {
+            throw new IllegalStateException("Could not configure AWS Secrets Manager Client", e);
+        }
+
+   
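
The `[ParamContextName]/[ParamName]` naming convention from the javadoc above can be sketched as a small helper. `QUALIFIED_SECRET_FORMAT` matches the constant in the quoted class, while the helper name and the no-context fallback are illustrative assumptions:

```java
// Sketch of the secret naming convention described in the quoted javadoc:
// one secret named "[ParamContextName]/[ParamName]" holds one parameter value.
public class QualifiedSecretNames {
    private static final String QUALIFIED_SECRET_FORMAT = "%s/%s";

    // Prefer the context-qualified name; fall back to the bare parameter
    // name when the parameter is not scoped to a context (an assumption
    // for illustration, not necessarily the provider's actual behavior).
    public static String toSecretName(final String contextName, final String parameterName) {
        if (contextName == null || contextName.isEmpty()) {
            return parameterName;
        }
        return String.format(QUALIFIED_SECRET_FORMAT, contextName, parameterName);
    }
}
```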

[GitHub] [nifi] gresockj commented on a change in pull request #5391: NIFI-9174: Adding AWS SecretsManager ParamValueProvider for Stateless

2021-09-30 Thread GitBox


gresockj commented on a change in pull request #5391:
URL: https://github.com/apache/nifi/pull/5391#discussion_r719455044



##
File path: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-parameter-value-providers/src/main/java/org/apache/nifi/stateless/parameter/aws/SecretsManagerParameterValueProvider.java
##

[GitHub] [nifi] markap14 commented on a change in pull request #5391: NIFI-9174: Adding AWS SecretsManager ParamValueProvider for Stateless

2021-09-30 Thread GitBox


markap14 commented on a change in pull request #5391:
URL: https://github.com/apache/nifi/pull/5391#discussion_r719453065



##
File path: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-parameter-value-providers/src/main/java/org/apache/nifi/stateless/parameter/aws/SecretsManagerParameterValueProvider.java
##

[GitHub] [nifi-minifi-cpp] lordgamez opened a new pull request #1186: MINIFICPP-1655 Upgrade gsl-lite to version 0.39.0

2021-09-30 Thread GitBox


lordgamez opened a new pull request #1186:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1186


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on a change in pull request #5391: NIFI-9174: Adding AWS SecretsManager ParamValueProvider for Stateless

2021-09-30 Thread GitBox


markap14 commented on a change in pull request #5391:
URL: https://github.com/apache/nifi/pull/5391#discussion_r719404692



##
File path: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-parameter-value-providers/src/main/java/org/apache/nifi/stateless/parameter/aws/SecretsManagerParameterValueProvider.java
##

[jira] [Resolved] (MINIFICPP-1125) Deprecate sqlite extension

2021-09-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Gyimesi resolved MINIFICPP-1125.
--
Resolution: Fixed

> Deprecate sqlite extension
> --
>
> Key: MINIFICPP-1125
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1125
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Task
>Reporter: Dániel Bakai
>Priority: Major
> Fix For: 1.0.0
>
>
> The new ODBC SQL extension will supersede the SQLite one.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] gresockj commented on a change in pull request #5391: NIFI-9174: Adding AWS SecretsManager ParamValueProvider for Stateless

2021-09-30 Thread GitBox


gresockj commented on a change in pull request #5391:
URL: https://github.com/apache/nifi/pull/5391#discussion_r718961733



##
File path: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-parameter-value-providers/src/main/java/org/apache/nifi/stateless/parameter/aws/SecretsManagerParameterValueProvider.java
##

[GitHub] [nifi] gresockj commented on a change in pull request #5369: NIFI-9003: Adding ParameterProviders to NiFi

2021-09-30 Thread GitBox


gresockj commented on a change in pull request #5369:
URL: https://github.com/apache/nifi/pull/5369#discussion_r719389091



##
File path: 
nifi-api/src/main/java/org/apache/nifi/parameter/SensitiveParameterProvider.java
##
@@ -0,0 +1,23 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.parameter;
+
+/**
+ * A base interface for all sensitive ParameterProviders
+ */
+public interface SensitiveParameterProvider extends ParameterProvider {

Review comment:
   I believe I was initially going for strong type enforcement of parameter 
provider sensitivity here, but on reconsideration I agree.  Further, I'd like 
to go in the direction of making `ParameterProviderNode` responsible for 
enforcing/setting sensitivity based on how a ParameterProvider is referenced 
rather than exposing a method like this to the `ParameterProvider` interface.
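
   The direction described here, where a framework-side node wrapper rather than a marker interface tracks sensitivity, might look roughly like the following; every name in this sketch is hypothetical:

```java
// Sketch: sensitivity is recorded by the framework's node wrapper based on
// how the provider is referenced, so providers need no marker interface.
public class ParameterProviderNodeSketch {
    public interface ParameterProvider {
        String getParameterValue(String contextName, String parameterName);
    }

    // The node, not the provider, knows whether values it serves are sensitive.
    public static class ParameterProviderNode {
        private final ParameterProvider provider;
        private final boolean sensitive;

        public ParameterProviderNode(final ParameterProvider provider, final boolean sensitive) {
            this.provider = provider;
            this.sensitive = sensitive;
        }

        public boolean isSensitive() {
            return sensitive;
        }

        public String fetch(final String contextName, final String parameterName) {
            return provider.getParameterValue(contextName, parameterName);
        }
    }
}
```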




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on a change in pull request #5391: NIFI-9174: Adding AWS SecretsManager ParamValueProvider for Stateless

2021-09-30 Thread GitBox


markap14 commented on a change in pull request #5391:
URL: https://github.com/apache/nifi/pull/5391#discussion_r719387912



##
File path: nifi-stateless/nifi-stateless-assembly/README.md
##
@@ -515,3 +515,31 @@ nifi.stateless.parameter.provider.Vault.name=HashiCorp Vault Provider
 nifi.stateless.parameter.provider.Vault.type=org.apache.nifi.stateless.parameter.HashiCorpVaultParameterValueProvider
 nifi.stateless.parameter.provider.Vault.properties.vault-configuration-file=./conf/bootstrap-hashicorp-vault.conf
 ```
+
+**AWS SecretsManagerParameterValueProvider**
+
+This provider reads parameter values from AWS SecretsManager. The AWS credentials can be configured via the `./conf/bootstrap-aws.conf` file, which comes with NiFi.

Review comment:
   Might be worth mentioning here that whatever credentials are supplied 
must have permissions both to List Secrets and to retrieve the values of the 
secrets.
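
   A minimal sketch of how the `bootstrap-aws.conf` properties could drive the static-versus-default credentials decision; the property names match the constants in the quoted provider class, but the helper itself is hypothetical and omits the actual AWS client construction:

```java
import java.io.IOException;
import java.io.Reader;
import java.util.Properties;

// Sketch: read bootstrap-aws.conf as a standard Java properties file and
// decide whether static credentials are fully specified. When they are not,
// the provider falls back to the default AWS credentials provider chain.
public class AwsCredentialsConfig {
    static final String ACCESS_KEY = "aws.access.key.id";
    static final String SECRET_KEY = "aws.secret.access.key";
    static final String REGION = "aws.region";

    // Returns true when both static credential keys are present and non-empty
    public static boolean hasStaticCredentials(final Reader configSource) {
        final Properties props = new Properties();
        try {
            props.load(configSource);
        } catch (final IOException e) {
            return false;
        }
        final String accessKey = props.getProperty(ACCESS_KEY);
        final String secretKey = props.getProperty(SECRET_KEY);
        return accessKey != null && !accessKey.isEmpty()
                && secretKey != null && !secretKey.isEmpty();
    }
}
```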




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] gresockj commented on pull request #5369: NIFI-9003: Adding ParameterProviders to NiFi

2021-09-30 Thread GitBox


gresockj commented on pull request #5369:
URL: https://github.com/apache/nifi/pull/5369#issuecomment-931306105


   > @gresockj this is great! This is a huge lift, to add a completely new type 
of extension point! I've not done any testing at all yet. But I've spent a good 
bit of time reviewing all of the code. Looks good for the most part - and you 
found _A LOT_ of minor typos in the javadocs - thanks for fixing those! :)
   > 
   > Wanted to go ahead and submit my review, before jumping in to try it out, 
just because I feel like some of my thoughts may end up warranting some 
discussions. Especially around the notion of SensitiveParameterProvider vs 
NonSensitiveParameterProvider and those really becoming a single interface. I 
think this would yield a cleaner design. And we need to be really careful that 
we nail this down really well the first time because once we introduce 
something into `nifi-api` we can't really break backward compatibility until a 
2.0 release comes, so let's take the time to make sure we're both in agreement 
on what makes the most sense there.
   > 
   > Thanks again!
   
   Thanks for a thorough review, @markap14, this is very helpful!  I believe 
I've addressed everything in the latest commit, except for collapsing 
Sensitive/NonSensitive ParameterProviders into one interface.  I agree with 
your assessment, and will work on this in a separate commit since it will be a 
non-trivial change.






[GitHub] [nifi] gresockj commented on a change in pull request #5369: NIFI-9003: Adding ParameterProviders to NiFi

2021-09-30 Thread GitBox


gresockj commented on a change in pull request #5369:
URL: https://github.com/apache/nifi/pull/5369#discussion_r719372458



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/flow/AbstractFlowManager.java
##
@@ -405,6 +421,66 @@ public void onReportingTaskAdded(final ReportingTaskNode 
taskNode) {
 allReportingTasks.put(taskNode.getIdentifier(), taskNode);
 }
 
+@Override
+public ParameterProviderNode getParameterProvider(final String id) {
+if (id == null) {

Review comment:
   Yes, `ConcurrentHashMap` throws NPE if the key is null.








[jira] [Created] (MINIFICPP-1655) Upgrade gsl-lite library to version 0.39.0

2021-09-30 Thread Jira
Gábor Gyimesi created MINIFICPP-1655:


 Summary: Upgrade gsl-lite library to version 0.39.0
 Key: MINIFICPP-1655
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1655
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Gábor Gyimesi
Assignee: Gábor Gyimesi








[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1178: MINIFICPP-1643 Add Managed Identity support for Azure processors

2021-09-30 Thread GitBox


lordgamez commented on a change in pull request #1178:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1178#discussion_r719358796



##
File path: extensions/azure/processors/PutAzureBlobStorage.cpp
##
@@ -179,53 +232,43 @@ void PutAzureBlobStorage::onTrigger(const 
std::shared_ptr
 return;
   }
 
-  auto connection_string = getConnectionString(context, flow_file);
-  if (connection_string.empty()) {
-logger_->log_error("Connection string is empty!");
-session->transfer(flow_file, Failure);
-return;
-  }
-
-  std::string container_name;
-  if (!context->getProperty(ContainerName, container_name, flow_file) || 
container_name.empty()) {
-logger_->log_error("Container Name is invalid or empty!");
-session->transfer(flow_file, Failure);
-return;
-  }
-
-  std::string blob_name;
-  if (!context->getProperty(Blob, blob_name, flow_file) || blob_name.empty()) {
-logger_->log_error("Blob name is invalid or empty!");
+  auto params = buildAzureBlobStorageParameters(context, flow_file);
+  if (!params) {
 session->transfer(flow_file, Failure);
 return;
   }
 
   std::optional upload_result;
   {
 // TODO(lordgamez): This can be removed after maximum allowed threads are 
implemented. See https://issues.apache.org/jira/browse/MINIFICPP-1566
+// When used in multithreaded environment make sure to use the 
azure_storage_mutex_ to lock the wrapper so the
+// client is not reset with different configuration while another thread 
is using it.
 std::lock_guard lock(azure_storage_mutex_);
-createAzureStorageClient(connection_string, container_name);
 if (create_container_) {

Review comment:
   That's true, in theory anyone could call it, but we usually expect onSchedule to run only once, before any onTrigger calls, and we do not handle this possible concurrency in any other processor either. Also, the feature that will remove the mutexes from onTrigger and instead mark processors as safe (or not) to run in parallel is already in progress, so I would not add additional mutex usage now.












[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1178: MINIFICPP-1643 Add Managed Identity support for Azure processors

2021-09-30 Thread GitBox


martinzink commented on a change in pull request #1178:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1178#discussion_r719347416



##
File path: extensions/azure/processors/PutAzureBlobStorage.cpp
##
@@ -179,53 +232,43 @@ void PutAzureBlobStorage::onTrigger(const 
std::shared_ptr
 return;
   }
 
-  auto connection_string = getConnectionString(context, flow_file);
-  if (connection_string.empty()) {
-logger_->log_error("Connection string is empty!");
-session->transfer(flow_file, Failure);
-return;
-  }
-
-  std::string container_name;
-  if (!context->getProperty(ContainerName, container_name, flow_file) || 
container_name.empty()) {
-logger_->log_error("Container Name is invalid or empty!");
-session->transfer(flow_file, Failure);
-return;
-  }
-
-  std::string blob_name;
-  if (!context->getProperty(Blob, blob_name, flow_file) || blob_name.empty()) {
-logger_->log_error("Blob name is invalid or empty!");
+  auto params = buildAzureBlobStorageParameters(context, flow_file);
+  if (!params) {
 session->transfer(flow_file, Failure);
 return;
   }
 
   std::optional upload_result;
   {
 // TODO(lordgamez): This can be removed after maximum allowed threads are 
implemented. See https://issues.apache.org/jira/browse/MINIFICPP-1566
+// When used in multithreaded environment make sure to use the 
azure_storage_mutex_ to lock the wrapper so the
+// client is not reset with different configuration while another thread 
is using it.
 std::lock_guard lock(azure_storage_mutex_);
-createAzureStorageClient(connection_string, container_name);
 if (create_container_) {

Review comment:
   Yeah, I see your point. However, onSchedule is part of the public interface (so in theory anyone could call it), and the overhead of the mutex would be negligible (onSchedule is rarely called), so I would still prefer it to be protected as well, purely on object-oriented principles. But I am not hell-bent on that.








[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1178: MINIFICPP-1643 Add Managed Identity support for Azure processors

2021-09-30 Thread GitBox


lordgamez commented on a change in pull request #1178:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1178#discussion_r719334534



##
File path: extensions/azure/processors/PutAzureDataLakeStorage.cpp
##
@@ -72,19 +72,26 @@ void PutAzureDataLakeStorage::initialize() {
 }
 
 void PutAzureDataLakeStorage::onSchedule(const 
std::shared_ptr& context, const 
std::shared_ptr& /*sessionFactory*/) {
-  connection_string_ = getConnectionStringFromControllerService(context);
-  if (connection_string_.empty()) {
+  auto credentials = getCredentialsFromControllerService(context);
+  if (!credentials) {
 throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Azure Storage Credentials 
Service property missing or invalid");
   }
 
+  if ((!credentials->getUseManagedIdentityCredentials() && 
credentials->buildConnectionString().empty()) ||
+  (credentials->getUseManagedIdentityCredentials() && 
credentials->getStorageAccountName().empty())) {
+throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Azure Storage Credentials 
Service properties are not set or invalid");
+  }
+
+  credentials_ = *credentials;

Review comment:
   Same as before.








[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1178: MINIFICPP-1643 Add Managed Identity support for Azure processors

2021-09-30 Thread GitBox


lordgamez commented on a change in pull request #1178:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1178#discussion_r719334026



##
File path: extensions/azure/processors/PutAzureBlobStorage.cpp
##
@@ -179,53 +232,43 @@ void PutAzureBlobStorage::onTrigger(const 
std::shared_ptr
 return;
   }
 
-  auto connection_string = getConnectionString(context, flow_file);
-  if (connection_string.empty()) {
-logger_->log_error("Connection string is empty!");
-session->transfer(flow_file, Failure);
-return;
-  }
-
-  std::string container_name;
-  if (!context->getProperty(ContainerName, container_name, flow_file) || 
container_name.empty()) {
-logger_->log_error("Container Name is invalid or empty!");
-session->transfer(flow_file, Failure);
-return;
-  }
-
-  std::string blob_name;
-  if (!context->getProperty(Blob, blob_name, flow_file) || blob_name.empty()) {
-logger_->log_error("Blob name is invalid or empty!");
+  auto params = buildAzureBlobStorageParameters(context, flow_file);
+  if (!params) {
 session->transfer(flow_file, Failure);
 return;
   }
 
   std::optional upload_result;
   {
 // TODO(lordgamez): This can be removed after maximum allowed threads are 
implemented. See https://issues.apache.org/jira/browse/MINIFICPP-1566
+// When used in multithreaded environment make sure to use the 
azure_storage_mutex_ to lock the wrapper so the
+// client is not reset with different configuration while another thread 
is using it.
 std::lock_guard lock(azure_storage_mutex_);
-createAzureStorageClient(connection_string, container_name);
 if (create_container_) {

Review comment:
   It should not be a problem as onSchedule is only run once and the 
onTrigger calls that could be running in parallel may only read that variable. 
The mutex is used for protecting the Azure client when multiple onTrigger calls 
could reset the client with different parameters while running in parallel.
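   As a rough illustration of the locking convention described here (all names below are hypothetical, not the actual MiNiFi classes): each onTrigger call resets a shared client with its own parameters and then uses it, so the reset and the use must happen under the same lock, otherwise a parallel call could swap the configuration mid-upload.
   
   ```cpp
   #include <cassert>
   #include <mutex>
   #include <string>
   #include <thread>
   #include <vector>
   
   // Hypothetical stand-in for the Azure storage client wrapper.
   class FakeBlobClient {
    public:
     void reset(const std::string& connection_string) { connection_string_ = connection_string; }
     std::string upload() const { return "uploaded via " + connection_string_; }
    private:
     std::string connection_string_;
   };
   
   // Hypothetical processor: onTrigger may run on several threads at once.
   class PutBlobProcessor {
    public:
     std::string onTrigger(const std::string& connection_string) {
       // Lock for the whole reset-then-use sequence; locking only the reset
       // would let another thread reconfigure the client mid-upload.
       std::lock_guard<std::mutex> lock(mutex_);
       client_.reset(connection_string);
       return client_.upload();
     }
    private:
     std::mutex mutex_;
     FakeBlobClient client_;
   };
   
   int main() {
     PutBlobProcessor processor;
     std::vector<std::thread> threads;
     std::vector<std::string> results(4);
     for (int i = 0; i < 4; ++i) {
       threads.emplace_back([&processor, &results, i] {
         results[i] = processor.onTrigger("conn-" + std::to_string(i));
       });
     }
     for (auto& t : threads) t.join();
     // Each call sees exactly the configuration it set itself, never a torn mix.
     for (int i = 0; i < 4; ++i) assert(results[i] == "uploaded via conn-" + std::to_string(i));
     return 0;
   }
   ```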








[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1178: MINIFICPP-1643 Add Managed Identity support for Azure processors

2021-09-30 Thread GitBox


martinzink commented on a change in pull request #1178:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1178#discussion_r719326342



##
File path: extensions/azure/processors/PutAzureDataLakeStorage.cpp
##
@@ -72,19 +72,26 @@ void PutAzureDataLakeStorage::initialize() {
 }
 
 void PutAzureDataLakeStorage::onSchedule(const 
std::shared_ptr& context, const 
std::shared_ptr& /*sessionFactory*/) {
-  connection_string_ = getConnectionStringFromControllerService(context);
-  if (connection_string_.empty()) {
+  auto credentials = getCredentialsFromControllerService(context);
+  if (!credentials) {
 throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Azure Storage Credentials 
Service property missing or invalid");
   }
 
+  if ((!credentials->getUseManagedIdentityCredentials() && 
credentials->buildConnectionString().empty()) ||
+  (credentials->getUseManagedIdentityCredentials() && 
credentials->getStorageAccountName().empty())) {
+throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Azure Storage Credentials 
Service properties are not set or invalid");
+  }
+
+  credentials_ = *credentials;

Review comment:
   This onSchedule also sets members but isn't protected by the mutex.

##
File path: extensions/azure/processors/PutAzureBlobStorage.cpp
##
@@ -179,53 +232,43 @@ void PutAzureBlobStorage::onTrigger(const 
std::shared_ptr
 return;
   }
 
-  auto connection_string = getConnectionString(context, flow_file);
-  if (connection_string.empty()) {
-logger_->log_error("Connection string is empty!");
-session->transfer(flow_file, Failure);
-return;
-  }
-
-  std::string container_name;
-  if (!context->getProperty(ContainerName, container_name, flow_file) || 
container_name.empty()) {
-logger_->log_error("Container Name is invalid or empty!");
-session->transfer(flow_file, Failure);
-return;
-  }
-
-  std::string blob_name;
-  if (!context->getProperty(Blob, blob_name, flow_file) || blob_name.empty()) {
-logger_->log_error("Blob name is invalid or empty!");
+  auto params = buildAzureBlobStorageParameters(context, flow_file);
+  if (!params) {
 session->transfer(flow_file, Failure);
 return;
   }
 
   std::optional upload_result;
   {
 // TODO(lordgamez): This can be removed after maximum allowed threads are 
implemented. See https://issues.apache.org/jira/browse/MINIFICPP-1566
+// When used in multithreaded environment make sure to use the 
azure_storage_mutex_ to lock the wrapper so the
+// client is not reset with different configuration while another thread 
is using it.
 std::lock_guard lock(azure_storage_mutex_);
-createAzureStorageClient(connection_string, container_name);
 if (create_container_) {

Review comment:
   onSchedule sets this create_container_ member, but onSchedule is not protected by the azure_storage_mutex_. Could that cause unexpected behaviour in a multithreaded environment?








[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #1185: MINIFICPP-1654 - Framework for attribute information

2021-09-30 Thread GitBox


adamdebreceni opened a new pull request #1185:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1185


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   






[jira] [Updated] (NIFI-8760) VolatileContentRepository fails to retrieve content from claims with several processors

2021-09-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthieu RÉ updated NIFI-8760:
--
   Attachment: 
0001-fix-2-set-VolatileContentRepository-as-non-supportiv.patch
Fix Version/s: 1.15.0
   Status: Patch Available  (was: Open)

Hello,

As said in the last comment, I propose one solution (the easiest) to fix the bug, but maybe a bigger discussion can be started in order to improve the management of the VolatileContentRepository.

For now this fixes the issue; I applied it successfully on each branch that was failing before.

 

> VolatileContentRepository fails to retrieve content from claims with several 
> processors
> ---
>
> Key: NIFI-8760
> URL: https://issues.apache.org/jira/browse/NIFI-8760
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.13.2, 1.13.1, 1.14.0, 1.15.0
>Reporter: Matthieu RÉ
>Priority: Major
>  Labels: content-repository, volatile
> Fix For: 1.15.0
>
> Attachments: 
> 0001-fix-2-set-VolatileContentRepository-as-non-supportiv.patch, 
> 0001-fix-2-set-VolatileContentRepository-as-non-supportiv.patch, flow.xml.gz, 
> nifi.properties
>
>
> For several processors such as MergeRecord, QueryRecord, SplitJson, the use 
> of VolatileContentRepository implementation infers errors while retrieving 
> Flowfiles from claims. The following logs are generated using NiFi 1.13.1 
> from Docker and the flow.xml.gz and nifi.properties file attached.
> MergeRecord (with JsonTreeReader, JsonRecordSetWriter with default 
> configuration):
> {{2021-07-06 10:15:09,170 ERROR [Timer-Driven Process Thread-1] 
> o.a.nifi.processors.standard.MergeRecord 
> MergeRecord[id=7b425cff-017a-1000-6a20-58c4e064df3d] Failed to bin 
> StandardFlowFileRecord[uuid=3e894a96-883a-4ac2-8121-b8200964cf20,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, 
> length=5655],offset=0,name=b2c7cf61-b421-477d-902e-daeb2ed58f0d,size=5655] 
> due to org.apache.nifi.controller.repository.ContentNotFoundException: Could 
> not find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, length=-1]: 
> org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
> find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, length=-1]}}
>  {{org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
> find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, length=-1]}}
>  {{at 
> org.apache.nifi.controller.repository.VolatileContentRepository.getContent(VolatileContentRepository.java:445)}}
>  {{at 
> org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:468)}}
>  {{at 
> org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:473)}}
>  {{at 
> org.apache.nifi.controller.repository.StandardProcessSession.getInputStream(StandardProcessSession.java:2302)}}
>  {{at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2409)}}
>  {{at 
> org.apache.nifi.processors.standard.MergeRecord.binFlowFile(MergeRecord.java:383)}}
>  {{at 
> org.apache.nifi.processors.standard.MergeRecord.onTrigger(MergeRecord.java:346)}}
>  {{at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)}}
>  {{at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)}}
>  {{at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)}}
>  {{at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)}}
>  {{at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)}}
>  {{at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)}}
>  {{at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)}}
>  {{at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)}}
>  {{at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)}}
>  {{at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)}}
>  {{at java.lang.Thread.run(Thread.java:748)}}
> QueryRecord:
> {{2021-07-06 10:15:09,174 ERROR [Timer-Driven Process Thread-4] 
> o.a.nifi.processors.standard.QueryRecord 
> QueryRecord[id=673fe9f6-017a-1000-8041-dfde9d02d976] Failed to determine 

[jira] [Updated] (NIFI-8760) VolatileContentRepository fails to retrieve content from claims with several processors

2021-09-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthieu RÉ updated NIFI-8760:
--
Flags: Patch
Affects Version/s: 1.15.0
   1.14.0

> VolatileContentRepository fails to retrieve content from claims with several 
> processors
> ---
>
> Key: NIFI-8760
> URL: https://issues.apache.org/jira/browse/NIFI-8760
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.14.0, 1.13.1, 1.13.2, 1.15.0
>Reporter: Matthieu RÉ
>Priority: Major
>  Labels: content-repository, volatile
> Attachments: 
> 0001-fix-2-set-VolatileContentRepository-as-non-supportiv.patch, flow.xml.gz, 
> nifi.properties
>
>
> For several processors such as MergeRecord, QueryRecord, SplitJson, the use 
> of VolatileContentRepository implementation infers errors while retrieving 
> Flowfiles from claims. The following logs are generated using NiFi 1.13.1 
> from Docker and the flow.xml.gz and nifi.properties file attached.
> MergeRecord (with JsonTreeReader, JsonRecordSetWriter with default 
> configuration):
> {{2021-07-06 10:15:09,170 ERROR [Timer-Driven Process Thread-1] 
> o.a.nifi.processors.standard.MergeRecord 
> MergeRecord[id=7b425cff-017a-1000-6a20-58c4e064df3d] Failed to bin 
> StandardFlowFileRecord[uuid=3e894a96-883a-4ac2-8121-b8200964cf20,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, 
> length=5655],offset=0,name=b2c7cf61-b421-477d-902e-daeb2ed58f0d,size=5655] 
> due to org.apache.nifi.controller.repository.ContentNotFoundException: Could 
> not find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, length=-1]: 
> org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
> find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, length=-1]}}
>  {{org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
> find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, length=-1]}}
>  {{at 
> org.apache.nifi.controller.repository.VolatileContentRepository.getContent(VolatileContentRepository.java:445)}}
>  {{at 
> org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:468)}}
>  {{at 
> org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:473)}}
>  {{at 
> org.apache.nifi.controller.repository.StandardProcessSession.getInputStream(StandardProcessSession.java:2302)}}
>  {{at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2409)}}
>  {{at 
> org.apache.nifi.processors.standard.MergeRecord.binFlowFile(MergeRecord.java:383)}}
>  {{at 
> org.apache.nifi.processors.standard.MergeRecord.onTrigger(MergeRecord.java:346)}}
>  {{at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)}}
>  {{at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)}}
>  {{at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)}}
>  {{at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)}}
>  {{at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)}}
>  {{at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)}}
>  {{at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)}}
>  {{at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)}}
>  {{at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)}}
>  {{at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)}}
>  {{at java.lang.Thread.run(Thread.java:748)}}
> QueryRecord:
> {{2021-07-06 10:15:09,174 ERROR [Timer-Driven Process Thread-4] 
> o.a.nifi.processors.standard.QueryRecord 
> QueryRecord[id=673fe9f6-017a-1000-8041-dfde9d02d976] Failed to determine 
> Record Schema from 
> StandardFlowFileRecord[uuid=090e3058-67e6-4436-bea9-d511132848e3,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=2, container=in-memory, 
> section=section], offset=0, 
> length=5655],offset=0,name=090e3058-67e6-4436-bea9-d511132848e3,size=5655]; 
> routing to failure: 
> org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
> find content for StandardContentClaim 
> 

[jira] [Commented] (NIFI-8760) VolatileContentRepository fails to retrieve content from claims with several processors

2021-09-30 Thread Jira


[ 
https://issues.apache.org/jira/browse/NIFI-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17422674#comment-17422674
 ] 

Matthieu RÉ commented on NIFI-8760:
---

Today I have two simple fixes that are equivalent in terms of performance (tested on GenerateFF plus MergeRecord, SplitJson, and QueryRecord):


 * First is to follow [the idea of the first implementation|https://github.com/apache/nifi/blob/528fce2407d092d4ced1a58fcc14d0bc6e660b89/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/VolatileContentRepository.java#L473], which was for a ResourceClaim to call the corresponding ContentClaim at offset 0. It doesn't work when the searched ContentClaim has a length, because ContentClaim implements an "equalsTo" that takes the length into account, and the constructor called by read(ResourceClaim) initializes it to -1. So a fix could be to search the map for the ContentClaim matching the ResourceClaim at offset 0.
As I said, even if this implementation seems poor, since it does not benefit from the structure of the Map of Comparable keys when searching for a ContentClaim, its performance seems equivalent to the second solution's.
 * Second is to simply consider the VolatileContentRepository as incompatible with read(ResourceClaim) and to only allow read(ContentClaim), as is the case for the EncryptedFileSystemRepository.

Since the structure of the data storage in this implementation is a Map, I lack the experience to answer these questions:
 * Does it make sense to try to use the ResourceClaim to call ContentBlock(s) in the case of a VolatileContentRepository?
 * If yes, could there be a benefit to calling ContentBlock from all the offsets matching the ResourceClaim, instead of only offset 0 as originally intended?
 * Otherwise, the second fix is probably the right one.

Please don't hesitate to correct me if I'm wrong or misunderstood something.

For now, I will link the second fix as a Git patch here: 
[^0001-fix-2-set-VolatileContentRepository-as-non-supportiv.patch], to help 
anyone in need of a fix.

> VolatileContentRepository fails to retrieve content from claims with several 
> processors
> ---
>
> Key: NIFI-8760
> URL: https://issues.apache.org/jira/browse/NIFI-8760
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.13.1, 1.13.2
>Reporter: Matthieu RÉ
>Priority: Major
>  Labels: content-repository, volatile
> Attachments: 
> 0001-fix-2-set-VolatileContentRepository-as-non-supportiv.patch, flow.xml.gz, 
> nifi.properties
>
>
> For several processors such as MergeRecord, QueryRecord, SplitJson, the use 
> of VolatileContentRepository implementation infers errors while retrieving 
> Flowfiles from claims. The following logs are generated using NiFi 1.13.1 
> from Docker and the flow.xml.gz and nifi.properties file attached.
> MergeRecord (with JsonTreeReader, JsonRecordSetWriter with default 
> configuration):
> {{2021-07-06 10:15:09,170 ERROR [Timer-Driven Process Thread-1] 
> o.a.nifi.processors.standard.MergeRecord 
> MergeRecord[id=7b425cff-017a-1000-6a20-58c4e064df3d] Failed to bin 
> StandardFlowFileRecord[uuid=3e894a96-883a-4ac2-8121-b8200964cf20,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, 
> length=5655],offset=0,name=b2c7cf61-b421-477d-902e-daeb2ed58f0d,size=5655] 
> due to org.apache.nifi.controller.repository.ContentNotFoundException: Could 
> not find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, length=-1]: 
> org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
> find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, length=-1]}}
>  {{org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
> find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, length=-1]}}
>  {{at 
> org.apache.nifi.controller.repository.VolatileContentRepository.getContent(VolatileContentRepository.java:445)}}
>  {{at 
> org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:468)}}
>  {{at 
> org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:473)}}
>  {{at 
> org.apache.nifi.controller.repository.StandardProcessSession.getInputStream(StandardProcessSession.java:2302)}}
>  {{at 
> 

[jira] [Updated] (NIFI-8760) VolatileContentRepository fails to retrieve content from claims with several processors

2021-09-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthieu RÉ updated NIFI-8760:
--
Attachment: 0001-fix-2-set-VolatileContentRepository-as-non-supportiv.patch

> VolatileContentRepository fails to retrieve content from claims with several 
> processors
> ---
>
> Key: NIFI-8760
> URL: https://issues.apache.org/jira/browse/NIFI-8760
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.13.1, 1.13.2
>Reporter: Matthieu RÉ
>Priority: Major
>  Labels: content-repository, volatile
> Attachments: 
> 0001-fix-2-set-VolatileContentRepository-as-non-supportiv.patch, flow.xml.gz, 
> nifi.properties
>
>

[GitHub] [nifi] gresockj commented on a change in pull request #5369: NIFI-9003: Adding ParameterProviders to NiFi

2021-09-30 Thread GitBox


gresockj commented on a change in pull request #5369:
URL: https://github.com/apache/nifi/pull/5369#discussion_r719264527



##
File path: 
nifi-stateless/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/engine/StatelessFlowManager.java
##
@@ -296,6 +298,53 @@ public ReportingTaskNode createReportingTask(final String 
type, final String id,
 return taskNode;
 }
 
+@Override
+public ParameterProviderNode createParameterProvider(final String type, final String id, final BundleCoordinate bundleCoordinate, final Set<URL> additionalUrls, final boolean firstTimeAdded,

Review comment:
   I agree some more careful thought is needed regarding these two 
concepts.  For this PR, I'll just throw an `UnsupportedOperationException` here.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] gresockj commented on a change in pull request #5369: NIFI-9003: Adding ParameterProviders to NiFi

2021-09-30 Thread GitBox


gresockj commented on a change in pull request #5369:
URL: https://github.com/apache/nifi/pull/5369#discussion_r719262030



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/AbstractParameterResource.java
##
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.web.api;
+
+import org.apache.nifi.authorization.user.NiFiUser;
+import org.apache.nifi.cluster.manager.NodeResponse;
+import org.apache.nifi.web.api.entity.ParameterContextEntity;
+import org.apache.nifi.web.util.LifecycleManagementException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.ws.rs.HttpMethod;
+import java.net.URI;
+import java.util.Map;
+
+public abstract class AbstractParameterResource extends ApplicationResource {

Review comment:
   This method needs access to the protected methods 
`getReplicationTarget()` and `getRequestReplicator()`.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1178: MINIFICPP-1643 Add Managed Identity support for Azure processors

2021-09-30 Thread GitBox


lordgamez commented on a change in pull request #1178:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1178#discussion_r719208083



##
File path: libminifi/test/azure-tests/PutAzureBlobStorageTests.cpp
##
@@ -283,24 +278,75 @@ TEST_CASE_METHOD(PutAzureBlobStorageTestsFixture, "Test 
credentials settings", "
 REQUIRE(failed_flowfiles.size() == 1);
 REQUIRE(failed_flowfiles[0] == TEST_DATA);
   }
+
+  SECTION("Account name and managed identity are used in properties") {
+plan->setProperty(put_azure_blob_storage, "Storage Account Name", 
STORAGE_ACCOUNT_NAME);
+plan->setProperty(put_azure_blob_storage, "Use Managed Identity 
Credentials", "true");
+test_controller.runSession(plan, true);
+REQUIRE(getFailedFlowFileContents().size() == 0);
+auto passed_params = mock_blob_storage_ptr->getPassedParams();
+REQUIRE(passed_params.credentials.buildConnectionString().empty());
+REQUIRE(passed_params.credentials.getStorageAccountName() == 
STORAGE_ACCOUNT_NAME);

Review comment:
   Changed it in both Blob and Data Lake storage tests with some assertion 
order changes to be more logical in 184b900f78ed72755f3d0e1876cc69d694da7a9e




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1178: MINIFICPP-1643 Add Managed Identity support for Azure processors

2021-09-30 Thread GitBox


lordgamez commented on a change in pull request #1178:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1178#discussion_r719207278



##
File path: extensions/azure/storage/AzureDataLakeStorageClient.cpp
##
@@ -20,19 +20,28 @@
 
 #include "AzureDataLakeStorageClient.h"
 
+#include "azure/identity.hpp"
+
 namespace org::apache::nifi::minifi::azure::storage {
 
-void AzureDataLakeStorageClient::resetClientIfNeeded(const std::string& 
connection_string, const std::string& file_system_name) {
-  if (client_ == nullptr || connection_string_ != connection_string || 
file_system_name_ != file_system_name) {
-    client_ = std::make_unique<Azure::Storage::Files::DataLake::DataLakeFileSystemClient>(
-      Azure::Storage::Files::DataLake::DataLakeFileSystemClient::CreateFromConnectionString(connection_string, file_system_name));
+void AzureDataLakeStorageClient::resetClientIfNeeded(const 
AzureStorageCredentials& credentials, const std::string& file_system_name) {
+  if (client_ == nullptr || credentials_ != credentials || file_system_name_ 
!= file_system_name) {

Review comment:
   Updated in 184b900f78ed72755f3d0e1876cc69d694da7a9e

##
File path: extensions/azure/storage/DataLakeStorageClient.h
##
@@ -20,24 +20,27 @@
 #pragma once
 
 #include 
+#include 
+
+#include "AzureStorageClient.h"
 
 #include "gsl/gsl-lite.hpp"
 
 namespace org::apache::nifi::minifi::azure::storage {
 
 struct PutAzureDataLakeStorageParameters {
-  std::string connection_string;
+  AzureStorageCredentials credentials;
   std::string file_system_name;
   std::string directory_name;
   std::string filename;
   bool replace_file = false;
 };
 
-class DataLakeStorageClient {
+class DataLakeStorageClient : public AzureStorageClient {
  public:
   virtual bool createFile(const PutAzureDataLakeStorageParameters& params) = 0;
   virtual std::string uploadFile(const PutAzureDataLakeStorageParameters& 
params, gsl::span buffer) = 0;
-  virtual ~DataLakeStorageClient() {}
+  virtual ~DataLakeStorageClient() = default;

Review comment:
   Removed in 184b900f78ed72755f3d0e1876cc69d694da7a9e




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1178: MINIFICPP-1643 Add Managed Identity support for Azure processors

2021-09-30 Thread GitBox


lordgamez commented on a change in pull request #1178:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1178#discussion_r719207167



##
File path: extensions/azure/storage/AzureBlobStorageClient.cpp
##
@@ -0,0 +1,65 @@
+/**
+ * @file AzureBlobStorageClient.cpp
+ * AzureBlobStorageClient class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "AzureBlobStorageClient.h"
+
+#include "azure/identity.hpp"
+
+namespace org::apache::nifi::minifi::azure::storage {
+
+void AzureBlobStorageClient::resetClientIfNeeded(const AzureStorageCredentials &credentials, const std::string &container_name) {
+  if (container_client_ && credentials == credentials_ && container_name == 
container_name_) {
+logger_->log_debug("Client credentials have not changed, no need to reset 
client");
+return;
+  }
+
+  if (credentials.getUseManagedIdentityCredentials()) {
+logger_->log_debug("Client has been reset with new managed identity 
credentials.");

Review comment:
   I could not check all internal implementation so it could be possible, I 
moved the logs to the end in 184b900f78ed72755f3d0e1876cc69d694da7a9e




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1178: MINIFICPP-1643 Add Managed Identity support for Azure processors

2021-09-30 Thread GitBox


lordgamez commented on a change in pull request #1178:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1178#discussion_r719206363



##
File path: 
extensions/azure/controllerservices/AzureStorageCredentialsService.cpp
##
@@ -22,53 +22,69 @@
 
 #include "core/Resource.h"
 
-namespace org {
-namespace apache {
-namespace nifi {
-namespace minifi {
-namespace azure {
-namespace controllers {
+namespace org::apache::nifi::minifi::azure::controllers {
 
 const core::Property AzureStorageCredentialsService::StorageAccountName(
 core::PropertyBuilder::createProperty("Storage Account Name")
   ->withDescription("The storage account name.")
+  ->supportsExpressionLanguage(true)
   ->build());
 const core::Property AzureStorageCredentialsService::StorageAccountKey(
 core::PropertyBuilder::createProperty("Storage Account Key")
   ->withDescription("The storage account key. This is an admin-like 
password providing access to every container in this account. "
 "It is recommended one uses Shared Access Signature 
(SAS) token instead for fine-grained control with policies.")
+  ->supportsExpressionLanguage(true)
   ->build());
 const core::Property AzureStorageCredentialsService::SASToken(
 core::PropertyBuilder::createProperty("SAS Token")
-  ->withDescription("Shared Access Signature token. Specify either SAS 
Token (recommended) or Account Key.")
+  ->withDescription("Shared Access Signature token. Specify either SAS 
Token (recommended) or Account Key together with Storage Account Key if Managed 
Identity is not used.")
+  ->supportsExpressionLanguage(true)
   ->build());
 const core::Property 
AzureStorageCredentialsService::CommonStorageAccountEndpointSuffix(
 core::PropertyBuilder::createProperty("Common Storage Account Endpoint 
Suffix")
   ->withDescription("Storage accounts in public Azure always use a common 
FQDN suffix. Override this endpoint suffix with a "
 "different suffix in certain circumstances (like Azure 
Stack or non-public Azure regions).")
+  ->supportsExpressionLanguage(true)
   ->build());
 const core::Property AzureStorageCredentialsService::ConnectionString(
   core::PropertyBuilder::createProperty("Connection String")
-->withDescription("Connection string used to connect to Azure Storage 
service. This overrides all other set credential properties.")
+->withDescription("Connection string used to connect to Azure Storage 
service. This overrides all other set credential properties if Managed Identity 
is not used.")
+->supportsExpressionLanguage(true)
+->build());
+const core::Property 
AzureStorageCredentialsService::UseManagedIdentityCredentials(
+  core::PropertyBuilder::createProperty("Use Managed Identity Credentials")
+->withDescription("If true Managed Identity credentials will be used 
together with the Storage Account Name for authentication.")
+->isRequired(true)
+->withDefaultValue(false)
 ->build());
 
 void AzureStorageCredentialsService::initialize() {
-  setSupportedProperties({StorageAccountName, StorageAccountKey, SASToken, 
CommonStorageAccountEndpointSuffix, ConnectionString});
+  setSupportedProperties({StorageAccountName, StorageAccountKey, SASToken, 
CommonStorageAccountEndpointSuffix, ConnectionString, 
UseManagedIdentityCredentials});
 }
 
 void AzureStorageCredentialsService::onEnable() {
-  getProperty(StorageAccountName.getName(), credentials_.storage_account_name);
-  getProperty(StorageAccountKey.getName(), credentials_.storage_account_key);
-  getProperty(SASToken.getName(), credentials_.sas_token);
-  getProperty(CommonStorageAccountEndpointSuffix.getName(), 
credentials_.endpoint_suffix);
-  getProperty(ConnectionString.getName(), credentials_.connection_string);
+  std::string value;
+  if (getProperty(StorageAccountName.getName(), value)) {

Review comment:
   You are right, I'm not sure why it was used in AWS controller service 
and also documented that way. I removed all signs of expression language 
support for controller services in 184b900f78ed72755f3d0e1876cc69d694da7a9e

##
File path: extensions/azure/storage/AzureStorageCredentials.h
##
@@ -21,53 +21,31 @@
 
 #include 
 
-#include "utils/StringUtils.h"
-
-namespace org {
-namespace apache {
-namespace nifi {
-namespace minifi {
-namespace azure {
-namespace storage {
-
-struct AzureStorageCredentials {
-  std::string storage_account_name;
-  std::string storage_account_key;
-  std::string sas_token;
-  std::string endpoint_suffix;
-  std::string connection_string;
-
-  std::string getConnectionString() const {
-if (!connection_string.empty()) {
-  return connection_string;
-}
-
-if (storage_account_name.empty() || (storage_account_key.empty() && 
sas_token.empty())) {
-  return "";
-}
-
-std::string credentials;
-credentials += "AccountName=" + storage_account_name;
-
-if 

[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1178: MINIFICPP-1643 Add Managed Identity support for Azure processors

2021-09-30 Thread GitBox


lordgamez commented on a change in pull request #1178:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1178#discussion_r719205325



##
File path: CONTROLLERS.md
##
@@ -56,8 +56,9 @@ default values, and whether a property supports the NiFi 
Expression Language.
 
 | Name | Default Value | Allowable Values | Expression Language Supported? | 
Description |
 | - | - | - | - | - |
-|Storage Account NameThe storage account name.|
-|Storage Account KeyThe storage account key. This is an admin-like 
password providing access to every container in this account. It is recommended 
one uses Shared Access Signature (SAS) token instead for fine-grained control 
with policies.|
-|SAS TokenShared Access Signature token. Specify either SAS Token 
(recommended) or Account Key.|
+|Storage Account Name|||Yes|The storage account name.|
+|Storage Account Key|||Yes|The storage account key. This is an admin-like 
password providing access to every container in this account. It is recommended 
one uses Shared Access Signature (SAS) token instead for fine-grained control 
with policies.|
+|SAS Token|||Yes|Shared Access Signature token. Specify either SAS Token 
(recommended) or Account Key together with Storage Account Key if Managed 
Identity is not used.|

Review comment:
   Updated the descriptions in 184b900f78ed72755f3d0e1876cc69d694da7a9e




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1181: MINIFICPP-1650: ProcessSession::append sets the flowfile size

2021-09-30 Thread GitBox


martinzink commented on a change in pull request #1181:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1181#discussion_r719196114



##
File path: libminifi/src/core/ProcessSession.cpp
##
@@ -269,14 +269,15 @@ void ProcessSession::append(const 
std::shared_ptr , OutputS
 }
 // Call the callback to write the content
 
-size_t oldPos = stream->size();
+size_t flow_file_size = flow->getSize();
+size_t stream_size_before_callback = stream->size();
 // this prevents an issue if we write, above, with zero length.
-if (oldPos > 0)
-  stream->seek(oldPos + 1);
+if (stream_size_before_callback > 0)
+  stream->seek(stream_size_before_callback + 1);
 if (callback->process(stream) < 0) {
   throw Exception(FILE_OPERATION_EXCEPTION, "Failed to process flowfile 
content");
 }
-flow->setSize(stream->size());
+flow->setSize(flow_file_size - stream_size_before_callback + 
stream->size());

Review comment:
   good idea, changed it in 
https://github.com/apache/nifi-minifi-cpp/pull/1181/commits/4debe9813863f259af20e6919a8521b5ef97d28c




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1181: MINIFICPP-1650: ProcessSession::append sets the flowfile size

2021-09-30 Thread GitBox


adamdebreceni commented on a change in pull request #1181:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1181#discussion_r719173733



##
File path: libminifi/src/core/ProcessSession.cpp
##
@@ -269,14 +269,15 @@ void ProcessSession::append(const 
std::shared_ptr , OutputS
 }
 // Call the callback to write the content
 
-size_t oldPos = stream->size();
+size_t flow_file_size = flow->getSize();
+size_t stream_size_before_callback = stream->size();
 // this prevents an issue if we write, above, with zero length.
-if (oldPos > 0)
-  stream->seek(oldPos + 1);
+if (stream_size_before_callback > 0)
+  stream->seek(stream_size_before_callback + 1);
 if (callback->process(stream) < 0) {
   throw Exception(FILE_OPERATION_EXCEPTION, "Failed to process flowfile 
content");
 }
-flow->setSize(stream->size());
+flow->setSize(flow_file_size - stream_size_before_callback + 
stream->size());

Review comment:
   nitpick: `flow_file_size + (stream->size() - 
stream_size_before_callback)` might convey more "intention"




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1178: MINIFICPP-1643 Add Managed Identity support for Azure processors

2021-09-30 Thread GitBox


lordgamez commented on a change in pull request #1178:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1178#discussion_r719173632



##
File path: extensions/azure/storage/AzureBlobStorage.cpp
##
@@ -23,61 +23,39 @@
 #include 
 #include 
 
-namespace org {
-namespace apache {
-namespace nifi {
-namespace minifi {
-namespace azure {
-namespace storage {
+#include "azure/identity.hpp"
+#include "AzureBlobStorageClient.h"
 
-AzureBlobStorage::AzureBlobStorage(std::string connection_string, std::string container_name)
-  : BlobStorage(std::move(connection_string), std::move(container_name))
-  , container_client_(std::make_unique<Azure::Storage::Blobs::BlobContainerClient>(
-      Azure::Storage::Blobs::BlobContainerClient::CreateFromConnectionString(connection_string_, container_name_))) {
-}
+namespace org::apache::nifi::minifi::azure::storage {
 
-void AzureBlobStorage::resetClientIfNeeded(const std::string &connection_string, const std::string &container_name) {
-  if (connection_string == connection_string_ && container_name_ == 
container_name) {
-logger_->log_debug("Client credentials have not changed, no need to reset 
client");
-return;
-  }
-  connection_string_ = connection_string;
-  container_name_ = container_name;
-  logger_->log_debug("Client has been reset with new credentials");
-  container_client_ = std::make_unique<Azure::Storage::Blobs::BlobContainerClient>(Azure::Storage::Blobs::BlobContainerClient::CreateFromConnectionString(connection_string, container_name));
+AzureBlobStorage::AzureBlobStorage(std::unique_ptr 
blob_storage_client)
+  : blob_storage_client_(blob_storage_client ? std::move(blob_storage_client) 
: std::make_unique()) {
 }
 
-void AzureBlobStorage::createContainer() {
+std::optional AzureBlobStorage::createContainerIfNotExists(const 
PutAzureBlobStorageParameters& params) {

Review comment:
   Currently we only care if there was an error or not while trying to 
create the container, but I would keep the possibility to check if the 
container already existed or not if we want to differentiate later.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (MINIFICPP-1654) Extend c2 protocol to handle processor attribute requirements/guarantees

2021-09-30 Thread Adam Debreceni (Jira)
Adam Debreceni created MINIFICPP-1654:
-

 Summary: Extend c2 protocol to handle processor attribute 
requirements/guarantees
 Key: MINIFICPP-1654
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1654
 Project: Apache NiFi MiNiFi C++
  Issue Type: New Feature
Reporter: Adam Debreceni
Assignee: Adam Debreceni


Currently the c2 protocol does not contain information on a processor's input 
attribute requirements or the guaranteed attributes on its output 
relationships. Create a framework that allows each processor to announce such 
properties through the agent manifest.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (MINIFICPP-1642) Add information regarding the read/written attributes to processor information

2021-09-30 Thread Adam Debreceni (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Debreceni updated MINIFICPP-1642:
--
Issue Type: Epic  (was: New Feature)

> Add information regarding the read/written attributes to processor information
> --
>
> Key: MINIFICPP-1642
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1642
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Epic
>Reporter: Adam Debreceni
>Assignee: Adam Debreceni
>Priority: Major
>
> We would like to receive the read/written attributes as part of a processor 
> description in the agent manifest, when sent through the c2 protocol.
> We would also like to get a warning in case a processor reads/writes 
> attributes not listed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1181: MINIFICPP-1650: ProcessSession::append sets the flowfile size

2021-09-30 Thread GitBox


martinzink commented on a change in pull request #1181:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1181#discussion_r719129867



##
File path: libminifi/src/core/ProcessSession.cpp
##
@@ -269,14 +269,15 @@ void ProcessSession::append(const std::shared_ptr<core::FlowFile> &flow, OutputS
 }
 // Call the callback to write the content
 
-size_t oldPos = stream->size();
+size_t flow_file_size = flow->getSize();
+size_t stream_size_before_callback = stream->size();
 // this prevents an issue if we write, above, with zero length.
-if (oldPos > 0)
-  stream->seek(oldPos + 1);
+if (stream_size_before_callback > 0)
+  stream->seek(stream_size_before_callback + 1);
 if (callback->process(stream) < 0) {
   throw Exception(FILE_OPERATION_EXCEPTION, "Failed to process flowfile content");
 }
-flow->setSize(stream->size());
+flow->setSize(flow_file_size - stream_size_before_callback + stream->size());

Review comment:
   The problem was that ContentSession::write with WriteMode::APPEND can hand 
back a new stream, which is merged into the original stream later (see 
ContentSession::extendedResources_). However, the flowfile size is only updated 
here, so appending to a flowfile overwrote its size with just the appended 
content's size.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



