[GitHub] [nifi] exceptionfactory commented on a change in pull request #4852: NIFI-8212: Refactored StandardExtensionDiscoveringManager to avoid us…
exceptionfactory commented on a change in pull request #4852:
URL: https://github.com/apache/nifi/pull/4852#discussion_r584030215

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/repository/scheduling/ConnectableProcessContext.java

@@ -41,18 +33,31 @@
     import org.apache.nifi.scheduling.ExecutionNode;
     import org.apache.nifi.util.Connectables;
    +import java.util.Collection;
    +import java.util.Collections;
    +import java.util.HashMap;
    +import java.util.HashSet;
    +import java.util.Map;
    +import java.util.Set;
    +import java.util.concurrent.TimeUnit;
    +import java.util.function.Supplier;
    +
     /**
      * This class is essentially an empty shell for {@link Connectable}s that are not Processors
      */
     public class ConnectableProcessContext implements ProcessContext {
         private final Connectable connectable;
    -    private final PropertyEncryptor encryptor;
    +    private final Supplier<PropertyEncryptor> encryptorFactory;

Review comment: One other option to consider, instead of changing references from `PropertyEncryptor` to `Supplier<PropertyEncryptor>`, would be to create a lazily initialized `PropertyEncryptor` specific to the stateless engine. This implementation would take the sensitive properties key as a constructor argument, but only call `PropertyEncryptorFactory` on the initial calls to `encrypt()` or `decrypt()`. Not a big difference either way, just fewer classes impacted.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
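The lazily initialized alternative suggested in the comment above can be sketched as follows. This is an illustrative reconstruction, not the actual NiFi implementation: `PropertyEncryptor` is reduced here to a two-method stand-in for the NiFi interface, and `LazyPropertyEncryptor` and its factory `Function` are hypothetical names.

```java
import java.util.function.Function;

// Simplified stand-in for the NiFi PropertyEncryptor interface (assumption).
interface PropertyEncryptor {
    String encrypt(String plaintext);
    String decrypt(String ciphertext);
}

// Sketch of the reviewer's suggestion: hold the sensitive properties key,
// but defer the expensive factory call until encrypt()/decrypt() is first used.
final class LazyPropertyEncryptor implements PropertyEncryptor {
    private final String sensitivePropsKey;
    private final Function<String, PropertyEncryptor> factory;
    private volatile PropertyEncryptor delegate;

    LazyPropertyEncryptor(final String sensitivePropsKey, final Function<String, PropertyEncryptor> factory) {
        this.sensitivePropsKey = sensitivePropsKey;
        this.factory = factory;
    }

    // Double-checked locking: one volatile read on the fast path,
    // synchronization only around the first initialization.
    private PropertyEncryptor delegate() {
        PropertyEncryptor result = delegate;
        if (result == null) {
            synchronized (this) {
                result = delegate;
                if (result == null) {
                    delegate = result = factory.apply(sensitivePropsKey);
                }
            }
        }
        return result;
    }

    @Override
    public String encrypt(final String plaintext) {
        return delegate().encrypt(plaintext);
    }

    @Override
    public String decrypt(final String ciphertext) {
        return delegate().decrypt(ciphertext);
    }
}
```

Callers would then keep passing around a plain `PropertyEncryptor`, which is the "fewer classes impacted" benefit the comment mentions.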
[GitHub] [nifi] mtien-apache commented on pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on pull request #4846:
URL: https://github.com/apache/nifi/pull/4846#issuecomment-786974249

Just pushed a commit:
- Hide the Upload File button when grouping components @pvillard31
- Refactored `uploadProcessGroup` and exception handling @exceptionfactory
- Refactored `nf-ng-group-component.js` @sardell

Thanks!
[GitHub] [nifi] exceptionfactory commented on a change in pull request #4843: NIFI-8258: Add support for Service Principal authentication in ADLS p…
exceptionfactory commented on a change in pull request #4843:
URL: https://github.com/apache/nifi/pull/4843#discussion_r583961824

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/ADLSCredentialsControllerService.java

@@ -97,16 +142,38 @@
     boolean sasTokenSet = StringUtils.isNotBlank(validationContext.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).getValue());
     boolean useManagedIdentitySet = validationContext.getProperty(USE_MANAGED_IDENTITY).asBoolean();
    -if (!onlyOneSet(accountKeySet, sasTokenSet, useManagedIdentitySet)) {
    -    StringJoiner options = new StringJoiner(", ")
    -        .add(AzureStorageUtils.ACCOUNT_KEY.getDisplayName())
    -        .add(AzureStorageUtils.PROP_SAS_TOKEN.getDisplayName())
    -        .add(USE_MANAGED_IDENTITY.getDisplayName());
    +boolean servicePrincipalTenantIdSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_TENANT_ID).getValue());
    +boolean servicePrincipalClientIdSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_ID).getValue());
    +boolean servicePrincipalClientSecretSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_SECRET).getValue());
    +boolean servicePrincipalClientCertificateSet = validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_CERTIFICATE).isSet();
    +
    +boolean servicePrincipalSet = servicePrincipalTenantIdSet || servicePrincipalClientIdSet || servicePrincipalClientSecretSet || servicePrincipalClientCertificateSet;

Review comment: @turcsanyip That sounds like a good approach.
[GitHub] [nifi] turcsanyip commented on a change in pull request #4843: NIFI-8258: Add support for Service Principal authentication in ADLS p…
turcsanyip commented on a change in pull request #4843:
URL: https://github.com/apache/nifi/pull/4843#discussion_r583931321

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/ADLSCredentialsControllerService.java

@@ -97,16 +142,38 @@
     boolean sasTokenSet = StringUtils.isNotBlank(validationContext.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).getValue());
     boolean useManagedIdentitySet = validationContext.getProperty(USE_MANAGED_IDENTITY).asBoolean();
    -if (!onlyOneSet(accountKeySet, sasTokenSet, useManagedIdentitySet)) {
    -    StringJoiner options = new StringJoiner(", ")
    -        .add(AzureStorageUtils.ACCOUNT_KEY.getDisplayName())
    -        .add(AzureStorageUtils.PROP_SAS_TOKEN.getDisplayName())
    -        .add(USE_MANAGED_IDENTITY.getDisplayName());
    +boolean servicePrincipalTenantIdSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_TENANT_ID).getValue());
    +boolean servicePrincipalClientIdSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_ID).getValue());
    +boolean servicePrincipalClientSecretSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_SECRET).getValue());
    +boolean servicePrincipalClientCertificateSet = validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_CERTIFICATE).isSet();
    +
    +boolean servicePrincipalSet = servicePrincipalTenantIdSet || servicePrincipalClientIdSet || servicePrincipalClientSecretSet || servicePrincipalClientCertificateSet;

Review comment: My idea would be:
- In case of `Auth Type = AUTO`: all properties are visible and the current validation runs (e.g. `Account Key` cannot be filled in and `Use Managed Identity` cannot be true at the same time).
- In case of `Auth Type = Account Key`: only the `Account Key` property is visible and only this property is validated (it must be filled in); the other, invisible properties can have any values (it would be weird to show warnings for those properties in this case).
- In case of `Auth Type = Managed Identity`: no other property is visible, not even `Use Managed Identity`, and no further validation is needed; this Auth Type simply turns on Managed Identity authentication regardless of the value of the invisible `Use Managed Identity` property.

With these rules, the visible properties are always consistent. The invisible ones may not be, but that seems acceptable to me. What do you think?
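For reference, the kind of `onlyOneSet` helper removed in the hunk above (the "exactly one credential option configured" check that an `Auth Type`-driven design would largely replace) can be sketched as a plain varargs count. This is an illustrative reconstruction, not the actual NiFi code.

```java
// Hypothetical sketch of an "exactly one of these options is configured" check.
final class CredentialValidation {
    // Returns true only when exactly one of the supplied flags is set.
    static boolean onlyOneSet(final boolean... flags) {
        int count = 0;
        for (final boolean flag : flags) {
            if (flag) {
                count++;
            }
        }
        return count == 1;
    }
}
```

With the service principal properties added, the call site would pass a fourth aggregate flag, e.g. `onlyOneSet(accountKeySet, sasTokenSet, useManagedIdentitySet, servicePrincipalSet)`.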
[GitHub] [nifi] jfrazee commented on a change in pull request #4702: NIFI-8063: Added profile (enabled) to include most NARs, can be disabled
jfrazee commented on a change in pull request #4702:
URL: https://github.com/apache/nifi/pull/4702#discussion_r583923559

## File path: nifi-assembly/pom.xml

@@ -451,356 +218,597 @@
(Large quoted hunk rearranging the assembly's NAR dependency entries, all at version 1.13.0-SNAPSHOT: entries such as nifi-azure-nar, nifi-azure-services-api-nar, nifi-scripting-nar, nifi-groovyx-nar, nifi-elasticsearch-nar, nifi-elasticsearch-client-service-api-nar, nifi-elasticsearch-client-service-nar, nifi-elasticsearch-restapi-nar, nifi-lumberjack-nar, nifi-beats-nar, nifi-cybersecurity-nar, nifi-email-nar, nifi-amqp-nar, nifi-splunk-nar, nifi-jms-cf-service-nar, nifi-jms-processors-nar, nifi-cassandra-nar, nifi-cassandra-services-api-nar, nifi-cassandra-services-nar, nifi-spring-nar, nifi-hive-services-api-nar, nifi-hive-nar, nifi-site-to-site-reporting-nar, nifi-mqtt-nar, nifi-snmp-nar, nifi-evtx-nar, nifi-slack-nar, nifi-smb-nar and nifi-windows-event-log-nar are removed from this list, apparently moved into the new profile; nifi-registry-nar, nifi-record-serialization-services-nar, nifi-tcp-nar, nifi-kerberos-credentials-service-nar and nifi-proxy-configuration-nar remain; and JAXB/annotation dependencies are added: javax.xml.bind:jaxb-api 2.3.0, com.sun.xml.bind:jaxb-impl 2.3.0, com.sun.xml.bind:jaxb-core 2.3.0, javax.annotation:javax.annotation-api 1.3.2, javax.activation:javax.activation-api 1.2.0.)
[GitHub] [nifi] mtien-apache commented on pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on pull request #4846:
URL: https://github.com/apache/nifi/pull/4846#issuecomment-786894887

@pvillard31 Agreed, the upload file button should be removed when a user clicks on the "Group" button. Will make the change.
[GitHub] [nifi] mtien-apache commented on a change in pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on a change in pull request #4846:
URL: https://github.com/apache/nifi/pull/4846#discussion_r583918059

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ProcessGroupResource.java

@@ -4173,6 +4187,205 @@ private void sanitizeRegistryInfo(final VersionedProcessGroup versionedProcessGr
         }
     }
    +/**
    + * Uploads the specified versioned flow definition and adds it to a new process group.
    + *
    + * @param httpServletRequest request
    + * @param in The flow definition stream
    + * @return A processGroupEntity
    + * @throws IOException if there is an error during deserialization of the InputStream
    + */
    +@POST
    +@Consumes(MediaType.MULTIPART_FORM_DATA)
    +@Produces(MediaType.APPLICATION_JSON)
    +@Path("{id}/process-groups/upload")
    +@ApiOperation(
    +    value = "Uploads a versioned flow definition and creates a process group",
    +    response = ProcessGroupEntity.class,
    +    authorizations = {
    +        @Authorization(value = "Write - /process-groups/{uuid}")
    +    }
    +)
    +@ApiResponses(
    +    value = {
    +        @ApiResponse(code = 400, message = "NiFi was unable to complete the request because it was invalid. The request should not be retried without modification."),
    +        @ApiResponse(code = 401, message = "Client could not be authenticated."),
    +        @ApiResponse(code = 403, message = "Client is not authorized to make this request."),
    +        @ApiResponse(code = 404, message = "The specified resource could not be found."),
    +        @ApiResponse(code = 409, message = "The request was valid but NiFi was not in the appropriate state to process it. Retrying the same request later may be successful.")
    +    }
    +)
    +public Response uploadProcessGroup(
    +    @Context final HttpServletRequest httpServletRequest,
    +    @ApiParam(value = "The process group id.", required = true)
    +    @PathParam("id") final String groupId,
    +    @ApiParam(value = "The process group name.", required = true)
    +    @FormDataParam("groupName") final String groupName,
    +    @ApiParam(value = "The process group X position.", required = true)
    +    @FormDataParam("position-x") final Double positionX,
    +    @ApiParam(value = "The process group Y position.", required = true)
    +    @FormDataParam("position-y") final Double positionY,
    +    @ApiParam(value = "The client id.", required = true)
    +    @FormDataParam("clientId") final String clientId,
    +    @ApiParam(value = "Acknowledges that this node is disconnected to allow for mutable requests to proceed.", required = false)
    +    @FormDataParam(DISCONNECTED_NODE_ACKNOWLEDGED) @DefaultValue("false") final Boolean disconnectedNodeAcknowledged,
    +    @FormDataParam("file") final InputStream in) throws IOException {
    +
    +    // ensure the group name is specified
    +    if (StringUtils.isBlank(groupName)) {
    +        throw new IllegalArgumentException("The process group name is required.");
    +    }
    +    if (StringUtils.isBlank(groupId)) {
    +        throw new IllegalArgumentException("The parent process group id is required");
    +    }
    +    if (positionX == null) {
    +        throw new IllegalArgumentException("The x coordinate of the proposed position must be specified.");
    +    }
    +    if (positionY == null) {
    +        throw new IllegalArgumentException("The y coordinate of the proposed position must be specified.");
    +    }
    +    if (StringUtils.isBlank(clientId)) {
    +        throw new IllegalArgumentException("The client id must be specified");
    +    }
    +
    +    // get the contents of the InputStream as a String
    +    String stringContent;
    +    if (in != null) {
    +        try {
    +            stringContent = IOUtils.toString(in, StandardCharsets.UTF_8);
    +        } catch (IOException e) {
    +            throw new IOException("Unable to read the InputStream", e);
    +        }
    +    } else {
    +        logger.warn("The InputStream is null");
    +        throw new NullPointerException("The InputStream is null");
    +    }
    +
    +    // deserialize content to a VersionedFlowSnapshot
    +    VersionedFlowSnapshot deserializedSnapshot;
    +
    +    if (stringContent.length() > 0) {
    +        try {
    +            deserializedSnapshot = MAPPER.readValue(stringContent,
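The endpoint quoted above reads the uploaded multipart file part into a `String` via Commons IO's `IOUtils.toString` before deserializing it. A minimal stdlib-only equivalent of that read-and-null-check step might look like the following; the class and method names are hypothetical, not part of the PR.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Illustrative helper: read a flow definition stream into a UTF-8 String,
// rejecting a missing stream up front (sketch, not the actual endpoint code).
final class FlowUploadSupport {
    static String readFlowDefinition(final InputStream in) {
        if (in == null) {
            throw new IllegalArgumentException("The flow definition stream is required");
        }
        try {
            // InputStream.readAllBytes() (Java 9+) plays the role of IOUtils.toString here.
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        } catch (final IOException e) {
            throw new UncheckedIOException("Unable to read the flow definition stream", e);
        }
    }
}
```

Rejecting a null stream with `IllegalArgumentException` rather than `NullPointerException` is a small design choice here: a missing file part is invalid client input, not a programming error.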
[GitHub] [nifi] pgyori commented on a change in pull request #4847: NIFI-8263: Maximum Thread Pool Size property introduced in ListenHTTP
pgyori commented on a change in pull request #4847:
URL: https://github.com/apache/nifi/pull/4847#discussion_r583917079

## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenHTTP.java

@@ -226,6 +226,20 @@ public AllowableValue getAllowableValue() {
         .defaultValue(ClientAuthentication.AUTO.name())
         .dependsOn(SSL_CONTEXT_SERVICE)
         .build();
    +public static final PropertyDescriptor MAX_THREAD_POOL_SIZE = new PropertyDescriptor.Builder()
    +    .name("max-thread-pool-size")
    +    .displayName("Maximum Thread Pool Size")
    +    .description("The maximum number of threads to be used by the embedded Jetty server. "
    +        + "The value can be set between 8 and 1000. "
    +        + "The value of this property affects the performance of the flows and the operating system, therefore "
    +        + "the default value should only be changed in justified cases. "
    +        + "A value that is less than the default value may be suitable "
    +        + "if only a small number of HTTP clients connect to the server. A greater value may be suitable "
    +        + "if a large number of HTTP clients are expected to make requests to the server simultaneously.")
    +    .required(true)
    +    .addValidator(StandardValidators.createLongValidator(8L, 1000L, true))

Review comment: Thank you, @exceptionfactory!
[GitHub] [nifi] exceptionfactory commented on a change in pull request #4847: NIFI-8263: Maximum Thread Pool Size property introduced in ListenHTTP
exceptionfactory commented on a change in pull request #4847:
URL: https://github.com/apache/nifi/pull/4847#discussion_r583916702

## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenHTTP.java

@@ -226,6 +226,20 @@ public AllowableValue getAllowableValue() {
         .defaultValue(ClientAuthentication.AUTO.name())
         .dependsOn(SSL_CONTEXT_SERVICE)
         .build();
    +public static final PropertyDescriptor MAX_THREAD_POOL_SIZE = new PropertyDescriptor.Builder()
    +    .name("max-thread-pool-size")
    +    .displayName("Maximum Thread Pool Size")
    +    .description("The maximum number of threads to be used by the embedded Jetty server. "
    +        + "The value can be set between 8 and 1000. "
    +        + "The value of this property affects the performance of the flows and the operating system, therefore "
    +        + "the default value should only be changed in justified cases. "
    +        + "A value that is less than the default value may be suitable "
    +        + "if only a small number of HTTP clients connect to the server. A greater value may be suitable "
    +        + "if a large number of HTTP clients are expected to make requests to the server simultaneously.")
    +    .required(true)
    +    .addValidator(StandardValidators.createLongValidator(8L, 1000L, true))

Review comment: Thanks for providing that background @pgyori, the lower boundary makes sense given the behavior of `QueuedThreadPool`. Having `1000` as the upper boundary sounds like a reasonable limit, since more is not always better when it comes to threads. With that explanation and the unit tests, the PR looks good!
[GitHub] [nifi] exceptionfactory commented on a change in pull request #4847: NIFI-8263: Maximum Thread Pool Size property introduced in ListenHTTP
exceptionfactory commented on a change in pull request #4847:
URL: https://github.com/apache/nifi/pull/4847#discussion_r583915058

## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestListenHTTP.java

@@ -405,6 +409,84 @@ public void testSecureServerRejectsUnsupportedTlsProtocolVersion() throws Except
         assertThrows(SSLHandshakeException.class, sslSocket::startHandshake);
     }
    +@Test
    +public void testMaxThreadPoolSizeTooLow() {
    +    // GIVEN, WHEN
    +    runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort));
    +    runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH);
    +    runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, "7");
    +
    +    // THEN
    +    runner.assertNotValid();
    +}
    +
    +@Test
    +public void testMaxThreadPoolSizeTooHigh() {
    +    // GIVEN, WHEN
    +    runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort));
    +    runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH);
    +    runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, "1001");
    +
    +    // THEN
    +    runner.assertNotValid();
    +}
    +
    +@Test
    +public void testMaxThreadPoolSizeOkLowerBound() {
    +    // GIVEN, WHEN
    +    runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort));
    +    runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH);
    +    runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, "8");
    +
    +    // THEN
    +    runner.assertValid();
    +}
    +
    +@Test
    +public void testMaxThreadPoolSizeOkUpperBound() {
    +    // GIVEN, WHEN
    +    runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort));
    +    runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH);
    +    runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, "1000");
    +
    +    // THEN
    +    runner.assertValid();
    +}
    +
    +@Test
    +public void testWhenServerIsStartedCreateQueuedThreadPoolIsCalledWithMaxThreadPoolSize() {
    +    // GIVEN
    +    int maxThreadPoolSize = 200;
    +    runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort));
    +    runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH);
    +    runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, Integer.toString(maxThreadPoolSize));
    +
    +    // WHEN
    +    startWebServer();
    +
    +    // THEN
    +    Mockito.verify(proc).createQueuedThreadPool(maxThreadPoolSize);
    +}
    +
    +@Test
    +public void testWhenServerIsStartedCreateCreateServerIsCalledWithTheRightQueuedThreadPool() {
    +    // GIVEN
    +    int maxThreadPoolSize = 200;
    +    QueuedThreadPool queuedThreadPool = new QueuedThreadPool(maxThreadPoolSize);
    +
    +    runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort));
    +    runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH);
    +    runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, Integer.toString(maxThreadPoolSize));
    +
    +    Mockito.when(proc.createQueuedThreadPool(maxThreadPoolSize)).thenReturn(queuedThreadPool);

Review comment: Thanks for the explanation @pgyori, that makes sense.
[GitHub] [nifi] exceptionfactory commented on a change in pull request #4852: NIFI-8212: Refactored StandardExtensionDiscoveringManager to avoid us…
exceptionfactory commented on a change in pull request #4852:
URL: https://github.com/apache/nifi/pull/4852#discussion_r583908315

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/flow/StandardStatelessDataflowFactory.java

@@ -148,7 +149,19 @@ public StatelessDataflow createDataflow(final StatelessEngineConfiguration engin
     narClassLoaders, extensionClients);
     final VariableRegistry variableRegistry = VariableRegistry.EMPTY_REGISTRY;
    -final PropertyEncryptor encryptor = getPropertyEncryptor(engineConfiguration.getSensitivePropsKey());
    +final Supplier<PropertyEncryptor> encryptorFactory = new Supplier<PropertyEncryptor>() {
    +    private PropertyEncryptor created = null;
    +
    +    @Override
    +    public synchronized PropertyEncryptor get() {

Review comment: Is it worth considering a more optimized locking strategy than method-level synchronization, since this may get called multiple times? Some options include checking the `PropertyEncryptor` instance for null and then synchronizing around that object, or leveraging Apache Commons `AtomicInitializer`, or implementing something along the lines of that class.
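The more optimized locking strategy suggested in the comment above can be sketched as a generic lazy `Supplier` using the double-checked locking idiom: a single volatile read on the fast path, with synchronization only around the first initialization. The `LazySupplier` name is hypothetical, and this sketch assumes the delegate never returns null.

```java
import java.util.function.Supplier;

// Sketch of a lazily initializing Supplier, similar in spirit to
// Apache Commons Lang's AtomicInitializer (illustrative, not NiFi code).
final class LazySupplier<T> implements Supplier<T> {
    private final Supplier<T> delegate;
    private volatile T value;

    LazySupplier(final Supplier<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public T get() {
        T result = value; // single volatile read on the common, already-initialized path
        if (result == null) {
            synchronized (this) { // contended only until the first initialization completes
                result = value;
                if (result == null) {
                    value = result = delegate.get(); // assumed non-null
                }
            }
        }
        return result;
    }
}
```

After the first call, `get()` never takes the lock again, unlike a `synchronized` method which locks on every invocation.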
[GitHub] [nifi] exceptionfactory commented on a change in pull request #4852: NIFI-8212: Refactored StandardExtensionDiscoveringManager to avoid us…
exceptionfactory commented on a change in pull request #4852:
URL: https://github.com/apache/nifi/pull/4852#discussion_r583902446

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/provenance/ComponentIdentifierLookup.java

@@ -46,14 +48,15 @@ public ComponentIdentifierLookup(final FlowController flowController) {
     @Override
     public List getComponentTypes() {
    -    final Set procClasses = flowController.getExtensionManager().getExtensions(Processor.class);
    +    final Set procDefinitions = flowController.getExtensionManager().getExtensions(Processor.class);
    -    final List componentTypes = new ArrayList<>(procClasses.size() + 2);
    +    final List componentTypes = new ArrayList<>(procDefinitions.size() + 2);
         componentTypes.add(ProvenanceEventRecord.REMOTE_INPUT_PORT_TYPE);
         componentTypes.add(ProvenanceEventRecord.REMOTE_OUTPUT_PORT_TYPE);
    -    procClasses.stream()
    -        .map(Class::getSimpleName)
    +    procDefinitions.stream()
    +        .map(ExtensionDefinition::getImplementationClassName)
    +        .map(className -> className.contains(".") ? StringUtils.substringAfterLast(className, ".") : className)

Review comment: The `StandardStatelessEngine` uses the same approach attempting to derive the simple class name; does it make sense to add a method on `ExtensionDefinition` to return the simple class name?

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-nar-utils/src/main/java/org/apache/nifi/nar/NarUnpacker.java

@@ -288,17 +280,20 @@ private static void unpackBundleDocs(final File docsDirectory, final ExtensionMa
      *
      * @param nar the nar to unpack
      * @param baseWorkingDirectory the directory to unpack to
    + * @param verifyHash if the NAR has already been unpacked, indicates whether or not the hash should be verified. If this value is true,
    + *                   and the NAR's hash does not match the hash written to the unpacked directory, the working directory will be deleted and the NAR will be
    + *                   unpacked again. If false, the NAR will not be unpacked again and its hash will not be checked.
      * @return the directory to the unpacked NAR
      * @throws IOException if unable to explode nar
      */
    -public static File unpackNar(final File nar, final File baseWorkingDirectory) throws IOException {
    +public static File unpackNar(final File nar, final File baseWorkingDirectory, final boolean verifyHash) throws IOException {
         final File narWorkingDirectory = new File(baseWorkingDirectory, nar.getName() + "-unpacked");

         // if the working directory doesn't exist, unpack the nar
         if (!narWorkingDirectory.exists()) {
             unpack(nar, narWorkingDirectory, FileDigestUtils.getDigest(nar));
    -    } else {
    -        // the working directory does exist. Run digest against the nar
    +    } else if (verifyHash) {
    +        // the working directory does exist. Run MD5 sum against the nar

Review comment: NIFI-8132 changed the hash to SHA-256, so the previous comment used the general term `digest` as opposed to referencing the particular algorithm. This could be changed to say `SHA-256`, or reverted.

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-nar-utils/src/main/java/org/apache/nifi/nar/StandardExtensionDiscoveringManager.java

@@ -145,74 +152,103 @@ public void discoverExtensions(final Set narBundles) {
      *
      * @param bundle from which to load extensions
      */
    -@SuppressWarnings("unchecked")
     private void loadExtensions(final Bundle bundle) {
    -    for (final Map.Entry entry : definitionMap.entrySet()) {
    -        final boolean isControllerService = ControllerService.class.equals(entry.getKey());
    -        final boolean isProcessor = Processor.class.equals(entry.getKey());
    -        final boolean isReportingTask = ReportingTask.class.equals(entry.getKey());
    -
    -        final ServiceLoader serviceLoader = ServiceLoader.load(entry.getKey(), bundle.getClassLoader());
    -        for (final Object o : serviceLoader) {
    -            try {
    -                loadExtension(o, entry.getKey(), bundle);
    -            } catch (Exception e) {
    -                logger.warn("Failed to register extension {} due to: {}", new Object[]{o.getClass().getCanonicalName(), e.getMessage()});
    -                if (logger.isDebugEnabled()) {
    -                    logger.debug("", e);
    +    for (final Class extensionType : definitionMap.keySet()) {
    +        final String serviceType = extensionType.getName();
    +
    +        try {
    +            final Set serviceResourceUrls = getServiceFileURLs(bundle, extensionType);
    +            logger.debug("Bundle {} has the following Services File URLs for {}: {}", bundle, serviceType,
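The simple-name derivation discussed for `ExtensionDefinition` above (taking everything after the last `.` of the fully qualified implementation class name, mirroring `StringUtils.substringAfterLast`) can be sketched with plain `String` operations; the helper class and method names here are illustrative, not part of the PR.

```java
// Illustrative sketch: derive a simple class name from a fully qualified one.
final class Names {
    static String simpleName(final String className) {
        // lastIndexOf returns -1 when there is no '.', so names without a
        // package are returned unchanged, matching the ternary in the diff.
        final int lastDot = className.lastIndexOf('.');
        return lastDot >= 0 ? className.substring(lastDot + 1) : className;
    }
}
```

Centralizing this on `ExtensionDefinition`, as the reviewer suggests, would avoid repeating the same one-liner in `ComponentIdentifierLookup` and `StandardStatelessEngine`.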
[GitHub] [nifi] jfrazee commented on a change in pull request #4843: NIFI-8258: Add support for Service Principal authentication in ADLS p…
jfrazee commented on a change in pull request #4843:
URL: https://github.com/apache/nifi/pull/4843#discussion_r583913854

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/ADLSCredentialsControllerService.java

@@ -97,16 +142,38 @@
     boolean sasTokenSet = StringUtils.isNotBlank(validationContext.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).getValue());
     boolean useManagedIdentitySet = validationContext.getProperty(USE_MANAGED_IDENTITY).asBoolean();
    -if (!onlyOneSet(accountKeySet, sasTokenSet, useManagedIdentitySet)) {
    -    StringJoiner options = new StringJoiner(", ")
    -        .add(AzureStorageUtils.ACCOUNT_KEY.getDisplayName())
    -        .add(AzureStorageUtils.PROP_SAS_TOKEN.getDisplayName())
    -        .add(USE_MANAGED_IDENTITY.getDisplayName());
    +boolean servicePrincipalTenantIdSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_TENANT_ID).getValue());
    +boolean servicePrincipalClientIdSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_ID).getValue());
    +boolean servicePrincipalClientSecretSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_SECRET).getValue());
    +boolean servicePrincipalClientCertificateSet = validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_CERTIFICATE).isSet();
    +
    +boolean servicePrincipalSet = servicePrincipalTenantIdSet || servicePrincipalClientIdSet || servicePrincipalClientSecretSet || servicePrincipalClientCertificateSet;

Review comment: That makes sense. I think it'll be easier to see how it feels with it in action. We don't have an answer re: the redundancy between `Auth Type = Managed Identity` and `Use Managed Identity = true/false`, right? Is the idea for them to co-exist, with validation ensuring they are consistent?
[GitHub] [nifi] pgyori commented on a change in pull request #4847: NIFI-8263: Maximum Thread Pool Size property introduced in ListenHTTP
pgyori commented on a change in pull request #4847:
URL: https://github.com/apache/nifi/pull/4847#discussion_r583911727

## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenHTTP.java

@@ -226,6 +226,20 @@ public AllowableValue getAllowableValue() {
         .defaultValue(ClientAuthentication.AUTO.name())
         .dependsOn(SSL_CONTEXT_SERVICE)
         .build();
    +public static final PropertyDescriptor MAX_THREAD_POOL_SIZE = new PropertyDescriptor.Builder()
    +    .name("max-thread-pool-size")
    +    .displayName("Maximum Thread Pool Size")
    +    .description("The maximum number of threads to be used by the embedded Jetty server. "
    +        + "The value can be set between 8 and 1000. "
    +        + "The value of this property affects the performance of the flows and the operating system, therefore "
    +        + "the default value should only be changed in justified cases. "
    +        + "A value that is less than the default value may be suitable "
    +        + "if only a small number of HTTP clients connect to the server. A greater value may be suitable "
    +        + "if a large number of HTTP clients are expected to make requests to the server simultaneously.")
    +    .required(true)
    +    .addValidator(StandardValidators.createLongValidator(8L, 1000L, true))

Review comment: Yes, a minimum value of 1 is supported. However, the relevant `QueuedThreadPool` constructor uses a hard-coded minimum of 8 threads if the maximum value is greater than 8. Based on that, I do not think it is advisable to allow a very low number of threads, to avoid performance issues with the processor. Also, there is no limit on the maximum value (except for what an integer can hold), but we want to prevent abuse of this setting because it may lead to performance issues on the entire system NiFi is running on. That said, the upper limit I chose is up for discussion.
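The bounds enforced by `StandardValidators.createLongValidator(8L, 1000L, true)` in the hunk above amount to an inclusive range check on the property value. A standalone sketch of that check follows; the class, method name, and messages are illustrative, not NiFi's validator API.

```java
// Illustrative sketch of an inclusive long-range validation (8..1000),
// returning null when valid and an error message otherwise.
final class ThreadPoolSizeValidator {
    static String validate(final String input) {
        final long value;
        try {
            value = Long.parseLong(input);
        } catch (final NumberFormatException e) {
            return "not a valid integer";
        }
        if (value < 8 || value > 1000) {
            return "must be between 8 and 1000, inclusive";
        }
        return null; // valid
    }
}
```

This mirrors the unit tests in the PR: `"7"` and `"1001"` are rejected while the boundary values `"8"` and `"1000"` are accepted.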
[GitHub] [nifi] pgyori commented on a change in pull request #4847: NIFI-8263: Maximum Thread Pool Size property introduced in ListenHTTP
pgyori commented on a change in pull request #4847: URL: https://github.com/apache/nifi/pull/4847#discussion_r583907815 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestListenHTTP.java ## @@ -405,6 +409,84 @@ public void testSecureServerRejectsUnsupportedTlsProtocolVersion() throws Except assertThrows(SSLHandshakeException.class, sslSocket::startHandshake); } +@Test +public void testMaxThreadPoolSizeTooLow() { +// GIVEN, WHEN +runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort)); +runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH); +runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, "7"); + +// THEN +runner.assertNotValid(); +} + +@Test +public void testMaxThreadPoolSizeTooHigh() { +// GIVEN, WHEN +runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort)); +runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH); +runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, "1001"); + +// THEN +runner.assertNotValid(); +} + +@Test +public void testMaxThreadPoolSizeOkLowerBound() { +// GIVEN, WHEN +runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort)); +runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH); +runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, "8"); + +// THEN +runner.assertValid(); +} + +@Test +public void testMaxThreadPoolSizeOkUpperBound() { +// GIVEN, WHEN +runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort)); +runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH); +runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, "1000"); + +// THEN +runner.assertValid(); +} + +@Test +public void testWhenServerIsStartedCreateQueuedThreadPoolIsCalledWithMaxThreadPoolSize() { +// GIVEN +int maxThreadPoolSize = 200; +runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort)); +runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH); +runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, 
Integer.toString(maxThreadPoolSize)); + +// WHEN +startWebServer(); + +// THEN +Mockito.verify(proc).createQueuedThreadPool(maxThreadPoolSize); +} + +@Test +public void testWhenServerIsStartedCreateCreateServerIsCalledWithTheRightQueuedThreadPool() { +// GIVEN +int maxThreadPoolSize = 200; +QueuedThreadPool queuedThreadPool = new QueuedThreadPool(maxThreadPoolSize); + +runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort)); +runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH); +runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, Integer.toString(maxThreadPoolSize)); + + Mockito.when(proc.createQueuedThreadPool(maxThreadPoolSize)).thenReturn(queuedThreadPool); Review comment: I don't think we need to test whether the Server(ThreadPool pool) constructor works correctly, because that should be covered in the Jetty server's unit tests. All we need to make sure is that the constructor is called with the right parameter. The other thing is that the server creation is called from onTrigger. I believe that what we need to test here is not that the server creator method can correctly create a server (in isolation) with the right thread pool size, but that when the processor is started, server creation is executed with the right parameter. That is why I called startWebServer() in the unit test, which calls onTrigger() in which all the server initialization happens, and I needed the Spy to make sure that the createServer() method is called with the right parameter object. This way, if someone removes my modification from the code by replacing the new: final Server server = createServer(threadPool); with the old: final Server server = new Server(threadPool); the unit test will fail. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
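The testing strategy described above (extract server creation into a method so a spy can verify the argument it receives) can be illustrated without Mockito by overriding the factory method and recording calls. All class names here are illustrative stand-ins, not the NiFi or Jetty types:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of the seam being tested: routing server creation
// through an overridable factory method lets a test record the argument it
// was called with. If production code bypasses the seam with `new Server(...)`,
// the recorded list stays empty and the test fails.
public class FactorySeamDemo {
    static class Server {
        final int maxThreads;
        Server(int maxThreads) { this.maxThreads = maxThreads; }
    }

    static class Processor {
        // The seam: production code must go through this method.
        Server createServer(int maxThreads) { return new Server(maxThreads); }

        // Stand-in for the onTrigger/startWebServer path in the PR.
        Server start(int maxThreads) { return createServer(maxThreads); }
    }

    // Hand-rolled "spy": records every argument passed to the factory.
    static class SpyProcessor extends Processor {
        final List<Integer> recordedSizes = new ArrayList<>();

        @Override
        Server createServer(int maxThreads) {
            recordedSizes.add(maxThreads);
            return super.createServer(maxThreads);
        }
    }

    public static void main(String[] args) {
        SpyProcessor spy = new SpyProcessor();
        spy.start(200);
        System.out.println(spy.recordedSizes); // prints [200]
    }
}
```

In the real test, Mockito's `@Spy` plays the role of `SpyProcessor` and `Mockito.verify(proc).createServer(threadPool)` plays the role of checking `recordedSizes`.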
[jira] [Commented] (NIFI-8230) Remove default Sensitive Properties Key
[ https://issues.apache.org/jira/browse/NIFI-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17291910#comment-17291910 ] David Handermann commented on NIFI-8230: Thanks for the feedback [~cef111]! I appreciate your contribution and look forward to your feedback when I submit the PR. > Remove default Sensitive Properties Key > --- > > Key: NIFI-8230 > URL: https://issues.apache.org/jira/browse/NIFI-8230 > Project: Apache NiFi > Issue Type: Sub-task > Components: Security >Affects Versions: 1.13.0 >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > > Support for encryption of sensitive properties relies on configuration of the > Sensitive Properties Key specified using {{nifi.sensitive.props.key}} in > {{nifi.properties}}. The default behavior of {{StringEncryptor}} allows for > the key to be blank and falls back to a default value, logging a verbose > error message indicating that an explicit key should be provided. > The fallback to a default value for the Sensitive Properties Key should be > removed and an exception should be thrown indicating that the property value > is required. Deployments that already have an explicit value will not be > impacted. Migration guidance for upgrading should include steps to encrypt > the configuration using a new key. > It may be worthwhile generating a random Sensitive Properties Key for new > installations where there is no existing flow. This would allow new standalone > installations to run with a secure key without the need for manual steps. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] exceptionfactory commented on a change in pull request #4847: NIFI-8263: Maximum Thread Pool Size property introduced in ListenHTTP
exceptionfactory commented on a change in pull request #4847: URL: https://github.com/apache/nifi/pull/4847#discussion_r583893378 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestListenHTTP.java ## @@ -405,6 +409,84 @@ public void testSecureServerRejectsUnsupportedTlsProtocolVersion() throws Except assertThrows(SSLHandshakeException.class, sslSocket::startHandshake); } +@Test +public void testMaxThreadPoolSizeTooLow() { +// GIVEN, WHEN +runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort)); +runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH); +runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, "7"); + +// THEN +runner.assertNotValid(); +} + +@Test +public void testMaxThreadPoolSizeTooHigh() { +// GIVEN, WHEN +runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort)); +runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH); +runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, "1001"); + +// THEN +runner.assertNotValid(); +} + +@Test +public void testMaxThreadPoolSizeOkLowerBound() { +// GIVEN, WHEN +runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort)); +runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH); +runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, "8"); + +// THEN +runner.assertValid(); +} + +@Test +public void testMaxThreadPoolSizeOkUpperBound() { +// GIVEN, WHEN +runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort)); +runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH); +runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, "1000"); + +// THEN +runner.assertValid(); +} + +@Test +public void testWhenServerIsStartedCreateQueuedThreadPoolIsCalledWithMaxThreadPoolSize() { +// GIVEN +int maxThreadPoolSize = 200; +runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort)); +runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH); +runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, 
Integer.toString(maxThreadPoolSize)); + +// WHEN +startWebServer(); + +// THEN +Mockito.verify(proc).createQueuedThreadPool(maxThreadPoolSize); +} + +@Test +public void testWhenServerIsStartedCreateCreateServerIsCalledWithTheRightQueuedThreadPool() { +// GIVEN +int maxThreadPoolSize = 200; +QueuedThreadPool queuedThreadPool = new QueuedThreadPool(maxThreadPoolSize); + +runner.setProperty(ListenHTTP.PORT, Integer.toString(availablePort)); +runner.setProperty(ListenHTTP.BASE_PATH, HTTP_BASE_PATH); +runner.setProperty(ListenHTTP.MAX_THREAD_POOL_SIZE, Integer.toString(maxThreadPoolSize)); + + Mockito.when(proc.createQueuedThreadPool(maxThreadPoolSize)).thenReturn(queuedThreadPool); Review comment: Instead of exposing `createThreadPool()` as a protected method for testing, it should only be necessary to expose `createServer()` as protected. The Jetty `Server` class has a `getThreadPool()` method which can then be used for validation. The return interface `ThreadPool` is generic, but it is possible to check that it is an instance of the public `SizedThreadPool` interface and then compare the results of `getMaxThreads()`. That would remove the need to use the Mockito `Spy` annotation on the processor and allow for a simple `assertEquals(maxThreadPoolSize, sizedThreadPool.getMaxThreads())` ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ListenHTTP.java ## @@ -226,6 +226,20 @@ public AllowableValue getAllowableValue() { .defaultValue(ClientAuthentication.AUTO.name()) .dependsOn(SSL_CONTEXT_SERVICE) .build(); +public static final PropertyDescriptor MAX_THREAD_POOL_SIZE = new PropertyDescriptor.Builder() +.name("max-thread-pool-size") +.displayName("Maximum Thread Pool Size") +.description("The maximum number of threads to be used by the embedded Jetty server. " ++ "The value can be set between 8 and 1000. 
" ++ "The value of this property affects the performance of the flows and the operating system, therefore " ++ "the default value should only be changed in justified cases. " ++ "A value that is less than the default value may be suitable " ++ "if only a small number of HTTP clients connect to the server. A greater value may be suitable " ++ "if a large number of HTTP clients are expected to make requests to the server
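The alternative assertion style suggested in this review can be sketched with stand-in interfaces; in the real test these would be Jetty's `ThreadPool` and `ThreadPool.SizedThreadPool` types and the pool returned by `Server.getThreadPool()`:

```java
// Sketch of the suggested spy-free assertion: the generic ThreadPool is
// checked to be a SizedThreadPool, then its max threads are compared.
// These nested types are illustrative stand-ins, not the Jetty classes.
public class SizedThreadPoolCheck {
    interface ThreadPool { }
    interface SizedThreadPool extends ThreadPool { int getMaxThreads(); }

    static class QueuedThreadPool implements SizedThreadPool {
        private final int maxThreads;
        QueuedThreadPool(int maxThreads) { this.maxThreads = maxThreads; }
        public int getMaxThreads() { return maxThreads; }
    }

    // Mirrors the proposed assertion: verify the pool is sized, read its max.
    static int maxThreadsOf(ThreadPool pool) {
        if (!(pool instanceof SizedThreadPool)) {
            throw new AssertionError("expected a SizedThreadPool");
        }
        return ((SizedThreadPool) pool).getMaxThreads();
    }

    public static void main(String[] args) {
        ThreadPool pool = new QueuedThreadPool(200);
        System.out.println(maxThreadsOf(pool)); // prints 200
    }
}
```

The appeal of this approach is that it inspects observable server state rather than interactions, so no Mockito spy is needed.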
[jira] [Created] (NIFI-8269) Improve ForkRecord to inherit schema from the array being forked
Pierre Villard created NIFI-8269: Summary: Improve ForkRecord to inherit schema from the array being forked Key: NIFI-8269 URL: https://issues.apache.org/jira/browse/NIFI-8269 Project: Apache NiFi Issue Type: Improvement Reporter: Pierre Villard Assignee: Pierre Villard At the moment the ForkRecord processor requires the writer to be configured with a schema corresponding to what will be output. We could make this smarter by inheriting the schema from the reader and adjusting it based on the operation being performed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] pgyori commented on pull request #4847: NIFI-8263: Maximum Thread Pool Size property introduced in ListenHTTP
pgyori commented on pull request #4847: URL: https://github.com/apache/nifi/pull/4847#issuecomment-786865501 Thank you @exceptionfactory ! You are right, there is no simple way to check the thread pool size within the Server instance. After your comment, I thought about it and found that we do not need to check the server's thread pool size as long as we can test what value the server's constructor is called with. So I organized the server instantiation into a separate method and created unit tests for it. My modifications are in the recent commit. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] bbende commented on a change in pull request #4852: NIFI-8212: Refactored StandardExtensionDiscoveringManager to avoid us…
bbende commented on a change in pull request #4852: URL: https://github.com/apache/nifi/pull/4852#discussion_r583882100 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/ExtensionBuilder.java ## @@ -463,10 +524,12 @@ private ControllerServiceNode createGhostControllerServiceNode() { Thread.currentThread().setContextClassLoader(detectedClassLoader); final Object extensionInstance = rawClass.newInstance(); +final T cast = nodeType.cast(extensionInstance); +verifyControllerServiceReferences(cast, bundle.getClassLoader()); Review comment: When testing creating some processors I decided to try adding PutHDFS since it has `@RequiresInstanceClassLoading`, and it produced a ghost component with this exception in the logs...

```
Caused by: java.lang.NullPointerException: null
    at java.util.ArrayList.<init>(ArrayList.java:178)
    at org.apache.nifi.processors.hadoop.PutHDFS.getSupportedPropertyDescriptors(PutHDFS.java:202)
    at org.apache.nifi.components.AbstractConfigurableComponent.getPropertyDescriptors(AbstractConfigurableComponent.java:247)
    at org.apache.nifi.controller.ExtensionBuilder.verifyControllerServiceReferences(ExtensionBuilder.java:433)
    at org.apache.nifi.controller.ExtensionBuilder.createLoggableComponent(ExtensionBuilder.java:528)
    at org.apache.nifi.controller.ExtensionBuilder.createLoggableProcessor(ExtensionBuilder.java:485)
    ... 170 common frames omitted
```

I think the issue is that the verify step calls `getSupportedPropertyDescriptors`, but we haven't called `initialize` on the component yet, and we don't know whether `initialize` is what sets up the component's property descriptors. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
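The ordering hazard bbende describes can be reproduced in miniature: a component that builds its property list during `initialize()` fails with an `ArrayList` NPE if descriptors are requested first. The classes below are illustrative, not the NiFi framework types:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrates the initialization-order hazard: requesting descriptors before
// initialize() passes a null collection to the ArrayList copy constructor,
// which throws NullPointerException (as in the PutHDFS stack trace).
public class InitOrderDemo {
    static class Component {
        private List<String> properties; // remains null until initialize()

        void initialize() {
            properties = new ArrayList<>();
            properties.add("Directory");
        }

        List<String> getSupportedPropertyDescriptors() {
            // NPE here when called before initialize().
            return new ArrayList<>(properties);
        }
    }

    public static void main(String[] args) {
        Component c = new Component();
        try {
            c.getSupportedPropertyDescriptors(); // too early
        } catch (NullPointerException e) {
            System.out.println("NPE before initialize()");
        }
        c.initialize();
        System.out.println(c.getSupportedPropertyDescriptors());
    }
}
```

This is why verification that touches property descriptors has to run after component initialization, or be deferred until first use.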
[GitHub] [nifi] pvillard31 commented on pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
pvillard31 commented on pull request #4846: URL: https://github.com/apache/nifi/pull/4846#issuecomment-786855079 Not sure if this has been addressed, but I noticed that: if selecting a flow on the canvas and clicking on the "Group" icon (to make a process group from the selection), the icon to upload a JSON file appears when it should just ask for the new process group to create from the selection. https://user-images.githubusercontent.com/11541012/109346830-ad681980-788b-11eb-9793-891cb90fe332.png https://user-images.githubusercontent.com/11541012/109346855-b48f2780-788b-11eb-9b06-e8162451f9b7.png I don't think this is expected. And if I do select a JSON file at this step (instead of just giving a name for my selection to be added in a process group), the process group for my JSON file will be added instead of putting my selection into a process group. So... I guess it's just about removing the option to upload the file when we click on the "Group" button. It should only be available when drag-and-dropping a process group on the canvas. Thoughts? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] exceptionfactory commented on a change in pull request #4843: NIFI-8258: Add support for Service Principal authentication in ADLS p…
exceptionfactory commented on a change in pull request #4843: URL: https://github.com/apache/nifi/pull/4843#discussion_r583865099 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/ADLSCredentialsControllerService.java ## @@ -97,16 +142,38 @@ boolean sasTokenSet = StringUtils.isNotBlank(validationContext.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).getValue()); boolean useManagedIdentitySet = validationContext.getProperty(USE_MANAGED_IDENTITY).asBoolean(); -if (!onlyOneSet(accountKeySet, sasTokenSet, useManagedIdentitySet)) { -StringJoiner options = new StringJoiner(", ") -.add(AzureStorageUtils.ACCOUNT_KEY.getDisplayName()) -.add(AzureStorageUtils.PROP_SAS_TOKEN.getDisplayName()) -.add(USE_MANAGED_IDENTITY.getDisplayName()); +boolean servicePrincipalTenantIdSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_TENANT_ID).getValue()); +boolean servicePrincipalClientIdSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_ID).getValue()); +boolean servicePrincipalClientSecretSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_SECRET).getValue()); +boolean servicePrincipalClientCertificateSet = validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_CERTIFICATE).isSet(); + +boolean servicePrincipalSet = servicePrincipalTenantIdSet || servicePrincipalClientIdSet || servicePrincipalClientSecretSet || servicePrincipalClientCertificateSet; Review comment: As @turcsanyip described, the default value of a new `Authentication Type` property would be `AUTO`, indicating that currently visible properties would be displayed and the existing logic to infer the desired authentication type would be followed. Selecting a more specific `Authentication Type` value would hide properties that are not applicable using the depend on feature. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
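The `AUTO` behavior the reviewers describe, inferring the mechanism from whichever credential group is configured and requiring exactly one, might look like this in plain Java. This is an illustrative sketch of the inference logic only, not the actual controller service code:

```java
// Sketch of the AUTO authentication-type inference discussed above:
// exactly one credential group must be configured, and the configured
// group determines the mechanism. Names here are illustrative.
public class AuthTypeInference {
    enum AuthType { ACCOUNT_KEY, SAS_TOKEN, MANAGED_IDENTITY, SERVICE_PRINCIPAL }

    static AuthType infer(boolean accountKeySet, boolean sasTokenSet,
                          boolean managedIdentitySet, boolean servicePrincipalSet) {
        int configured = (accountKeySet ? 1 : 0) + (sasTokenSet ? 1 : 0)
                + (managedIdentitySet ? 1 : 0) + (servicePrincipalSet ? 1 : 0);
        if (configured != 1) {
            // Mirrors the validation error listing the mutually exclusive options.
            throw new IllegalStateException("exactly one credential type must be configured");
        }
        if (accountKeySet) return AuthType.ACCOUNT_KEY;
        if (sasTokenSet) return AuthType.SAS_TOKEN;
        if (managedIdentitySet) return AuthType.MANAGED_IDENTITY;
        return AuthType.SERVICE_PRINCIPAL;
    }

    public static void main(String[] args) {
        System.out.println(infer(false, false, false, true)); // SERVICE_PRINCIPAL
    }
}
```

An explicit (non-AUTO) `Authentication Type` selection would bypass this inference and simply hide the non-applicable properties via property dependencies.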
[jira] [Updated] (NIFI-8212) Improve startup times for Stateless
[ https://issues.apache.org/jira/browse/NIFI-8212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-8212: - Status: Patch Available (was: Open) > Improve startup times for Stateless > --- > > Key: NIFI-8212 > URL: https://issues.apache.org/jira/browse/NIFI-8212 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework, NiFi Stateless >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > There are a handful of things that happen in NiFi (and stateless) startup > that take quite a while that can be improved: > * We unpack all NAR's if they are not already unpacked. If they are > unpacked, we perform a hash of the nar to ensure that it matches with what > was unpacked. With stateless, though, we download nars on demand and name > them a temporary file until the download completes, then rename them. As a > result, we don't expect any hash conflicts unless it's a SNAPSHOT nar. We > should avoid calculating hashes against all of the nars on startup. > * We use the ServiceLoader to load all extensions. Given the number of > extensions that we have and the prevalence of things like ObjectMapper as > member variables, this can be rather expensive. We should instead just read > the ServiceLoader file and create the objects on-demand. The temp components > are needed for generating documentation, for example. But otherwise they are > not needed. And stateless doesn't bother generating docs (since there's no UI > to view them). This avoids creating lots of expensive objects and can also > prevent loading huge numbers of classes into memory. > * StringEncryptor is almost never used in stateless. We should lazy load it > if needed. > * Controller Service enabling is very slow in stateless. Currently, it > asynchronously starts all services. Then it keeps polling to see if services > are enabled and if not sleeps for a short period. 
We should instead move to a > method of using Futures or wait/notify, etc. so that service enablement can > complete quickly. -- This message was sent by Atlassian Jira (v8.3.4#803005)
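The last bullet (replacing the poll-and-sleep loop with futures) can be sketched with `CompletableFuture`: each service completes a future when it finishes enabling, and the caller blocks once on all of them. This is an illustrative sketch, not the NiFi implementation:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the poll-free enablement approach: the caller waits on the
// joined futures instead of repeatedly sleeping and re-checking state.
public class FutureEnablement {
    // Hypothetical async enable: completes when the service is "ENABLED".
    static CompletableFuture<String> enableAsync(String serviceName, ExecutorService pool) {
        return CompletableFuture.supplyAsync(() -> serviceName + ":ENABLED", pool);
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<CompletableFuture<String>> futures = List.of(
                enableAsync("DBCPService", pool),
                enableAsync("SSLContextService", pool));
        // Blocks until every service has enabled; no sleep/poll loop.
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        futures.forEach(f -> System.out.println(f.join()));
        pool.shutdown();
    }
}
```

The design win over polling is latency: the caller resumes the moment the last service enables, instead of after the next poll interval.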
[jira] [Updated] (NIFI-8212) Improve startup times for Stateless
[ https://issues.apache.org/jira/browse/NIFI-8212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-8212: - Fix Version/s: 1.14.0 > Improve startup times for Stateless > --- > > Key: NIFI-8212 > URL: https://issues.apache.org/jira/browse/NIFI-8212 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework, NiFi Stateless >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.14.0 > > Time Spent: 10m > Remaining Estimate: 0h > > There are a handful of things that happen in NiFi (and stateless) startup > that take quite a while that can be improved: > * We unpack all NAR's if they are not already unpacked. If they are > unpacked, we perform a hash of the nar to ensure that it matches with what > was unpacked. With stateless, though, we download nars on demand and name > them a temporary file until the download completes, then rename them. As a > result, we don't expect any hash conflicts unless it's a SNAPSHOT nar. We > should avoid calculating hashes against all of the nars on startup. > * We use the ServiceLoader to load all extensions. Given the number of > extensions that we have and the prevalence of things like ObjectMapper as > member variables, this can be rather expensive. We should instead just read > the ServiceLoader file and create the objects on-demand. The temp components > are needed for generating documentation, for example. But otherwise they are > not needed. And stateless doesn't bother generating docs (since there's no UI > to view them). This avoids creating lots of expensive objects and can also > prevent loading huge numbers of classes into memory. > * StringEncryptor is almost never used in stateless. We should lazy load it > if needed. > * Controller Service enabling is very slow in stateless. Currently, it > asynchronously starts all services. Then it keeps polling to see if services > are enabled and if not sleeps for a short period. 
We should instead move to a > method of using Futures or wait/notify, etc. so that service enablement can > complete quickly. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] markap14 opened a new pull request #4852: NIFI-8212: Refactored StandardExtensionDiscoveringManager to avoid us…
markap14 opened a new pull request #4852: URL: https://github.com/apache/nifi/pull/4852 …ing ServiceLoader. Instead, it will look at the ServiceLoader file and read the names of the classes but avoid instantiating all of the objects or loading the classes into memory. - Updated Doc Generation so that if the documentation for a given NAR already exists, it doesn't delete it and re-generate it. This was necessary because we are no longer instantiating an instance of each component and instead lazily creating the components as necessary. - Removed stateless version of extension registry because it's no longer necessary This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-8230) Remove default Sensitive Properties Key
[ https://issues.apache.org/jira/browse/NIFI-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17291842#comment-17291842 ] Moncef ABBOUD commented on NIFI-8230: - Hello [~exceptionfactory], I have just looked over the PR. Kudos for the refactoring. It has brought more clarity and elegance to the code. I was actually familiarizing myself with the NiFiProperties loading process and the initialization of an empty flow. But since you have put so much work into the 4809 PR and you are already the assignee for this issue, far be it from me to steal your thunder. Best of luck working on this. > Remove default Sensitive Properties Key > --- > > Key: NIFI-8230 > URL: https://issues.apache.org/jira/browse/NIFI-8230 > Project: Apache NiFi > Issue Type: Sub-task > Components: Security >Affects Versions: 1.13.0 >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > > Support for encryption of sensitive properties relies on configuration of the > Sensitive Properties Key specified using {{nifi.sensitive.props.key}} in > {{nifi.properties}}. The default behavior of {{StringEncryptor}} allows for > the key to be blank and falls back to a default value, logging a verbose > error message indicating that an explicit key should be provided. > The fallback to a default value for the Sensitive Properties Key should be > removed and an exception should be thrown indicating that the property value > is required. Deployments that already have an explicit value will not be > impacted. Migration guidance for upgrading should include steps to encrypt > the configuration using a new key. > It may be worthwhile generating a random Sensitive Properties Key for new > installations where there is no existing flow. This would allow new standalone > installations to run with a secure key without the need for manual steps. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] turcsanyip commented on a change in pull request #4843: NIFI-8258: Add support for Service Principal authentication in ADLS p…
turcsanyip commented on a change in pull request #4843: URL: https://github.com/apache/nifi/pull/4843#discussion_r583813130 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/ADLSCredentialsControllerService.java ## @@ -97,16 +142,38 @@ boolean sasTokenSet = StringUtils.isNotBlank(validationContext.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).getValue()); boolean useManagedIdentitySet = validationContext.getProperty(USE_MANAGED_IDENTITY).asBoolean(); -if (!onlyOneSet(accountKeySet, sasTokenSet, useManagedIdentitySet)) { -StringJoiner options = new StringJoiner(", ") -.add(AzureStorageUtils.ACCOUNT_KEY.getDisplayName()) -.add(AzureStorageUtils.PROP_SAS_TOKEN.getDisplayName()) -.add(USE_MANAGED_IDENTITY.getDisplayName()); +boolean servicePrincipalTenantIdSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_TENANT_ID).getValue()); +boolean servicePrincipalClientIdSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_ID).getValue()); +boolean servicePrincipalClientSecretSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_SECRET).getValue()); +boolean servicePrincipalClientCertificateSet = validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_CERTIFICATE).isSet(); + +boolean servicePrincipalSet = servicePrincipalTenantIdSet || servicePrincipalClientIdSet || servicePrincipalClientSecretSet || servicePrincipalClientCertificateSet; Review comment: "_the UX will be a bit weird at first because all the fields will initially be hidden_": I don't think so, because the default AUTO would mean `all properties displayed`, if I understand @exceptionfactory's idea correctly. Regarding repurposing "Use Managed Identity": we could rename the property on the UI and map the existing `true` to `Managed Identity`, but `false` covers both `Account Key` and `SAS Token` and we don't know which one, so it is not clear to me how to handle it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] turcsanyip commented on a change in pull request #4843: NIFI-8258: Add support for Service Principal authentication in ADLS p…
turcsanyip commented on a change in pull request #4843: URL: https://github.com/apache/nifi/pull/4843#discussion_r583803512 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/ADLSCredentialsControllerService.java ## @@ -74,12 +75,56 @@ .addValidator(StandardValidators.BOOLEAN_VALIDATOR) .build(); +public static final PropertyDescriptor SERVICE_PRINCIPAL_TENANT_ID = new PropertyDescriptor.Builder() +.name("service-principal-tenant-id") +.displayName("Service Principal Tenant ID") +.description("Tenant ID of the Azure Active Directory hosting the Service Principal. The property is required when Service Principal authentication is used.") +.sensitive(true) +.required(false) +.addValidator(StandardValidators.NON_BLANK_VALIDATOR) + .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES) Review comment: Yes, I believe this is the consensus and I'll remove EL support. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] exceptionfactory commented on a change in pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
exceptionfactory commented on a change in pull request #4846: URL: https://github.com/apache/nifi/pull/4846#discussion_r583786018 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ProcessGroupResource.java ## @@ -4173,6 +4187,205 @@ private void sanitizeRegistryInfo(final VersionedProcessGroup versionedProcessGr } } +/** + * Uploads the specified versioned flow definition and adds it to a new process group. + * + * @param httpServletRequest request + * @param in The flow definition stream + * @return A processGroupEntity + * @throws IOException if there is an error during deserialization of the InputStream + */ +@POST +@Consumes(MediaType.MULTIPART_FORM_DATA) +@Produces(MediaType.APPLICATION_JSON) +@Path("{id}/process-groups/upload") +@ApiOperation( +value = "Uploads a versioned flow definition and creates a process group", +response = ProcessGroupEntity.class, +authorizations = { +@Authorization(value = "Write - /process-groups/{uuid}") +} +) +@ApiResponses( +value = { +@ApiResponse(code = 400, message = "NiFi was unable to complete the request because it was invalid. The request should not be retried without modification."), +@ApiResponse(code = 401, message = "Client could not be authenticated."), +@ApiResponse(code = 403, message = "Client is not authorized to make this request."), +@ApiResponse(code = 404, message = "The specified resource could not be found."), +@ApiResponse(code = 409, message = "The request was valid but NiFi was not in the appropriate state to process it. 
Retrying the same request later may be successful.") +} +) +public Response uploadProcessGroup( +@Context final HttpServletRequest httpServletRequest, +@ApiParam( +value = "The process group id.", +required = true +) +@PathParam("id") final String groupId, +@ApiParam( +value = "The process group name.", +required = true +) +@FormDataParam("groupName") final String groupName, +@ApiParam( +value = "The process group X position.", +required = true +) +@FormDataParam("position-x") final Double positionX, +@ApiParam( +value = "The process group Y position.", +required = true +) +@FormDataParam("position-y") final Double positionY, +@ApiParam( +value = "The client id.", +required = true +) +@FormDataParam("clientId") final String clientId, +@ApiParam( +value = "Acknowledges that this node is disconnected to allow for mutable requests to proceed.", +required = false +) +@FormDataParam(DISCONNECTED_NODE_ACKNOWLEDGED) @DefaultValue("false") final Boolean disconnectedNodeAcknowledged, +@FormDataParam("file") final InputStream in) throws IOException { + +// ensure the group name is specified +if (StringUtils.isBlank(groupName)) { +throw new IllegalArgumentException("The process group name is required."); +} + +if (StringUtils.isBlank(groupId)) { +throw new IllegalArgumentException("The parent process group id is required"); +} + +if (positionX == null) { +throw new IllegalArgumentException("The x coordinate of the proposed position must be specified."); +} + +if (positionY == null) { +throw new IllegalArgumentException("The y coordinate of the proposed position must be specified."); +} + +if (StringUtils.isBlank(clientId)) { +throw new IllegalArgumentException("The client id must be specified"); +} + +// get the contents of the InputStream as a String +String stringContent; +if (in != null) { +try { +stringContent = IOUtils.toString(in, StandardCharsets.UTF_8); +} catch (IOException e) { +throw new IOException("Unable to read the InputStream", e); +} +} else { 
+logger.warn("The InputStream is null"); +throw new NullPointerException("The InputStream is null"); +} + +// deserialize content to a VersionedFlowSnapshot +VersionedFlowSnapshot deserializedSnapshot; + +if (stringContent.length() > 0) { +try { +deserializedSnapshot = MAPPER.readValue(stringContent,
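The endpoint code above validates the form parameters, fails fast when the multipart stream is null, and only then reads the body as UTF-8 text before deserializing it. A minimal stdlib-only sketch of that read-and-validate step (class and method names are mine, and `readAllBytes` stands in for the `IOUtils.toString` call used in the PR):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class FlowUploadSketch {
    // Mirrors the endpoint's ordering: reject a null stream first,
    // then read the full body as UTF-8 text prior to deserialization.
    static String readFlowContent(final InputStream in) throws IOException {
        if (in == null) {
            throw new NullPointerException("The InputStream is null");
        }
        return new String(in.readAllBytes(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        final InputStream in = new ByteArrayInputStream(
                "{\"flowContents\":{}}".getBytes(StandardCharsets.UTF_8));
        System.out.println(readFlowContent(in)); // the raw JSON flow definition
    }
}
```

Checking for null before reading (rather than after) is what the reviewers converged on: it turns a confusing downstream failure into an explicit error at the request boundary.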
[GitHub] [nifi] jfrazee commented on a change in pull request #4843: NIFI-8258: Add support for Service Principal authentication in ADLS p…
jfrazee commented on a change in pull request #4843: URL: https://github.com/apache/nifi/pull/4843#discussion_r583790293 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/ADLSCredentialsControllerService.java ## @@ -74,12 +75,56 @@ .addValidator(StandardValidators.BOOLEAN_VALIDATOR) .build(); +public static final PropertyDescriptor SERVICE_PRINCIPAL_TENANT_ID = new PropertyDescriptor.Builder() +.name("service-principal-tenant-id") +.displayName("Service Principal Tenant ID") +.description("Tenant ID of the Azure Active Directory hosting the Service Principal. The property is required when Service Principal authentication is used.") +.sensitive(true) +.required(false) +.addValidator(StandardValidators.NON_BLANK_VALIDATOR) + .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES) Review comment: Ok. So seems consensus is to not support EL? And we can entertain an alternate CS, lookup service, or change if there's specific demand?
[jira] [Updated] (NIFI-8268) FreeFormTextRecordSetWriter doesn't expose the Schema Access properties
[ https://issues.apache.org/jira/browse/NIFI-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-8268: --- Fix Version/s: 1.13.1 1.14.0 > FreeFormTextRecordSetWriter doesn't expose the Schema Access properties > --- > > Key: NIFI-8268 > URL: https://issues.apache.org/jira/browse/NIFI-8268 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > Fix For: 1.14.0, 1.13.1 > > > I found (via NIFI-8266) that the parent properties (Schema Access Strategy, > e.g.) for FreeFormTextRecordSetWriter aren't inherited and are thus not > exposed. Cases like NIFI-3912 addressed the issue in a different way, but the > properties should just be inherited, since the defaults are what are > currently being used under the hood. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-8266) InvokeScriptedProcessor can throw NPE during custom property validation
[ https://issues.apache.org/jira/browse/NIFI-8266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-8266: --- Fix Version/s: 1.13.1 1.14.0 > InvokeScriptedProcessor can throw NPE during custom property validation > --- > > Key: NIFI-8266 > URL: https://issues.apache.org/jira/browse/NIFI-8266 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > Fix For: 1.14.0, 1.13.1 > > Time Spent: 10m > Remaining Estimate: 0h > > If a script in InvokeScriptedProcessor (ISP) specifies additional properties > to be added to the configuration dialog, a NullPointerException can occur > during validation: > This appears to be the result of the ValidationContext for the ISP being > created before the script is loaded, and thus the properties are not present > in the context during validation. However the list iterated over in > AbstractConfigurableComponent.validate() is the list of supported property > descriptors, which may or may not include the script's properties when > validate() is called. > The list to be iterated over should be the list of properties in the > ValidationContext, to avoid an NPE. Having said that, for components that can > provide properties dynamically (via script, e.g.) the ValidationContext > should be recreated rather than reusing the first one created. This could be > accomplished with a component annotation (ProvidesCustomProperties or > something). If the annotation exists for the component, the ValidationContext > should be recreated before validation; otherwise, the first one created > should be retained for performance purposes.
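The fix NIFI-8266 describes is to iterate the properties actually present in the ValidationContext rather than a supported-descriptor list captured before the script loaded. A stdlib-only sketch of that principle (plain maps and lists stand in for NiFi's context and descriptor types, and the validation rule here is a made-up example):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ValidationSketch {
    /**
     * Validates by iterating the properties present in the context itself,
     * so properties registered later (e.g. by a script) are covered and a
     * stale descriptor list is never dereferenced, avoiding the NPE.
     */
    static List<String> validate(final Map<String, String> contextProperties) {
        final List<String> problems = new ArrayList<>();
        for (final Map.Entry<String, String> entry : contextProperties.entrySet()) {
            if (entry.getValue() == null || entry.getValue().isBlank()) {
                problems.add(entry.getKey() + " must have a value");
            }
        }
        return problems;
    }

    public static void main(String[] args) {
        // "Script Body" existed before the script loaded; "Custom Threshold"
        // was added by the script afterwards. Iterating the context sees both.
        System.out.println(validate(
                Map.of("Script Body", "print('hi')", "Custom Threshold", "")));
    }
}
```

The Jira's longer-term suggestion (recreating the ValidationContext for components annotated as providing dynamic properties) is orthogonal to this loop-direction fix.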
[GitHub] [nifi] exceptionfactory commented on pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
exceptionfactory commented on pull request #4846: URL: https://github.com/apache/nifi/pull/4846#issuecomment-786770245 > @exceptionfactory Thanks for reviewing! I've addressed your feedback and committed the changes. Thanks for making the changes @mtien-apache. I will take another look following the updates. I noticed that `StandardProcessGroup`, `NiFiServiceFacade`, and `StandardNiFiServiceFacade` still show up with reordered imports, can you remove those changes?
[GitHub] [nifi] jfrazee commented on a change in pull request #4843: NIFI-8258: Add support for Service Principal authentication in ADLS p…
jfrazee commented on a change in pull request #4843: URL: https://github.com/apache/nifi/pull/4843#discussion_r583783182 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/ADLSCredentialsControllerService.java ## @@ -97,16 +142,38 @@ boolean sasTokenSet = StringUtils.isNotBlank(validationContext.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).getValue()); boolean useManagedIdentitySet = validationContext.getProperty(USE_MANAGED_IDENTITY).asBoolean(); -if (!onlyOneSet(accountKeySet, sasTokenSet, useManagedIdentitySet)) { -StringJoiner options = new StringJoiner(", ") -.add(AzureStorageUtils.ACCOUNT_KEY.getDisplayName()) -.add(AzureStorageUtils.PROP_SAS_TOKEN.getDisplayName()) -.add(USE_MANAGED_IDENTITY.getDisplayName()); +boolean servicePrincipalTenantIdSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_TENANT_ID).getValue()); +boolean servicePrincipalClientIdSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_ID).getValue()); +boolean servicePrincipalClientSecretSet = StringUtils.isNotBlank(validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_SECRET).getValue()); +boolean servicePrincipalClientCertificateSet = validationContext.getProperty(SERVICE_PRINCIPAL_CLIENT_CERTIFICATE).isSet(); + +boolean servicePrincipalSet = servicePrincipalTenantIdSet || servicePrincipalClientIdSet || servicePrincipalClientSecretSet || servicePrincipalClientCertificateSet; Review comment: I like this, but for an existing flow the UX will be a bit weird at first because all the fields will initially be hidden I think. I was trying to think through whether there's a way to repurpose "Use Managed Identity" for this without being excessively dirty? I think there is. 
It's possible to add allowable values without breaking anything; the display name can of course change; if you change to `AllowableValue`, the existing values will look natural in the UI; and everything except the `name()` value in the code might look pretty much like this is how it was originally done. Can we get past having this property still being identified as "storage-use-managed-identity"?
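The idea above is that a property's stable internal `name()` can be preserved while its display name and value set evolve. A toy model of that backward-compatible evolution — the `PropertyDescriptor` and `AllowableValue` records here are simplified stand-ins, not NiFi's actual API, and the "Service Principal" value is the hypothetical addition being discussed:

```java
import java.util.List;

public class CredentialPropertySketch {
    // Simplified stand-ins for NiFi's AllowableValue / PropertyDescriptor.
    record AllowableValue(String value, String displayName) {}
    record PropertyDescriptor(String name, String displayName, List<AllowableValue> allowableValues) {}

    public static void main(String[] args) {
        // The internal name stays "storage-use-managed-identity" so existing
        // flows keep resolving; only the display name and value set change.
        final PropertyDescriptor credentialType = new PropertyDescriptor(
                "storage-use-managed-identity",
                "Credential Type",
                List.of(
                        new AllowableValue("true", "Managed Identity"),          // legacy boolean value
                        new AllowableValue("false", "Account Key / SAS Token"),  // legacy boolean value
                        new AllowableValue("service-principal", "Service Principal")));
        System.out.println(credentialType.name());
    }
}
```

Because the legacy "true"/"false" strings remain valid allowable values, flows saved against the old boolean property would still deserialize cleanly under this scheme.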
[GitHub] [nifi] mtien-apache commented on pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on pull request #4846: URL: https://github.com/apache/nifi/pull/4846#issuecomment-786770072 My last commit includes the disabled buttons, which completes this PR and is ready for merge once it's approved. Thanks. @pvillard31
[jira] [Updated] (NIFI-8268) FreeFormTextRecordSetWriter doesn't expose the Schema Access properties
[ https://issues.apache.org/jira/browse/NIFI-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-8268: --- Status: Patch Available (was: In Progress) > FreeFormTextRecordSetWriter doesn't expose the Schema Access properties > --- > > Key: NIFI-8268 > URL: https://issues.apache.org/jira/browse/NIFI-8268 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > > I found (via NIFI-8266) that the parent properties (Schema Access Strategy, > e.g.) for FreeFormTextRecordSetWriter aren't inherited and are thus not > exposed. Cases like NIFI-3912 addressed the issue in a different way, but the > properties should just be inherited, since the defaults are what are > currently being used under the hood.
[jira] [Commented] (NIFI-8268) FreeFormTextRecordSetWriter doesn't expose the Schema Access properties
[ https://issues.apache.org/jira/browse/NIFI-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17291769#comment-17291769 ] Matt Burgess commented on NIFI-8268: PR available: https://github.com/apache/nifi/pull/4851 > FreeFormTextRecordSetWriter doesn't expose the Schema Access properties > --- > > Key: NIFI-8268 > URL: https://issues.apache.org/jira/browse/NIFI-8268 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > > I found (via NIFI-8266) that the parent properties (Schema Access Strategy, > e.g.) for FreeFormTextRecordSetWriter aren't inherited and are thus not > exposed. Cases like NIFI-3912 addressed the issue in a different way, but the > properties should just be inherited, since the defaults are what are > currently being used under the hood.
[jira] [Created] (NIFI-8268) FreeFormTextRecordSetWriter doesn't expose the Schema Access properties
Matt Burgess created NIFI-8268: -- Summary: FreeFormTextRecordSetWriter doesn't expose the Schema Access properties Key: NIFI-8268 URL: https://issues.apache.org/jira/browse/NIFI-8268 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Matt Burgess Assignee: Matt Burgess I found (via NIFI-8266) that the parent properties (Schema Access Strategy, e.g.) for FreeFormTextRecordSetWriter aren't inherited and are thus not exposed. Cases like NIFI-3912 addressed the issue in a different way, but the properties should just be inherited, since the defaults are what are currently being used under the hood.
[GitHub] [nifi] mtien-apache commented on pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on pull request #4846: URL: https://github.com/apache/nifi/pull/4846#issuecomment-786767902 @exceptionfactory Thanks for reviewing! I've addressed your feedback and committed the changes.
[GitHub] [nifi] mtien-apache commented on a change in pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on a change in pull request #4846: URL: https://github.com/apache/nifi/pull/4846#discussion_r583779054 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ProcessGroupResource.java ## @@ -4173,6 +4179,227 @@ private void sanitizeRegistryInfo(final VersionedProcessGroup versionedProcessGr } } +/** + * Uploads the specified versioned flow file and adds it to a new process group. + * + * @param httpServletRequest request + * @param in The flow file stream + * @return A processGroupEntity + * @throws IOException if there is an error during deserialization of the InputStream + */ +@POST +@Consumes(MediaType.MULTIPART_FORM_DATA) +@Produces(MediaType.APPLICATION_JSON) +@Path("{id}/process-groups/upload") +@ApiOperation( +value = "Uploads a versioned flow file and creates a process group", +response = ProcessGroupEntity.class, +authorizations = { +@Authorization(value = "Write - /process-groups/{uuid}") +} +) +@ApiResponses( +value = { +@ApiResponse(code = 400, message = "NiFi was unable to complete the request because it was invalid. The request should not be retried without modification."), +@ApiResponse(code = 401, message = "Client could not be authenticated."), +@ApiResponse(code = 403, message = "Client is not authorized to make this request."), +@ApiResponse(code = 404, message = "The specified resource could not be found."), +@ApiResponse(code = 409, message = "The request was valid but NiFi was not in the appropriate state to process it. 
Retrying the same request later may be successful.") +} +) +public Response uploadProcessGroup( +@Context final HttpServletRequest httpServletRequest, +@ApiParam( +value = "The process group id.", +required = true +) +@PathParam("id") final String groupId, +@ApiParam( +value = "The process group name.", +required = true +) +@FormDataParam("groupName") final String groupName, +@ApiParam( +value = "The process group X position.", +required = true +) +@FormDataParam("position-x") final Double positionX, +@ApiParam( +value = "The process group Y position.", +required = true +) +@FormDataParam("position-y") final Double positionY, +@ApiParam( +value = "The client id.", +required = true +) +@FormDataParam("clientId") final String clientId, +@ApiParam( +value = "Acknowledges that this node is disconnected to allow for mutable requests to proceed.", +required = false +) +@FormDataParam(DISCONNECTED_NODE_ACKNOWLEDGED) @DefaultValue("false") final Boolean disconnectedNodeAcknowledged, +@FormDataParam("file") final InputStream in) throws IOException { + +// ensure the group name is specified +if (StringUtils.isBlank(groupName)) { +throw new IllegalArgumentException("The process group name is required."); +} + +if (StringUtils.isBlank(groupId)) { +throw new IllegalArgumentException("The parent process group id must be the same as specified in the URI."); +} + +if (positionX == null) { +throw new IllegalArgumentException("The x coordinate of the proposed position must be specified."); +} + +if (positionY == null) { +throw new IllegalArgumentException("The y coordinate of the proposed position must be specified."); +} + +if (StringUtils.isBlank(clientId)) { +throw new IllegalArgumentException("The client id must be specified"); +} + +// create a new process group +final ProcessGroupEntity newProcessGroupEntity = new ProcessGroupEntity(); + +// get the contents of the InputStream as a String +String stringContent = null; +if (in != null) { Review comment: Fixed - a 
`NullPointerException` can now be thrown. ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java ## @@ -15,9 +15,35 @@ * limitations under the License. */ package org.apache.nifi.web; - import com.google.common.collect.Sets; import io.prometheus.client.CollectorRegistry; +import java.io.IOException; Review comment:
[GitHub] [nifi] mtien-apache commented on a change in pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on a change in pull request #4846: URL: https://github.com/apache/nifi/pull/4846#discussion_r583776586 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ProcessGroupResource.java ## @@ -4173,6 +4179,227 @@ private void sanitizeRegistryInfo(final VersionedProcessGroup versionedProcessGr } } +/** + * Uploads the specified versioned flow file and adds it to a new process group. + * + * @param httpServletRequest request + * @param in The flow file stream + * @return A processGroupEntity + * @throws IOException if there is an error during deserialization of the InputStream + */ +@POST +@Consumes(MediaType.MULTIPART_FORM_DATA) +@Produces(MediaType.APPLICATION_JSON) +@Path("{id}/process-groups/upload") +@ApiOperation( +value = "Uploads a versioned flow file and creates a process group", +response = ProcessGroupEntity.class, +authorizations = { +@Authorization(value = "Write - /process-groups/{uuid}") +} +) +@ApiResponses( +value = { +@ApiResponse(code = 400, message = "NiFi was unable to complete the request because it was invalid. The request should not be retried without modification."), +@ApiResponse(code = 401, message = "Client could not be authenticated."), +@ApiResponse(code = 403, message = "Client is not authorized to make this request."), +@ApiResponse(code = 404, message = "The specified resource could not be found."), +@ApiResponse(code = 409, message = "The request was valid but NiFi was not in the appropriate state to process it. 
Retrying the same request later may be successful.") +} +) +public Response uploadProcessGroup( +@Context final HttpServletRequest httpServletRequest, +@ApiParam( +value = "The process group id.", +required = true +) +@PathParam("id") final String groupId, +@ApiParam( +value = "The process group name.", +required = true +) +@FormDataParam("groupName") final String groupName, +@ApiParam( +value = "The process group X position.", +required = true +) +@FormDataParam("position-x") final Double positionX, +@ApiParam( +value = "The process group Y position.", +required = true +) +@FormDataParam("position-y") final Double positionY, +@ApiParam( +value = "The client id.", +required = true +) +@FormDataParam("clientId") final String clientId, +@ApiParam( +value = "Acknowledges that this node is disconnected to allow for mutable requests to proceed.", +required = false +) +@FormDataParam(DISCONNECTED_NODE_ACKNOWLEDGED) @DefaultValue("false") final Boolean disconnectedNodeAcknowledged, +@FormDataParam("file") final InputStream in) throws IOException { + +// ensure the group name is specified +if (StringUtils.isBlank(groupName)) { +throw new IllegalArgumentException("The process group name is required."); +} + +if (StringUtils.isBlank(groupId)) { +throw new IllegalArgumentException("The parent process group id must be the same as specified in the URI."); +} + +if (positionX == null) { +throw new IllegalArgumentException("The x coordinate of the proposed position must be specified."); +} + +if (positionY == null) { +throw new IllegalArgumentException("The y coordinate of the proposed position must be specified."); +} + +if (StringUtils.isBlank(clientId)) { +throw new IllegalArgumentException("The client id must be specified"); +} + +// create a new process group +final ProcessGroupEntity newProcessGroupEntity = new ProcessGroupEntity(); + +// get the contents of the InputStream as a String +String stringContent = null; +if (in != null) { +stringContent = IOUtils.toString(in, 
StandardCharsets.UTF_8); +} + +// deserialize content to a VersionedFlowSnapshot +VersionedFlowSnapshot deserializedSnapshot = null; +final ObjectMapper mapper = new ObjectMapper(); + +if (stringContent.length() > 0) { +try { +mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL); +
[GitHub] [nifi] mtien-apache commented on a change in pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on a change in pull request #4846: URL: https://github.com/apache/nifi/pull/4846#discussion_r583775267 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ProcessGroupResource.java ## @@ -4173,6 +4179,227 @@ private void sanitizeRegistryInfo(final VersionedProcessGroup versionedProcessGr } } +/** + * Uploads the specified versioned flow file and adds it to a new process group. + * + * @param httpServletRequest request + * @param in The flow file stream + * @return A processGroupEntity + * @throws IOException if there is an error during deserialization of the InputStream + */ +@POST +@Consumes(MediaType.MULTIPART_FORM_DATA) +@Produces(MediaType.APPLICATION_JSON) +@Path("{id}/process-groups/upload") +@ApiOperation( +value = "Uploads a versioned flow file and creates a process group", +response = ProcessGroupEntity.class, +authorizations = { +@Authorization(value = "Write - /process-groups/{uuid}") +} +) +@ApiResponses( +value = { +@ApiResponse(code = 400, message = "NiFi was unable to complete the request because it was invalid. The request should not be retried without modification."), +@ApiResponse(code = 401, message = "Client could not be authenticated."), +@ApiResponse(code = 403, message = "Client is not authorized to make this request."), +@ApiResponse(code = 404, message = "The specified resource could not be found."), +@ApiResponse(code = 409, message = "The request was valid but NiFi was not in the appropriate state to process it. 
Retrying the same request later may be successful.") +} +) +public Response uploadProcessGroup( +@Context final HttpServletRequest httpServletRequest, +@ApiParam( +value = "The process group id.", +required = true +) +@PathParam("id") final String groupId, +@ApiParam( +value = "The process group name.", +required = true +) +@FormDataParam("groupName") final String groupName, +@ApiParam( +value = "The process group X position.", +required = true +) +@FormDataParam("position-x") final Double positionX, +@ApiParam( +value = "The process group Y position.", +required = true +) +@FormDataParam("position-y") final Double positionY, +@ApiParam( +value = "The client id.", +required = true +) +@FormDataParam("clientId") final String clientId, +@ApiParam( +value = "Acknowledges that this node is disconnected to allow for mutable requests to proceed.", +required = false +) +@FormDataParam(DISCONNECTED_NODE_ACKNOWLEDGED) @DefaultValue("false") final Boolean disconnectedNodeAcknowledged, +@FormDataParam("file") final InputStream in) throws IOException { + +// ensure the group name is specified +if (StringUtils.isBlank(groupName)) { +throw new IllegalArgumentException("The process group name is required."); +} + +if (StringUtils.isBlank(groupId)) { +throw new IllegalArgumentException("The parent process group id must be the same as specified in the URI."); +} + +if (positionX == null) { +throw new IllegalArgumentException("The x coordinate of the proposed position must be specified."); +} + +if (positionY == null) { +throw new IllegalArgumentException("The y coordinate of the proposed position must be specified."); +} + +if (StringUtils.isBlank(clientId)) { +throw new IllegalArgumentException("The client id must be specified"); +} + +// create a new process group +final ProcessGroupEntity newProcessGroupEntity = new ProcessGroupEntity(); + +// get the contents of the InputStream as a String +String stringContent = null; +if (in != null) { +stringContent = IOUtils.toString(in, 
StandardCharsets.UTF_8); +} + +// deserialize content to a VersionedFlowSnapshot +VersionedFlowSnapshot deserializedSnapshot = null; +final ObjectMapper mapper = new ObjectMapper(); + +if (stringContent.length() > 0) { +try { +mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL); +
[GitHub] [nifi] mtien-apache commented on a change in pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on a change in pull request #4846: URL: https://github.com/apache/nifi/pull/4846#discussion_r583775115 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/css/main.css ## @@ -286,45 +290,125 @@ span.details-title { height: 28px; } -#template-file-field { +#select-file-button button { +position: absolute; +float: right; +right: 4px; +top: 22px; +font-size: 18px; +color: #004849; +border: none; +background-color: transparent; +} + +/*#select-file-button button:hover {*/ Review comment: Removed.
[GitHub] [nifi] mtien-apache commented on a change in pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on a change in pull request #4846: URL: https://github.com/apache/nifi/pull/4846#discussion_r583774943 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/header/components/nf-ng-group-component.js ## @@ -304,7 +461,7 @@ }); }).promise(); } -} +}; Review comment: It's not necessary to have the semicolon, but I thought it would be more complete.
[GitHub] [nifi] mtien-apache commented on a change in pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on a change in pull request #4846: URL: https://github.com/apache/nifi/pull/4846#discussion_r583774545 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/header/components/nf-ng-group-component.js ## @@ -129,17 +133,78 @@ * Initialize the modal. */ init: function () { +var self = this; + +self.fileForm = $('#file-upload-form').ajaxForm({ +url: '../nifi-api/process-groups/', +dataType: 'json', +beforeSubmit: function ($form, options) { +// ensure uploading to the current process group +options.url += (encodeURIComponent(nfCanvasUtils.getGroupId()) + '/process-groups/upload'); +} +}); + // configure the new process group dialog this.getElement().modal({ scrollableContentStyle: 'scrollable', headerText: 'Add Process Group', handler: { close: function () { +self.fileToBeUploaded = null; +$('#selected-file-name').text(''); +$('#upload-file-field').val(''); $('#new-process-group-name').val(''); +$('#new-process-group-dialog').removeData('pt'); + +// reset the form to ensure that the change fire will fire +self.fileForm.resetForm(); } } }); + +$('#upload-file-field-button').on('click', function (e) { +$('#upload-file-field').click(); }); + +$('#upload-file-field').on('change', function (e) { +$('#upload-file-field-button').hide(); + +self.fileToBeUploaded = e.target; +var filename = $(this).val(); +var filenameExtension; +if (!nfCommon.isBlank(filename)) { +filenameExtension = filename.replace(/^.*[\\\/]/, ''); +filename = filename.replace(/^.*[\\\/]/, '').replace(/\..*/, ''); Review comment: Revised - I moved to two separate methods describing its functions.
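The JS snippet above strips the directory prefix and the extension from the chosen file path in one chained expression; the refactor splits it into two named helpers. A Java rendering of the same two regexes (the helper names are mine, chosen to describe each step):

```java
public class FilenameSketch {
    // Drop everything up to and including the last slash or backslash
    // (the JS /^.*[\\\/]/ replacement).
    static String stripDirectoryPath(final String path) {
        return path.replaceAll("^.*[\\\\/]", "");
    }

    // Drop the first dot and everything after it (the JS /\..*/ replacement).
    static String stripFileExtension(final String name) {
        return name.replaceAll("\\..*", "");
    }

    public static void main(String[] args) {
        // Browsers report file inputs with a fake Windows-style prefix.
        final String chosen = "C:\\fakepath\\my-flow.json";
        System.out.println(stripFileExtension(stripDirectoryPath(chosen))); // my-flow
    }
}
```

Splitting the chained call into two single-purpose methods is the readability win the reviewer asked for: each regex gets a name that states what it removes.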
[GitHub] [nifi] mtien-apache commented on a change in pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on a change in pull request #4846: URL: https://github.com/apache/nifi/pull/4846#discussion_r583773895 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ProcessGroupResource.java ## @@ -4173,6 +4179,227 @@ private void sanitizeRegistryInfo(final VersionedProcessGroup versionedProcessGr } } +/** + * Uploads the specified versioned flow file and adds it to a new process group. + * + * @param httpServletRequest request + * @param in The flow file stream + * @return A processGroupEntity + * @throws IOException if there is an error during deserialization of the InputStream + */ +@POST +@Consumes(MediaType.MULTIPART_FORM_DATA) +@Produces(MediaType.APPLICATION_JSON) +@Path("{id}/process-groups/upload") +@ApiOperation( +value = "Uploads a versioned flow file and creates a process group", +response = ProcessGroupEntity.class, +authorizations = { +@Authorization(value = "Write - /process-groups/{uuid}") +} +) +@ApiResponses( +value = { +@ApiResponse(code = 400, message = "NiFi was unable to complete the request because it was invalid. The request should not be retried without modification."), +@ApiResponse(code = 401, message = "Client could not be authenticated."), +@ApiResponse(code = 403, message = "Client is not authorized to make this request."), +@ApiResponse(code = 404, message = "The specified resource could not be found."), +@ApiResponse(code = 409, message = "The request was valid but NiFi was not in the appropriate state to process it. 
Retrying the same request later may be successful.") +} +) +public Response uploadProcessGroup( +@Context final HttpServletRequest httpServletRequest, +@ApiParam( +value = "The process group id.", +required = true +) +@PathParam("id") final String groupId, +@ApiParam( +value = "The process group name.", +required = true +) +@FormDataParam("groupName") final String groupName, +@ApiParam( +value = "The process group X position.", +required = true +) +@FormDataParam("position-x") final Double positionX, +@ApiParam( +value = "The process group Y position.", +required = true +) +@FormDataParam("position-y") final Double positionY, +@ApiParam( +value = "The client id.", +required = true +) +@FormDataParam("clientId") final String clientId, +@ApiParam( +value = "Acknowledges that this node is disconnected to allow for mutable requests to proceed.", +required = false +) +@FormDataParam(DISCONNECTED_NODE_ACKNOWLEDGED) @DefaultValue("false") final Boolean disconnectedNodeAcknowledged, +@FormDataParam("file") final InputStream in) throws IOException { + +// ensure the group name is specified +if (StringUtils.isBlank(groupName)) { +throw new IllegalArgumentException("The process group name is required."); +} + +if (StringUtils.isBlank(groupId)) { +throw new IllegalArgumentException("The parent process group id must be the same as specified in the URI."); +} + +if (positionX == null) { +throw new IllegalArgumentException("The x coordinate of the proposed position must be specified."); +} + +if (positionY == null) { +throw new IllegalArgumentException("The y coordinate of the proposed position must be specified."); +} + +if (StringUtils.isBlank(clientId)) { +throw new IllegalArgumentException("The client id must be specified"); +} + +// create a new process group +final ProcessGroupEntity newProcessGroupEntity = new ProcessGroupEntity(); + +// get the contents of the InputStream as a String +String stringContent = null; +if (in != null) { +stringContent = IOUtils.toString(in, 
StandardCharsets.UTF_8); +} + +// deserialize content to a VersionedFlowSnapshot +VersionedFlowSnapshot deserializedSnapshot = null; +final ObjectMapper mapper = new ObjectMapper(); + +if (stringContent.length() > 0) { +try { +mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL); +
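The chain of blank/null argument checks in the endpoint above can be factored into one guard method. This is a hedged JDK-only sketch: the `UploadRequestValidator` class and its `isBlank` helper are invented stand-ins (the real code uses Commons Lang's `StringUtils.isBlank`), while the messages follow the endpoint as quoted:

```java
// Hypothetical extraction of the upload endpoint's argument checks.
public final class UploadRequestValidator {

    private UploadRequestValidator() {
    }

    // Mirrors the checks performed before the uploaded flow content is parsed.
    public static void validate(final String groupName, final String groupId,
                                final Double positionX, final Double positionY,
                                final String clientId) {
        if (isBlank(groupName)) {
            throw new IllegalArgumentException("The process group name is required.");
        }
        if (isBlank(groupId)) {
            throw new IllegalArgumentException("The parent process group id must be the same as specified in the URI.");
        }
        if (positionX == null) {
            throw new IllegalArgumentException("The x coordinate of the proposed position must be specified.");
        }
        if (positionY == null) {
            throw new IllegalArgumentException("The y coordinate of the proposed position must be specified.");
        }
        if (isBlank(clientId)) {
            throw new IllegalArgumentException("The client id must be specified");
        }
    }

    // Stand-in for org.apache.commons.lang3.StringUtils.isBlank
    private static boolean isBlank(final String value) {
        return value == null || value.trim().isEmpty();
    }
}
```

Centralizing the checks keeps the resource method focused on deserializing and creating the process group.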
[GitHub] [nifi] mtien-apache commented on a change in pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on a change in pull request #4846: URL: https://github.com/apache/nifi/pull/4846#discussion_r583767556 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ProcessGroupResource.java ## @@ -4173,6 +4179,227 @@ private void sanitizeRegistryInfo(final VersionedProcessGroup versionedProcessGr } } +/** + * Uploads the specified versioned flow file and adds it to a new process group. + * + * @param httpServletRequest request + * @param in The flow file stream + * @return A processGroupEntity + * @throws IOException if there is an error during deserialization of the InputStream + */ +@POST +@Consumes(MediaType.MULTIPART_FORM_DATA) +@Produces(MediaType.APPLICATION_JSON) +@Path("{id}/process-groups/upload") +@ApiOperation( +value = "Uploads a versioned flow file and creates a process group", +response = ProcessGroupEntity.class, +authorizations = { +@Authorization(value = "Write - /process-groups/{uuid}") +} +) +@ApiResponses( +value = { +@ApiResponse(code = 400, message = "NiFi was unable to complete the request because it was invalid. The request should not be retried without modification."), +@ApiResponse(code = 401, message = "Client could not be authenticated."), +@ApiResponse(code = 403, message = "Client is not authorized to make this request."), +@ApiResponse(code = 404, message = "The specified resource could not be found."), +@ApiResponse(code = 409, message = "The request was valid but NiFi was not in the appropriate state to process it. 
Retrying the same request later may be successful.") +} +) +public Response uploadProcessGroup( +@Context final HttpServletRequest httpServletRequest, +@ApiParam( +value = "The process group id.", +required = true +) +@PathParam("id") final String groupId, +@ApiParam( +value = "The process group name.", +required = true +) +@FormDataParam("groupName") final String groupName, +@ApiParam( +value = "The process group X position.", +required = true +) +@FormDataParam("position-x") final Double positionX, +@ApiParam( +value = "The process group Y position.", +required = true +) +@FormDataParam("position-y") final Double positionY, +@ApiParam( +value = "The client id.", +required = true +) +@FormDataParam("clientId") final String clientId, +@ApiParam( +value = "Acknowledges that this node is disconnected to allow for mutable requests to proceed.", +required = false +) +@FormDataParam(DISCONNECTED_NODE_ACKNOWLEDGED) @DefaultValue("false") final Boolean disconnectedNodeAcknowledged, +@FormDataParam("file") final InputStream in) throws IOException { + +// ensure the group name is specified +if (StringUtils.isBlank(groupName)) { +throw new IllegalArgumentException("The process group name is required."); +} + +if (StringUtils.isBlank(groupId)) { +throw new IllegalArgumentException("The parent process group id must be the same as specified in the URI."); Review comment: Revised the message. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] mtien-apache commented on a change in pull request #4846: NIFI-8260 Implement an Upload File capability in the NiFi UI for flow definitions.
mtien-apache commented on a change in pull request #4846: URL: https://github.com/apache/nifi/pull/4846#discussion_r583767187 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/groups/StandardProcessGroup.java ## @@ -3766,7 +3765,7 @@ public void updateFlow(final VersionedFlowSnapshot proposedSnapshot, final Strin final boolean updateDescendantVersionedFlows) { writeLock.lock(); try { -verifyCanUpdate(proposedSnapshot, true, verifyNotDirty); + verifyCanUpdate(proposedSnapshot, true, verifyNotDirty); Review comment: Strange - I don't think I touched this file. I'll revert it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-7668) Add configurable PBE AEAD algorithms to flow encryption
[ https://issues.apache.org/jira/browse/NIFI-7668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan Gough updated NIFI-7668: --- Fix Version/s: 1.14.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Add configurable PBE AEAD algorithms to flow encryption > --- > > Key: NIFI-7668 > URL: https://issues.apache.org/jira/browse/NIFI-7668 > Project: Apache NiFi > Issue Type: Improvement > Components: Configuration, Core Framework >Affects Versions: 1.12.0 >Reporter: Andy LoPresto >Assignee: David Handermann >Priority: Major > Labels: aead, configuration, encryption, pbe, security > Fix For: 1.14.0 > > Time Spent: 3h 10m > Remaining Estimate: 0h > > NIFI-7638 introduced a single custom PBE algorithm (pair for 128/256-bit > keys) which provided AEAD semantics using Argon2 for key derivation and > AES-GCM for authenticated encryption. This solution was a stopgap to allow > more robust encryption than AES-CBC without modifying the > {{EncryptionMethod}}, which is a single definition of encryption algorithms > and (supposed) KDFs referenced throughout the codebase. > Introducing changes to {{EncryptionMethod}} would have required massive > regression testing, further support changes to {{EncryptContent}}, encrypted > repository implementations, multiple documentation changes, etc. This change > allows for a single custom algorithm which makes reasonable default decisions > around cost parameters and algorithm selection, meeting the user requirements > without demanding far-reaching changes. > Instead, this ticket proposes an intentional enhancement to the > {{nifi.properties}} structure to add a new {{nifi.sensitive.props.kdf}} > property to complement the existing {{nifi.sensitive.props.algorithm}} > property. 
This will allow arbitrary secure KDFs (Argon2, bcrypt, scrypt, > PBKDF2) to be specified with custom cost parameters and combined with any > keyed encryption algorithm (AES-CBC, AES-GCM, AES-CTR) to derive a key and > encrypt the flow sensitive properties. > For backward compatibility, as this is likely to go in a 1.13.0 release, not > a major release, an existing {{nifi.properties}} file will work fine. If the > {{nifi.sensitive.props.kdf}} value is not specified, it will not be used, > which is acceptable for all existing {{EncryptionMethod}} values which are > already supported by the {{StringEncryptor}} class. If a _new_ algorithm is > specified (e.g. one of the raw keyed algorithms), the KDF will need to be > present and will be checked for appropriateness and cost parameter validity. > No default value changes will be made. Thus, this will only affect > security-conscious users who explicitly change those values to reflect more > robust key derivation and data protection algorithm choices. -- This message was sent by Atlassian Jira (v8.3.4#803005)
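The KDF-plus-keyed-cipher pairing the ticket describes can be illustrated with the JDK alone: derive a key with PBKDF2, then encrypt with AES/GCM. This is only a sketch of the pattern; the class name and cost parameters below are invented for the example and are not NiFi's actual defaults or implementation (NiFi's real behavior sits behind nifi.sensitive.props.kdf and nifi.sensitive.props.algorithm):

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public final class SensitivePropertyCipher {

    // Illustrative cost parameters only -- not NiFi's defaults.
    private static final int ITERATIONS = 65_536;
    private static final int KEY_BITS = 256;
    private static final int GCM_TAG_BITS = 128;

    private SensitivePropertyCipher() {
    }

    // Derive an AES key from a password with PBKDF2 (the KDF role
    // that nifi.sensitive.props.kdf would select).
    private static SecretKeySpec deriveKey(final char[] password, final byte[] salt) throws Exception {
        final SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512");
        final byte[] key = factory.generateSecret(
                new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS)).getEncoded();
        return new SecretKeySpec(key, "AES");
    }

    // Authenticated encryption with AES/GCM (the keyed-algorithm role
    // that nifi.sensitive.props.algorithm would select).
    public static byte[] encrypt(final char[] password, final byte[] salt, final byte[] iv,
                                 final byte[] plaintext) throws Exception {
        final Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, deriveKey(password, salt), new GCMParameterSpec(GCM_TAG_BITS, iv));
        return cipher.doFinal(plaintext);
    }

    public static byte[] decrypt(final char[] password, final byte[] salt, final byte[] iv,
                                 final byte[] ciphertext) throws Exception {
        final Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, deriveKey(password, salt), new GCMParameterSpec(GCM_TAG_BITS, iv));
        return cipher.doFinal(ciphertext);
    }
}
```

Because the KDF and the cipher are separate steps, either can be swapped (e.g. Argon2 in place of PBKDF2, AES-CTR in place of AES/GCM) without touching the other, which is exactly the decoupling the two properties provide.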
[jira] [Updated] (NIFI-8266) InvokeScriptedProcessor can throw NPE during custom property validation
[ https://issues.apache.org/jira/browse/NIFI-8266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-8266: --- Status: Patch Available (was: In Progress) > InvokeScriptedProcessor can throw NPE during custom property validation > --- > > Key: NIFI-8266 > URL: https://issues.apache.org/jira/browse/NIFI-8266 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > If a script in InvokeScriptedProcessor (ISP) specifies additional properties > to be added to the configuration dialog, a NullPointerException can occur > during validation: > This appears to be the result of the ValidationContext for the ISP being > created before the script is loaded, and thus the properties are not present > in the context during validation. However the list iterated over in > AbstractConfigurableComponent.validate() is the list of supported property > descriptors, which may or may not include the script's properties when > validate() is called. > The list to be iterated over should be the list of properties in the > ValidationContext, to avoid an NPE. Having said that, for components that can > provide properties dynamically (via script, e.g.) the ValidationContext > should be recreated rather than reusing the first one created. This could be > accomplished with a component annotation (ProvidesCustomProperties or > something). If the annotation exists for the component, the ValidationContext > should be recreated before validation; otherwise, the first one created > should be retained for performance purposes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] mattyb149 opened a new pull request #4851: NIFI-8266: Fix NPE when validating supported properties against validation context
mattyb149 opened a new pull request #4851: URL: https://github.com/apache/nifi/pull/4851 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR Rather than getting the component's supported properties and validating them using a context that may or may not have the same properties, this change just iterates over the context's properties and validates those. 99% of the time the properties match, but for scripted components that can add properties once the script is loaded, it was causing a NullPointerException. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (NIFI-8266) InvokeScriptedProcessor can throw NPE during custom property validation
[ https://issues.apache.org/jira/browse/NIFI-8266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess reassigned NIFI-8266: -- Assignee: Matt Burgess > InvokeScriptedProcessor can throw NPE during custom property validation > --- > > Key: NIFI-8266 > URL: https://issues.apache.org/jira/browse/NIFI-8266 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > > If a script in InvokeScriptedProcessor (ISP) specifies additional properties > to be added to the configuration dialog, a NullPointerException can occur > during validation: > This appears to be the result of the ValidationContext for the ISP being > created before the script is loaded, and thus the properties are not present > in the context during validation. However the list iterated over in > AbstractConfigurableComponent.validate() is the list of supported property > descriptors, which may or may not include the script's properties when > validate() is called. > The list to be iterated over should be the list of properties in the > ValidationContext, to avoid an NPE. Having said that, for components that can > provide properties dynamically (via script, e.g.) the ValidationContext > should be recreated rather than reusing the first one created. This could be > accomplished with a component annotation (ProvidesCustomProperties or > something). If the annotation exists for the component, the ValidationContext > should be recreated before validation; otherwise, the first one created > should be retained for performance purposes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-8266) InvokeScriptedProcessor can throw NPE during custom property validation
[ https://issues.apache.org/jira/browse/NIFI-8266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17291722#comment-17291722 ] Matt Burgess commented on NIFI-8266: Decided to just implement the part where we validate the context's properties rather than what the component reports as supported properties. So it will likely validate a new InvokeScriptedProcessor before any script body/file is set, so only the existing 4 properties are added. Once properties are set, the validation context is recreated and will include any new properties defined by the script. It's a much cleaner and simpler solution and solves the NPE issue. > InvokeScriptedProcessor can throw NPE during custom property validation > --- > > Key: NIFI-8266 > URL: https://issues.apache.org/jira/browse/NIFI-8266 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Matt Burgess >Priority: Major > > If a script in InvokeScriptedProcessor (ISP) specifies additional properties > to be added to the configuration dialog, a NullPointerException can occur > during validation: > This appears to be the result of the ValidationContext for the ISP being > created before the script is loaded, and thus the properties are not present > in the context during validation. However the list iterated over in > AbstractConfigurableComponent.validate() is the list of supported property > descriptors, which may or may not include the script's properties when > validate() is called. > The list to be iterated over should be the list of properties in the > ValidationContext, to avoid an NPE. Having said that, for components that can > provide properties dynamically (via script, e.g.) the ValidationContext > should be recreated rather than reusing the first one created. This could be > accomplished with a component annotation (ProvidesCustomProperties or > something). 
If the annotation exists for the component, the ValidationContext > should be recreated before validation; otherwise, the first one created > should be retained for performance purposes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
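The fix discussed in this thread, iterating the validation context's own properties instead of the component's supported-descriptor list, can be sketched with plain collections. The types below are simplified stand-ins, not NiFi's real ValidationContext/PropertyDescriptor API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical simplification: property names map to values (the "context")
// and to validators (the component's known descriptors).
public final class ContextDrivenValidation {

    private ContextDrivenValidation() {
    }

    // Validates only the properties the context actually holds, so a
    // descriptor the context has never seen (e.g. one added later by a
    // loaded script) cannot produce a null lookup during validation.
    public static List<String> validate(final Map<String, String> contextProperties,
                                        final Map<String, Predicate<String>> validators) {
        final List<String> failures = new ArrayList<>();
        for (final Map.Entry<String, String> entry : contextProperties.entrySet()) {
            final Predicate<String> validator = validators.get(entry.getKey());
            if (validator != null && !validator.test(entry.getValue())) {
                failures.add(entry.getKey() + " is invalid");
            }
        }
        return failures;
    }
}
```

The key point is the iteration source: looping over the context guarantees every looked-up value exists, whereas looping over the supported-descriptor list can dereference entries the context was built without.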
[jira] [Updated] (MINIFICPP-1515) Add integration tests verifying that MiNiFi handles large flowfiles as expected
[ https://issues.apache.org/jira/browse/MINIFICPP-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1515: Description: *Background:* Issues with big flowfiles were reported on stack overflow: [https://stackoverflow.com/questions/66330866/minifi-getfile-processor-fails-to-get-large-files/66334615?noredirect=1#comment117275399_66334615] This seems to be related to two issues: one is a [narrowing exception that happens|https://github.com/apache/nifi-minifi-cpp/blob/minifi-cpp-0.9.0-RC2/libminifi/src/core/ContentSession.cpp#L75] when trying to determine the length of the file to be written into the content repository, and the other is that we use _stat on Windows even though we should be using _stat64 to query files larger than 2GB. *Proposal:* MiNiFi should be tested with large files. *Acceptance criteria:*
{code:python|title=Feature definition proposal}
Scenario Outline: MiNiFi is capable of manipulating flowfiles of different sizes
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with <file size> of content is present in "/tmp/input"
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with matching content is placed in the monitored directory in less than 20 seconds

  Examples: File size
  | file size |
  | 10 B      |
  | 1.5 KiB   |
  | 10 MiB    |
  | 1.0 GiB   |
  | 2.1 GiB   |
{code}
was: *Background:* Issues with big flowfiles were reported here: https://stackoverflow.com/questions/66330866/minifi-getfile-processor-fails-to-get-large-files/66334615?noredirect=1#comment117275399_66334615 This seems to be related to two issues: one is a [narrowing exception that happens|https://github.com/apache/nifi-minifi-cpp/blob/minifi-cpp-0.9.0-RC2/libminifi/src/core/ContentSession.cpp#L75] when trying to determine the length of the file to be written into the content repository, and the other is that we use _stat on Windows even though we should be using _stat64 to query files larger than 2GB. *Proposal:* MiNiFi should be tested with large files. *Acceptance criteria:*
{code:python|title=Feature definition proposal}
Scenario Outline: MiNiFi is capable of manipulating flowfiles of different sizes
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with <file size> of content is present in "/tmp/input"
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with matching content is placed in the monitored directory in less than 20 seconds

  Examples: File size
  | file size |
  | 10 B      |
  | 1.5 KiB   |
  | 10 MiB    |
  | 1.0 GiB   |
  | 2.1 GiB   |
{code}
> Add integration tests verifying that MiNiFi handles large flowfiles as
> expected
> ---
>
> Key: MINIFICPP-1515
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1515
> Project: Apache NiFi MiNiFi C++
> Issue Type: Improvement
> Affects Versions: 1.0.0, 0.9.0
> Reporter: Adam Hunyadi
> Assignee: Adam Hunyadi
> Priority: Minor
> Labels: MiNiFi-CPP-Hygiene
> Fix For: 1.0.0
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> *Background:*
> Issues with big flowfiles were reported on stack overflow:
> [https://stackoverflow.com/questions/66330866/minifi-getfile-processor-fails-to-get-large-files/66334615?noredirect=1#comment117275399_66334615]
> This seems to be related to two issues: one is a [narrowing exception that happens|https://github.com/apache/nifi-minifi-cpp/blob/minifi-cpp-0.9.0-RC2/libminifi/src/core/ContentSession.cpp#L75] when trying to determine the length of the file to be written into the content repository, and the other is that we use _stat on Windows even though we should be using _stat64 to query files larger than 2GB.
> *Proposal:*
> MiNiFi should be tested with large files.
> *Acceptance criteria:*
> {code:python|title=Feature definition proposal}
> Scenario Outline: MiNiFi is capable of manipulating flowfiles of different sizes
>   Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
>   And a file with <file size> of content is present in "/tmp/input"
>   And a PutFile processor with the "Directory" property set to "/tmp/output"
>   And the "success" relationship of the GetFile processor is connected to the PutFile
>   When the MiNiFi instance starts up
>   Then a flowfile with matching content is placed in the monitored directory in less than 20 seconds
>
>   Examples: File size
>   | file size |
>   | 10 B      |
>   | 1.5 KiB   |
>   | 10 MiB    |
>   | 1.0 GiB   |
>   | 2.1 GiB   |
> {code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
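The 2.1 GiB row in the table above is the interesting one: its byte count exceeds the 32-bit range that a plain `_stat` call on Windows can report, which is exactly the failure mode described in the background. A minimal Python sketch (helper names are hypothetical, not part of the actual MiNiFi test framework) of how input files of these sizes could be materialized cheaply as sparse files:

```python
import os
import tempfile

# Multipliers for the size suffixes used in the Examples table above.
UNITS = {"B": 1, "KiB": 1024, "MiB": 1024 ** 2, "GiB": 1024 ** 3}

def parse_size(text):
    """Turn a size string such as '1.5 KiB' or '2.1 GiB' into a byte count."""
    value, unit = text.split()
    return int(float(value) * UNITS[unit])

def make_sparse_file(path, size_bytes):
    """Create a file of the given logical size without writing that many bytes."""
    with open(path, "wb") as f:
        f.truncate(size_bytes)

if __name__ == "__main__":
    sizes = ["10 B", "1.5 KiB", "10 MiB", "1.0 GiB", "2.1 GiB"]
    with tempfile.TemporaryDirectory() as tmp:
        for spec in sizes:
            path = os.path.join(tmp, spec.replace(" ", "_"))
            make_sparse_file(path, parse_size(spec))
            # os.stat reports 64-bit sizes, unlike plain _stat on Windows
            assert os.stat(path).st_size == parse_size(spec)
```

Using `truncate` keeps the multi-gigabyte cases fast, since no data blocks are written until something fills them.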
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #1018: MINIFICPP-1513 - Add cleanup to behave integration test running
hunyadi-dev commented on a change in pull request #1018: URL: https://github.com/apache/nifi-minifi-cpp/pull/1018#discussion_r583683672 ## File path: docker/test/integration/MiNiFi_integration_test_driver.py ## @@ -22,7 +22,6 @@ class MiNiFi_integration_test(): def __init__(self, context): -logging.info("MiNiFi_integration_test init") Review comment: The logging call made before `before_all` is triggered has been removed, because it appears to use the previous `logging` instantiation. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-8230) Remove default Sensitive Properties Key
[ https://issues.apache.org/jira/browse/NIFI-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17291687#comment-17291687 ] David Handermann commented on NIFI-8230: Thanks for your work on this [~cef111]. Were you interested in working through the whole implementation, or just contributing input? The PR for NIFI-7668 was just merged, which removed {{StringEncryptor}} and refactored the approach to a new interface named {{PropertyEncryptor}} with specific implementations. I was going to get started on this issue soon, but it would be great to have your input on a review. I recommend reviewing [GitHub PR 4809|https://github.com/apache/nifi/pull/4809] for the new implementation to get some background on how things work now. The {{PropertyEncryptorFactory}} class now encapsulates the default sensitive key, so that will be the primary class that needs to be changed. > Remove default Sensitive Properties Key > --- > > Key: NIFI-8230 > URL: https://issues.apache.org/jira/browse/NIFI-8230 > Project: Apache NiFi > Issue Type: Sub-task > Components: Security >Affects Versions: 1.13.0 >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > > Support for encryption of sensitive properties relies on configuration of the > Sensitive Properties Key specified using {{nifi.sensitive.props.key}} in > {{nifi.properties}}. The default behavior of {{StringEncryptor}} allows for > the key to be blank and falls back to a default value, logging a verbose > error message indicating that an explicit key should be provided. > The fallback to a default value for the Sensitive Properties Key should be > removed and an exception should be thrown indicating that the property value > is required. Deployments that already have an explicit value will not be > impacted. Migration guidance for upgrading should include steps to encrypt > the configuration using a new key. 
> It may be worthwhile generating a random Sensitive Properties Key for new > installations where there is no existing flow. This would allow new standalone > installations to run with a secure key without the need for manual steps. -- This message was sent by Atlassian Jira (v8.3.4#803005)
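On the random-key idea above, a minimal sketch of what key generation for a new installation could look like (Python for illustration only; the function name is hypothetical, and NiFi's actual key handling lives in Java around {{PropertyEncryptorFactory}}):

```python
import secrets

def generate_sensitive_props_key(num_bytes=32):
    """Generate a random, URL-safe value suitable for a property such as
    nifi.sensitive.props.key. Hypothetical helper: this only illustrates
    producing a cryptographically random key for a fresh installation."""
    return secrets.token_urlsafe(num_bytes)

if __name__ == "__main__":
    # Each new installation would get its own unguessable key.
    print(f"nifi.sensitive.props.key={generate_sensitive_props_key()}")
```

`secrets` (rather than `random`) is the right source here because the key protects encrypted sensitive properties and must be unpredictable.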
[GitHub] [nifi-minifi-cpp] hunyadi-dev opened a new pull request #1018: MINIFICPP-1513 - Add cleanup to behave integration test running
hunyadi-dev opened a new pull request #1018: URL: https://github.com/apache/nifi-minifi-cpp/pull/1018 Currently the python based integration tests instantiate separate loggers. This makes it difficult to capture logging when multiple tests are run within the integration tests (e.g. a single Scenario Outline with multiple examples). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-7986) Add UI to view and update assigned parameter context for each nested process group.
[ https://issues.apache.org/jira/browse/NIFI-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17291665#comment-17291665 ] John Wise commented on NIFI-7986: - We use variables specifically because they are inheritable by nested process groups. Parameters are a non-starter for us because they aren't inheritable. Managing parameter contexts at multiple nested levels would be an administrative nightmare for us. > Add UI to view and update assigned parameter context for each nested process > group. > --- > > Key: NIFI-7986 > URL: https://issues.apache.org/jira/browse/NIFI-7986 > Project: Apache NiFi > Issue Type: Improvement > Components: Core UI >Affects Versions: 1.10.0, 1.12.0, 1.11.4, 1.12.1 >Reporter: George Knaggs >Priority: Minor > > Currently the parameter context for a process group can only be assigned > from its "Configuration" window. The scope of the parameter context only > applies to processors and controller services directly within the process > group to which it is assigned, excluding any nested process groups, which may > have a different or no parameter context assigned. This is remarkably > different from how scoping worked with variables, which had an inheritance > model. Because of these differences, when making a copy of a flow or importing a > flow from the NiFi Registry that contains nested process groups, users must > navigate the entire flow and, for each nested process group, open the configure > window to check and possibly change the assigned parameter context. The use of > nested process groups has the advantage of keeping flows easy to read, monitor, > and maintain, but is now a disadvantage if you want to parameterize your flows. > This issue was also highlighted in "Better Support for Parameterizing Flows" > in the NiFi Feature Flows confluence blog, specifically under the "Import Flow" > section: > {code:java} > 4.
If there are nested Process Groups in the flow, the UX will need to > provide a way to specify which Parameter Context should be used for each of > those, since Parameter Contexts are not inherited, and because the intent of > the original flow was to use a separate context as well. This could be done > either by showing a hierarchy of the Process Groups and choosing a Parameter > Context for each, or by showing a listing of Parameter Context Names that are > referenced in the flow pulled from Registry and allowing the user to map each > of those names to a locally defined Parameter Context. > Note that this user experience for importing the flow addresses both of the > key use cases: migration of a flow from dev to test to prod, as well as the > need to create parameterized flows (i.e., import the same flow multiple times > into the same environment, each with different parameters). Each time the > user imports the flow, they can choose a new set of parameters to use.{code} > I propose a change to the configuration window for a process group: add a > "Parameter Contexts" tab which shows all nested process groups by name in a > list, with a scope/path/breadcrumb to help identify them, and also shows the > assigned parameter context. From here the user can check whether the correct > parameter context is assigned to the nested process groups. There could then > be a drill-in arrow ↘︎ icon for quickly drilling into the Configuration window > for the nested process group so the user can change the assigned parameter > context. It may not be feasible to navigate back to make additional changes, > so alternatively the UI could allow changing the assigned parameter > context of any nested process group from the parent process group.
> As an added bonus, the same concept could be used within the Controller Services > tab to show controller services assigned to nested process groups, quickly > identifying which controller services are disabled or invalid, which is also > a pain when starting up flows with many nested process groups. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1511) Add missing VS 2019 documentation
[ https://issues.apache.org/jira/browse/MINIFICPP-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Debreceni resolved MINIFICPP-1511. --- Resolution: Fixed > Add missing VS 2019 documentation > - > > Key: MINIFICPP-1511 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1511 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Adam Debreceni >Assignee: Adam Debreceni >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > Windows.md is missing the Visual Studio 2019 documentation. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1516) Fix transient issues related to docker container cleanup
Adam Hunyadi created MINIFICPP-1516: --- Summary: Fix transient issues related to docker container cleanup Key: MINIFICPP-1516 URL: https://issues.apache.org/jira/browse/MINIFICPP-1516 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.9.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* There are still transient issues caused by the python docker API occasionally failing to clean up containers properly. *Proposal:* We should add retry attempts when starting docker containers and potentially improve the cleanup process. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] hunyadi-dev opened a new pull request #1017: MINIFICPP-1515 - Add integration tests testing different flowfile sizes in a simple flow
hunyadi-dev opened a new pull request #1017: URL: https://github.com/apache/nifi-minifi-cpp/pull/1017 Issues with big flowfiles were reported on [stack overflow](https://stackoverflow.com/questions/66330866/minifi-getfile-processor-fails-to-get-large-files/66334615?noredirect=1#comment117275399_66334615). This seems to be related to two issues: one is a narrowing exception that happens when trying to determine the length of the file to be written into the content repository, and the other is that we use `_stat` on Windows even though we should be using `_stat64` to query files larger than 2GB. In order to prevent these issues from reappearing, we should add some integration test coverage. This PR will remain in draft status until the mentioned issues are fixed. Also, integration test logs are currently bloated for scenario outline based testing; this is to be fixed as [part of this task](https://issues.apache.org/jira/browse/MINIFICPP-1515). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (MINIFICPP-1515) Add integration tests verifying that MiNiFi handles large flowfiles as expected
[ https://issues.apache.org/jira/browse/MINIFICPP-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1515: Labels: MiNiFi-CPP-Hygiene (was: ) > Add integration tests verifying that MiNiFi handles large flowfiles as > expected > --- > > Key: MINIFICPP-1515 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1515 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 1.0.0, 0.9.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > > *Background:* > Issues with big flowfiles were reported here: > https://stackoverflow.com/questions/66330866/minifi-getfile-processor-fails-to-get-large-files/66334615?noredirect=1#comment117275399_66334615 > This seems to be related to two issues: one is a [narrowing exception that > happens|https://github.com/apache/nifi-minifi-cpp/blob/minifi-cpp-0.9.0-RC2/libminifi/src/core/ContentSession.cpp#L75] > when trying to determine the length of the file to be written into the > content repository, and the other is that we use _stat on Windows even though > we should be using _stat64 to query files larger than 2GB.
> *Proposal:*
> MiNiFi should be tested with large files.
> *Acceptance criteria:*
> {code:python|title=Feature definition proposal}
> Scenario Outline: MiNiFi is capable of manipulating flowfiles of different sizes
>   Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
>   And a file with <file size> of content is present in "/tmp/input"
>   And a PutFile processor with the "Directory" property set to "/tmp/output"
>   And the "success" relationship of the GetFile processor is connected to the PutFile
>   When the MiNiFi instance starts up
>   Then a flowfile with matching content is placed in the monitored directory in less than 20 seconds
>
>   Examples: File size
>   | file size |
>   | 10 B      |
>   | 1.5 KiB   |
>   | 10 MiB    |
>   | 1.0 GiB   |
>   | 2.1 GiB   |
> {code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1514) Improve integration test coverage for MiNiFi
[ https://issues.apache.org/jira/browse/MINIFICPP-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1514: Labels: (was: MiNiFi-CPP-Hygiene) > Improve integration test coverage for MiNiFi > > > Key: MINIFICPP-1514 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1514 > Project: Apache NiFi MiNiFi C++ > Issue Type: Epic >Affects Versions: 0.9.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Background:* > There are quite a lot of improvements to be made to the integration tests. > *Proposal:* > This epic is to be used to collect such tasks and monitor the efforts made to > extend integration test functionality. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1515) Add integration tests verifying that MiNiFi handles large flowfiles as expected
Adam Hunyadi created MINIFICPP-1515: --- Summary: Add integration tests verifying that MiNiFi handles large flowfiles as expected Key: MINIFICPP-1515 URL: https://issues.apache.org/jira/browse/MINIFICPP-1515 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 1.0.0, 0.9.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* Issues with big flowfiles were reported here: https://stackoverflow.com/questions/66330866/minifi-getfile-processor-fails-to-get-large-files/66334615?noredirect=1#comment117275399_66334615 This seems to be related to two issues: one is a [narrowing exception that happens|https://github.com/apache/nifi-minifi-cpp/blob/minifi-cpp-0.9.0-RC2/libminifi/src/core/ContentSession.cpp#L75] when trying to determine the length of the file to be written into the content repository, and the other is that we use _stat on Windows even though we should be using _stat64 to query files larger than 2GB. *Proposal:* MiNiFi should be tested with large files. *Acceptance criteria:*
{code:python|title=Feature definition proposal}
Scenario Outline: MiNiFi is capable of manipulating flowfiles of different sizes
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with <file size> of content is present in "/tmp/input"
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with matching content is placed in the monitored directory in less than 20 seconds

  Examples: File size
  | file size |
  | 10 B      |
  | 1.5 KiB   |
  | 10 MiB    |
  | 1.0 GiB   |
  | 2.1 GiB   |
{code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1514) Improve integration test coverage for MiNiFi
Adam Hunyadi created MINIFICPP-1514: --- Summary: Improve integration test coverage for MiNiFi Key: MINIFICPP-1514 URL: https://issues.apache.org/jira/browse/MINIFICPP-1514 Project: Apache NiFi MiNiFi C++ Issue Type: Epic Affects Versions: 0.9.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* There are quite a lot of improvements to be made to the integration tests. *Proposal:* This epic is to be used to collect such tasks and monitor the efforts made to extend integration test functionality. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1445) Clean up docker integration test framework
[ https://issues.apache.org/jira/browse/MINIFICPP-1445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1445. - Resolution: Done > Clean up docker integration test framework > -- > > Key: MINIFICPP-1445 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1445 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.9.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > Time Spent: 5h 40m > Remaining Estimate: 0h > > *Background:* > Docker integration tests currently live in two long __init__.py > files. > *Proposal:* > This can easily be refactored by separating the components into python > modules based on their functionality. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1513) Make integration tests log using a common logger
Adam Hunyadi created MINIFICPP-1513: --- Summary: Make integration tests log using a common logger Key: MINIFICPP-1513 URL: https://issues.apache.org/jira/browse/MINIFICPP-1513 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 1.0.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* Currently the python based integration tests instantiate separate loggers. This makes it difficult to capture logging when multiple tests are run within the integration tests (e.g. a single Scenario Outline with multiple examples). *Proposal:* We should extract the logging functionality into its own module and instantiate loggers exactly once. -- This message was sent by Atlassian Jira (v8.3.4#803005)
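The proposal above, one logging module instantiated exactly once, could be sketched along these lines (module and function names are hypothetical, not the actual MiNiFi test code):

```python
# minifi_test_logging.py - hypothetical shared logging module for the
# behave-based integration tests: however many scenarios import it, the
# handler is attached exactly once, so all output lands in one place.
import logging

_configured = False

def get_logger(name="minifi_integration_tests"):
    """Return the shared test logger, configuring it only on first use."""
    global _configured
    logger = logging.getLogger(name)
    if not _configured:
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
        _configured = True
    return logger
```

Since `logging.getLogger` returns the same object for the same name, every caller shares the logger, and the guard keeps repeated imports from stacking duplicate handlers across scenario examples.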
[GitHub] [nifi-minifi-cpp] adamdebreceni closed pull request #948: MINIFICPP-1325 - Refactor and test YAML connection parsing
adamdebreceni closed pull request #948: URL: https://github.com/apache/nifi-minifi-cpp/pull/948 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (MINIFICPP-1512) Clarify README to run bin/minifi.sh from the bin tree, not the source tree.
Marton Szasz created MINIFICPP-1512: --- Summary: Clarify README to run bin/minifi.sh from the bin tree, not the source tree. Key: MINIFICPP-1512 URL: https://issues.apache.org/jira/browse/MINIFICPP-1512 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Reporter: Marton Szasz Assignee: Marton Szasz Fix For: 1.0.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1373) Implement and test a simplified ConsumeKafka processor without security protocols
[ https://issues.apache.org/jira/browse/MINIFICPP-1373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1373. - Resolution: Resolved > Implement and test a simplified ConsumeKafka processor without security > protocols > - > > Key: MINIFICPP-1373 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1373 > Project: Apache NiFi MiNiFi C++ > Issue Type: Sub-task >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Major > Fix For: 1.0.0 > > Attachments: ConsumeKafka_test_matrix.numbers, > ConsumeKafka_test_matrix.pdf > > Time Spent: 29h 40m > Remaining Estimate: 0h > > *Acceptance Criteria:* > *{color:#de350b}See attached test matrix plan.{color}* > Additional test (that require multiple Kafka consumers): > {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with > {color:#0747a6}different group ids{color} subscribed to the same topic > {color:#505f79}*WHEN*{color} a message is published to the topic > {color:#505f79}*THEN*{color} both of the ConsumeKafka processors should > produce identical flowfiles > {quote} > {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with > {color:#0747a6}the same group id{color} subscribed to the same topic > {color:#505f79}*WHEN*{color} a message is published to the topic > {color:#505f79}*THEN*{color} only one of the ConsumeKafka processors should > produce a flowfile > {quote} > {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with > {color:#0747a6}the same group id{color} subscribed to the same topic with > exactly two partitions with {color:#0747a6}Offset Reset{color} set to > {color:#0747a6}earliest{color}. 
> {color:#505f79}*WHEN*{color} messages were already present on both > partitions and the second one crashes > {color:#505f79}*THEN*{color} the first one should process duplicates of the > messages that originally came to the second (at_least_once delivery) > {quote} > {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with > {color:#0747a6}the same group id{color} subscribed to the same topic with > exactly two partitions with {color:#0747a6}Offset Reset{color} set to > {color:#0747a6}latest{color}. > {color:#505f79}*WHEN*{color} messages were already present on both > partitions and the second one crashes > {color:#505f79}*THEN*{color} the first one should {color:#0747a6}not{color} > process duplicates of the messages that originally came to the second > (at_most_once delivery) > {quote} > {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with > {color:#0747a6}the same group id{color} subscribed to the same topic with > exactly two partitions with {color:#0747a6}Offset Reset{color} set to > {color:#0747a6}none{color}. > {color:#505f79}*WHEN*{color} messages were already present on both > partitions and the second one crashes > {color:#505f79}*THEN*{color} the first one should throw an exception > {quote} > *Background:* > See parent task. > *Proposal:* > This should be the first part of the implementation, the second being adding > and testing multiple security protocols. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #1016: MINIFICPP-1511 - Update documentation with VS 2019
arpadboda closed pull request #1016: URL: https://github.com/apache/nifi-minifi-cpp/pull/1016 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #940: MINIFICPP-1373 - Implement ConsumeKafka
arpadboda closed pull request #940: URL: https://github.com/apache/nifi-minifi-cpp/pull/940 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-site] adamdebreceni merged pull request #43: Add Adam Debreceni to committer list
adamdebreceni merged pull request #43: URL: https://github.com/apache/nifi-site/pull/43 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-site] adamdebreceni opened a new pull request #43: Add Adam Debreceni to committer list
adamdebreceni opened a new pull request #43: URL: https://github.com/apache/nifi-site/pull/43 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-8230) Remove default Sensitive Properties Key
[ https://issues.apache.org/jira/browse/NIFI-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17291496#comment-17291496 ] Moncef ABBOUD commented on NIFI-8230: - Hello [~exceptionfactory], Thank you for the thorough reply. I had started some initial work on this issue. I updated the StringEncryptor class to throw an exception in case the property is empty and adjusted the corresponding tests. [https://github.com/CefBoud/nifi/commit/f40e97e290172f7f07d2a4d8af804c6a59b2daa5] I hope this can be of some use. Best of luck. > Remove default Sensitive Properties Key > --- > > Key: NIFI-8230 > URL: https://issues.apache.org/jira/browse/NIFI-8230 > Project: Apache NiFi > Issue Type: Sub-task > Components: Security >Affects Versions: 1.13.0 >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > > Support for encryption of sensitive properties relies on configuration of the > Sensitive Properties Key specified using {{nifi.sensitive.props.key}} in > {{nifi.properties}}. The default behavior of {{StringEncryptor}} allows for > the key to be blank and falls back to a default value, logging a verbose > error message indicating that an explicit key should be provided. > The fallback to a default value for the Sensitive Properties Key should be > removed and an exception should be thrown indicating that the property value > is required. Deployments that already have an explicit value will not be > impacted. Migration guidance for upgrading should include steps to encrypt > the configuration using a new key. > It may be worthwhile generating a random Sensitive Properties Key for new > installations where there is no existing flow. This would allow new standalone > installations to run with a secure key without the need for manual steps. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-8267) HWX Schema Registry - Cache size doesn't support long
[ https://issues.apache.org/jira/browse/NIFI-8267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-8267: - Component/s: Extensions > HWX Schema Registry - Cache size doesn't support long > - > > Key: NIFI-8267 > URL: https://issues.apache.org/jira/browse/NIFI-8267 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > In version 0.8.1, the cache size needs to be a positive number; that is the > only verification being done: > [https://github.com/hortonworks/registry/blob/0.8.1/schema-registry/client/src/main/java/com/hortonworks/registries/schemaregistry/client/SchemaRegistryClient.java#L1292] > In more recent versions of the registry, the value is first converted into an > integer: > [https://github.com/hortonworks/registry/blob/0.9.0/schema-registry/client/src/main/java/com/hortonworks/registries/schemaregistry/client/SchemaRegistryClient.java#L1442] > The problem is that we use a long in NiFi and the conversion fails. This > needs to be changed. > This applies to: > * CLASSLOADER_CACHE_SIZE > * CLASSLOADER_CACHE_EXPIRY_INTERVAL_SECS > * SCHEMA_VERSION_CACHE_SIZE > * SCHEMA_VERSION_CACHE_EXPIRY_INTERVAL_SECS -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-8267) HWX Schema Registry - Cache size doesn't support long
[ https://issues.apache.org/jira/browse/NIFI-8267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-8267: - Status: Patch Available (was: Open) > HWX Schema Registry - Cache size doesn't support long > - > > Key: NIFI-8267 > URL: https://issues.apache.org/jira/browse/NIFI-8267 > Project: Apache NiFi > Issue Type: Bug >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > In version 0.8.1, the cache size needs to be a positive number; that is the > only verification being done: > [https://github.com/hortonworks/registry/blob/0.8.1/schema-registry/client/src/main/java/com/hortonworks/registries/schemaregistry/client/SchemaRegistryClient.java#L1292] > In more recent versions of the registry, the value is first converted into an > integer: > [https://github.com/hortonworks/registry/blob/0.9.0/schema-registry/client/src/main/java/com/hortonworks/registries/schemaregistry/client/SchemaRegistryClient.java#L1442] > The problem is that we use a long in NiFi and the conversion fails. This > needs to be changed. > This applies to: > * CLASSLOADER_CACHE_SIZE > * CLASSLOADER_CACHE_EXPIRY_INTERVAL_SECS > * SCHEMA_VERSION_CACHE_SIZE > * SCHEMA_VERSION_CACHE_EXPIRY_INTERVAL_SECS -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] pvillard31 opened a new pull request #4850: NIFI-8267 - Fix for HWX Schema Registry to use integers
pvillard31 opened a new pull request #4850: URL: https://github.com/apache/nifi/pull/4850 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:
### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)?
- [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._
### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] Have you verified that the full build is successful on JDK 8?
- [ ] Have you verified that the full build is successful on JDK 11?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`?
- [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`?
- [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties?
### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?
### Note:
Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-8267) HWX Schema Registry - Cache size doesn't support long
Pierre Villard created NIFI-8267: Summary: HWX Schema Registry - Cache size doesn't support long Key: NIFI-8267 URL: https://issues.apache.org/jira/browse/NIFI-8267 Project: Apache NiFi Issue Type: Bug Reporter: Pierre Villard Assignee: Pierre Villard In version 0.8.1, the cache size needs to be a positive number; that is the only verification being done: [https://github.com/hortonworks/registry/blob/0.8.1/schema-registry/client/src/main/java/com/hortonworks/registries/schemaregistry/client/SchemaRegistryClient.java#L1292] In more recent versions of the registry, the value is first converted into an integer: [https://github.com/hortonworks/registry/blob/0.9.0/schema-registry/client/src/main/java/com/hortonworks/registries/schemaregistry/client/SchemaRegistryClient.java#L1442] The problem is that we use a long in NiFi and the conversion fails. This needs to be changed. This applies to: * CLASSLOADER_CACHE_SIZE * CLASSLOADER_CACHE_EXPIRY_INTERVAL_SECS * SCHEMA_VERSION_CACHE_SIZE * SCHEMA_VERSION_CACHE_EXPIRY_INTERVAL_SECS -- This message was sent by Atlassian Jira (v8.3.4#803005)
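The incompatibility described above is purely one of numeric width: NiFi holds these cache settings as a long, while the 0.9.0 client converts the value to an integer. A hedged Python sketch of the narrowing constraint involved (helper names are mine; the actual fix is the Java change in PR #4850):

```python
# Sketch only: the newer SchemaRegistryClient converts these cache settings
# to a 32-bit integer, so NiFi must hand over values that fit in that range.
INT32_MIN, INT32_MAX = -2 ** 31, 2 ** 31 - 1

def narrow_to_java_int(value):
    """Return value as a Java-int-sized number, failing loudly if it cannot
    be represented - the kind of narrowing the NiFi side has to perform."""
    n = int(value)
    if not INT32_MIN <= n <= INT32_MAX:
        raise ValueError(f"{n} does not fit in a 32-bit Java int")
    return n

if __name__ == "__main__":
    # Typical cache sizes and expiry intervals fit comfortably in an int,
    # so narrowing them before the client call is safe.
    for name, value in [("CLASSLOADER_CACHE_SIZE", 1024),
                        ("SCHEMA_VERSION_CACHE_SIZE", 10000)]:
        print(name, narrow_to_java_int(value))
```

The explicit range check is the important part: a silent narrowing of an out-of-range long would corrupt the setting instead of failing fast.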