[jira] [Commented] (NIFI-7918) GetSFTP is throwing timeout issue
[ https://issues.apache.org/jira/browse/NIFI-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17213593#comment-17213593 ]

Rohit commented on NIFI-7918:
-

2020-10-14 09:48:31,753 ERROR [Timer-Driven Process Thread-3] o.a.nifi.processors.standard.GetSFTP GetSFTP[id=2202272a-0175-1000-b766-8dd61d129ae1] Unable to fetch listing from remote server due to java.net.ConnectException: Connection timed out (Connection timed out): java.net.ConnectException: Connection timed out (Connection timed out)
java.net.ConnectException: Connection timed out (Connection timed out)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:607)
	at net.schmizz.sshj.SocketClient.connect(SocketClient.java:126)
	at org.apache.nifi.processors.standard.util.SFTPTransfer.getSFTPClient(SFTPTransfer.java:595)
	at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:238)
	at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:201)
	at org.apache.nifi.processors.standard.GetFileTransfer.fetchListing(GetFileTransfer.java:280)
	at org.apache.nifi.processors.standard.GetFileTransfer.onTrigger(GetFileTransfer.java:126)
	at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
	at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
	at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
	at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
	at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

> GetSFTP is throwing timeout issue
> --
>
> Key: NIFI-7918
> URL: https://issues.apache.org/jira/browse/NIFI-7918
> Project: Apache NiFi
> Issue Type: Bug
> Reporter: Rohit
> Priority: Major
>
> Hi,
>
> Trying to connect to AWS SFTP, using GetSFTP or ListSFTP, but getting a timeout issue.
> Using NiFi 1.11 and Java 8, for your info.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
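The stack trace fails inside `java.net.Socket.connect`, before any SSH/SFTP handshake, so "Connection timed out" usually points at network-level blocking (AWS security groups, NACLs, proxies, or an on-premises firewall) rather than NiFi configuration. A minimal, hypothetical reachability check (not part of NiFi; host name is a placeholder) that reproduces the same failure mode:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SftpReachability {
    /** Attempts a plain TCP connect, as GetSFTP does before the SSH handshake. */
    public static boolean isReachable(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true; // TCP connect succeeded; the SFTP layer can be tested next
        } catch (IOException e) {
            // "Connection timed out" lands here, matching the log above
            return false;
        }
    }

    public static void main(String[] args) {
        // Hypothetical endpoint; replace with the actual AWS Transfer/SFTP host
        System.out.println(isReachable("sftp.example.com", 22, 2000));
    }
}
```

If this check fails from the NiFi host but succeeds from elsewhere, the issue is reachability, not the processor.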
[GitHub] [nifi] mtien-apache commented on a change in pull request #4593: NIFI-7584 Added OIDC logout mechanism.
mtien-apache commented on a change in pull request #4593:
URL: https://github.com/apache/nifi/pull/4593#discussion_r504375038

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/AccessResource.java

@@ -329,24 +359,221 @@ public Response oidcExchange(@Context HttpServletRequest httpServletRequest, @Co
 )
 public void oidcLogout(@Context HttpServletRequest httpServletRequest, @Context HttpServletResponse httpServletResponse) throws Exception {
 if (!httpServletRequest.isSecure()) {
-throw new IllegalStateException("User authentication/authorization is only supported when running over HTTPS.");
+throw new IllegalStateException(AUTHENTICATION_NOT_ENABLED_MSG);
 }
 if (!oidcService.isOidcEnabled()) {
-throw new IllegalStateException("OpenId Connect is not configured.");
+throw new IllegalStateException(OPEN_ID_CONNECT_SUPPORT_IS_NOT_CONFIGURED_MSG);
 }
-URI endSessionEndpoint = oidcService.getEndSessionEndpoint();
-String postLogoutRedirectUri = generateResourceUri("..", "nifi", "logout-complete");
+// Get the oidc discovery url
+String oidcDiscoveryUrl = properties.getOidcDiscoveryUrl();
+
+// Determine the logout method
+String logoutMethod = determineLogoutMethod(oidcDiscoveryUrl);
+
+switch (logoutMethod) {
+case REVOKE_ACCESS_TOKEN_LOGOUT:

Review comment:
Correct. Both `REVOKE_ACCESS_TOKEN_LOGOUT` and `ID_TOKEN_LOGOUT` make the same request to the ID Provider. But each will use a different component of the response, which will be determined in `oidc/logoutCallback`.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
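The fall-through behavior confirmed here can be sketched as a plain Java switch. The constant names come from the diff; their values and the returned action strings are illustrative stand-ins, not the actual NiFi code:

```java
public class LogoutDispatch {
    // Constant names taken from the diff; values here are illustrative only.
    static final String REVOKE_ACCESS_TOKEN_LOGOUT = "RevokeAccessTokenLogout";
    static final String ID_TOKEN_LOGOUT = "IdTokenLogout";
    static final String STANDARD_LOGOUT = "StandardLogout";

    /** Sketch of the switch: the first two case labels share one branch via fall-through. */
    static String dispatch(String logoutMethod) {
        switch (logoutMethod) {
            case REVOKE_ACCESS_TOKEN_LOGOUT:
            case ID_TOKEN_LOGOUT:
                // Same request to the ID Provider for both; the response is
                // consumed differently later, in the logout callback.
                return "request-idp-authorization";
            case STANDARD_LOGOUT:
            default:
                return "redirect-to-end-session-endpoint";
        }
    }
}
```

Stacking two `case` labels with no statements between them is the standard Java idiom for "these inputs run the same code", which is what the reviewer's question below is confirming.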
[GitHub] [nifi] exceptionfactory commented on a change in pull request #4587: NIFI-7895 - Fix NPE in ConsumeMQTT with truststore only SSL CS
exceptionfactory commented on a change in pull request #4587:
URL: https://github.com/apache/nifi/pull/4587#discussion_r504312716

## File path: nifi-nar-bundles/nifi-mqtt-bundle/nifi-mqtt-processors/src/main/java/org/apache/nifi/processors/mqtt/common/AbstractMQTTProcessor.java

@@ -288,13 +288,27 @@ public ValidationResult validate(String subject, String input, ValidationContext
 public static Properties transformSSLContextService(SSLContextService sslContextService){
 Properties properties = new Properties();
-properties.setProperty("com.ibm.ssl.protocol", sslContextService.getSslAlgorithm());
-properties.setProperty("com.ibm.ssl.keyStore", sslContextService.getKeyStoreFile());
-properties.setProperty("com.ibm.ssl.keyStorePassword", sslContextService.getKeyStorePassword());
-properties.setProperty("com.ibm.ssl.keyStoreType", sslContextService.getKeyStoreType());
-properties.setProperty("com.ibm.ssl.trustStore", sslContextService.getTrustStoreFile());
-properties.setProperty("com.ibm.ssl.trustStorePassword", sslContextService.getTrustStorePassword());
-properties.setProperty("com.ibm.ssl.trustStoreType", sslContextService.getTrustStoreType());
+if (sslContextService.getSslAlgorithm() != null) {

Review comment:
This is a very straightforward change, but would it be worth adding a unit test to verify that it resolves the problem? The MqttTestUtils class has a createSslProperties() method, so a similar method that sets only the trust store properties could be added to support the use case described in NIFI-7895.
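The fix in the diff sets each `com.ibm.ssl.*` property only when the corresponding value is non-null, which avoids the NPE: `Properties.setProperty` delegates to `Hashtable.put`, which throws `NullPointerException` on a null value. A standalone sketch of the guarded transform (the six parameters are hypothetical stand-ins for the `SSLContextService` getters, any of which may return null in a truststore-only configuration):

```java
import java.util.Properties;

public class SslPropertiesTransform {

    static void setIfPresent(Properties props, String key, String value) {
        if (value != null) {
            props.setProperty(key, value); // Hashtable.put rejects null values with an NPE
        }
    }

    // Hypothetical stand-in for transformSSLContextService: each argument maps to
    // one com.ibm.ssl.* property and may be null (e.g. truststore-only SSL context).
    static Properties transform(String trustStoreFile, String trustStorePassword, String trustStoreType,
                                String keyStoreFile, String keyStorePassword, String keyStoreType) {
        Properties properties = new Properties();
        setIfPresent(properties, "com.ibm.ssl.trustStore", trustStoreFile);
        setIfPresent(properties, "com.ibm.ssl.trustStorePassword", trustStorePassword);
        setIfPresent(properties, "com.ibm.ssl.trustStoreType", trustStoreType);
        setIfPresent(properties, "com.ibm.ssl.keyStore", keyStoreFile);
        setIfPresent(properties, "com.ibm.ssl.keyStorePassword", keyStorePassword);
        setIfPresent(properties, "com.ibm.ssl.keyStoreType", keyStoreType);
        return properties;
    }
}
```

A truststore-only call then yields a `Properties` object with exactly the three trust store entries and no exception, which is the behavior a unit test like the one suggested above would assert.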
[GitHub] [nifi] thenatog commented on a change in pull request #4593: NIFI-7584 Added OIDC logout mechanism.
thenatog commented on a change in pull request #4593:
URL: https://github.com/apache/nifi/pull/4593#discussion_r504226782

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/AccessResource.java

@@ -329,24 +359,221 @@ public Response oidcExchange(@Context HttpServletRequest httpServletRequest, @Co
 )
 public void oidcLogout(@Context HttpServletRequest httpServletRequest, @Context HttpServletResponse httpServletResponse) throws Exception {
 if (!httpServletRequest.isSecure()) {
-throw new IllegalStateException("User authentication/authorization is only supported when running over HTTPS.");
+throw new IllegalStateException(AUTHENTICATION_NOT_ENABLED_MSG);
 }
 if (!oidcService.isOidcEnabled()) {
-throw new IllegalStateException("OpenId Connect is not configured.");
+throw new IllegalStateException(OPEN_ID_CONNECT_SUPPORT_IS_NOT_CONFIGURED_MSG);
 }
-URI endSessionEndpoint = oidcService.getEndSessionEndpoint();
-String postLogoutRedirectUri = generateResourceUri("..", "nifi", "logout-complete");
+// Get the oidc discovery url
+String oidcDiscoveryUrl = properties.getOidcDiscoveryUrl();
+
+// Determine the logout method
+String logoutMethod = determineLogoutMethod(oidcDiscoveryUrl);
+
+switch (logoutMethod) {
+case REVOKE_ACCESS_TOKEN_LOGOUT:

Review comment:
Just to confirm, if the logoutMethod found is REVOKE_ACCESS_TOKEN_LOGOUT, we fall through to ID_TOKEN_LOGOUT, and if it's STANDARD_LOGOUT we do the default case? My reading of this is that the REVOKE_ACCESS_TOKEN_LOGOUT and ID_TOKEN_LOGOUT are the same (run the same code).
[GitHub] [nifi] thenatog commented on a change in pull request #4593: NIFI-7584 Added OIDC logout mechanism.
thenatog commented on a change in pull request #4593:
URL: https://github.com/apache/nifi/pull/4593#discussion_r504221263

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/AccessResource.java

@@ -329,24 +359,221 @@ public Response oidcExchange(@Context HttpServletRequest httpServletRequest, @Co
 )
 public void oidcLogout(@Context HttpServletRequest httpServletRequest, @Context HttpServletResponse httpServletResponse) throws Exception {
 if (!httpServletRequest.isSecure()) {
-throw new IllegalStateException("User authentication/authorization is only supported when running over HTTPS.");
+throw new IllegalStateException(AUTHENTICATION_NOT_ENABLED_MSG);
 }
 if (!oidcService.isOidcEnabled()) {
-throw new IllegalStateException("OpenId Connect is not configured.");
+throw new IllegalStateException(OPEN_ID_CONNECT_SUPPORT_IS_NOT_CONFIGURED_MSG);

Review comment:
Unlike other uses of the isOidcEnabled() method, this does not redirect to an error message page. Should it? Is there a reason it's different to the other uses?
[GitHub] [nifi] thenatog commented on a change in pull request #4593: NIFI-7584 Added OIDC logout mechanism.
thenatog commented on a change in pull request #4593: URL: https://github.com/apache/nifi/pull/4593#discussion_r504218997 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/AccessResource.java ## @@ -329,24 +359,221 @@ public Response oidcExchange(@Context HttpServletRequest httpServletRequest, @Co ) public void oidcLogout(@Context HttpServletRequest httpServletRequest, @Context HttpServletResponse httpServletResponse) throws Exception { if (!httpServletRequest.isSecure()) { -throw new IllegalStateException("User authentication/authorization is only supported when running over HTTPS."); +throw new IllegalStateException(AUTHENTICATION_NOT_ENABLED_MSG); } if (!oidcService.isOidcEnabled()) { -throw new IllegalStateException("OpenId Connect is not configured."); +throw new IllegalStateException(OPEN_ID_CONNECT_SUPPORT_IS_NOT_CONFIGURED_MSG); } -URI endSessionEndpoint = oidcService.getEndSessionEndpoint(); -String postLogoutRedirectUri = generateResourceUri("..", "nifi", "logout-complete"); +// Get the oidc discovery url +String oidcDiscoveryUrl = properties.getOidcDiscoveryUrl(); + +// Determine the logout method +String logoutMethod = determineLogoutMethod(oidcDiscoveryUrl); + +switch (logoutMethod) { +case REVOKE_ACCESS_TOKEN_LOGOUT: +case ID_TOKEN_LOGOUT: +// Make a request to the IdP +URI authorizationURI = oidcRequestAuthorizationCode(httpServletResponse, getOidcLogoutCallback()); +httpServletResponse.sendRedirect(authorizationURI.toString()); +break; +case STANDARD_LOGOUT: +default: +// Get the OIDC end session endpoint +URI endSessionEndpoint = oidcService.getEndSessionEndpoint(); +String postLogoutRedirectUri = generateResourceUri( "..", "nifi", "logout-complete"); + +if (endSessionEndpoint == null) { +httpServletResponse.sendRedirect(postLogoutRedirectUri); +} else { +URI logoutUri = UriBuilder.fromUri(endSessionEndpoint) +.queryParam("post_logout_redirect_uri", postLogoutRedirectUri) 
+.build(); +httpServletResponse.sendRedirect(logoutUri.toString()); +} +break; +} +} -if (endSessionEndpoint == null) { -// handle the case, where the OpenID Provider does not have an end session endpoint -httpServletResponse.sendRedirect(postLogoutRedirectUri); +@GET +@Consumes(MediaType.WILDCARD) +@Produces(MediaType.WILDCARD) +@Path("oidc/logoutCallback") +@ApiOperation( +value = "Redirect/callback URI for processing the result of the OpenId Connect logout sequence.", +notes = NON_GUARANTEED_ENDPOINT +) +public void oidcLogoutCallback(@Context HttpServletRequest httpServletRequest, @Context HttpServletResponse httpServletResponse) throws Exception { +// only consider user specific access over https +if (!httpServletRequest.isSecure()) { +forwardToMessagePage(httpServletRequest, httpServletResponse, AUTHENTICATION_NOT_ENABLED_MSG); +return; +} + +// ensure oidc is enabled +if (!oidcService.isOidcEnabled()) { +forwardToMessagePage(httpServletRequest, httpServletResponse, OPEN_ID_CONNECT_SUPPORT_IS_NOT_CONFIGURED_MSG); +return; +} + +final String oidcRequestIdentifier = getCookieValue(httpServletRequest.getCookies(), OIDC_REQUEST_IDENTIFIER); +if (oidcRequestIdentifier == null) { +forwardToMessagePage(httpServletRequest, httpServletResponse, "The login request identifier was not found in the request. Unable to continue."); +return; +} + +final com.nimbusds.openid.connect.sdk.AuthenticationResponse oidcResponse; +try { +oidcResponse = AuthenticationResponseParser.parse(getRequestUri()); +} catch (final ParseException e) { +logger.error("Unable to parse the redirect URI from the OpenId Connect Provider. Unable to continue logout process."); + +// remove the oidc request cookie +removeOidcRequestCookie(httpServletResponse); + +// forward to the error page +forwardToMessagePage(httpServletRequest, httpServletResponse, "Unable to parse the redirect URI from the OpenId Connect Provider. 
Unable to continue logout process."); +return; +} + +if (oidcResponse.indicatesSuccess()) { +final AuthenticationSuccessResponse successfulOidcResponse =
[jira] [Updated] (NIFI-7870) Fix anonymous access control for advanced UI resources
[ https://issues.apache.org/jira/browse/NIFI-7870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nathan Gough updated NIFI-7870:
---

Description:
-The X-Content-Type header was added in NiFi 1.12.0, which blocks resources in the browser if they do not have the content type added. It appears that some 'advanced UI' resources do not have the content type applied to their resources and are blocked from loading.-

On further inspection, it appears that explicitly disallowing anonymous access has resulted in some static resources in the NiFi advanced UI's WAR checking whether the anonymous user should be able to access them. The anonymous access was intended to be used on the NiFi API endpoints, and not static resources.

was: The X-Content-Type header was added in NiFi 1.12.0, which blocks resources in the browser if they do not have the content type added. It appears that some 'advanced UI' resources do not have the content type applied to their resources and are blocked from loading.

> Fix anonymous access control for advanced UI resources
> --
>
> Key: NIFI-7870
> URL: https://issues.apache.org/jira/browse/NIFI-7870
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core UI
> Affects Versions: 1.12.0, 1.12.1
> Reporter: Nathan Gough
> Assignee: Nathan Gough
> Priority: Critical
> Labels: UI, content-type, header, security
>
> -The X-Content-Type header was added in NiFi 1.12.0, which blocks resources in the browser if they do not have the content type added. It appears that some 'advanced UI' resources do not have the content type applied to their resources and are blocked from loading.-
>
> On further inspection, it appears that explicitly disallowing anonymous access has resulted in some static resources in the NiFi advanced UI's WAR checking whether the anonymous user should be able to access them. The anonymous access was intended to be used on the NiFi API endpoints, and not static resources.
[jira] [Updated] (NIFI-7870) Fix anonymous access control for advanced UI resources
[ https://issues.apache.org/jira/browse/NIFI-7870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nathan Gough updated NIFI-7870:
---

Summary: Fix anonymous access control for advanced UI resources (was: X-Content-Type missing for advanced UI resources)

> Fix anonymous access control for advanced UI resources
> --
>
> Key: NIFI-7870
> URL: https://issues.apache.org/jira/browse/NIFI-7870
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core UI
> Affects Versions: 1.12.0, 1.12.1
> Reporter: Nathan Gough
> Assignee: Nathan Gough
> Priority: Critical
> Labels: UI, content-type, header, security
>
> The X-Content-Type header was added in NiFi 1.12.0, which blocks resources in the browser if they do not have the content type added. It appears that some 'advanced UI' resources do not have the content type applied to their resources and are blocked from loading.
[jira] [Created] (NIFIREG-424) Releasing Apache NiFi Registry 0.8.0
Pierre Villard created NIFIREG-424:
--

Summary: Releasing Apache NiFi Registry 0.8.0
Key: NIFIREG-424
URL: https://issues.apache.org/jira/browse/NIFIREG-424
Project: NiFi Registry
Issue Type: Task
Affects Versions: 0.8.0
Reporter: Pierre Villard
Assignee: Pierre Villard
Fix For: 0.8.0

JIRA for the RM process in order to release Apache NiFi Registry 0.8.0.
[GitHub] [nifi] NissimShiman commented on pull request #4130: NIFI-6235 - Prioritizing standard content war loading order
NissimShiman commented on pull request #4130:
URL: https://github.com/apache/nifi/pull/4130#issuecomment-707929932

@anaylor It looks like a conflict has occurred with JettyServer.java over the time this has been sitting here. Could you rebase with the latest in main, and I will take a look at it.
[jira] [Resolved] (NIFI-7903) UpdateAttribute advanced configuration error "Unable to load the rule list and evalaution criteria."
[ https://issues.apache.org/jira/browse/NIFI-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kristen Guthier resolved NIFI-7903. --- Resolution: Not A Problem Missed requirement to build with Java 8 > UpdateAttribute advanced configuration error "Unable to load the rule list > and evalaution criteria." > > > Key: NIFI-7903 > URL: https://issues.apache.org/jira/browse/NIFI-7903 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.13.0 > Environment: Ubuntu 18.04, OpenJDK 11.0.8 >Reporter: Kristen Guthier >Priority: Major > > Opening UpdateAttribute>Configure>Advanced in nifi.1.13.0-SNAPSHOT causes > error "Unable to load the rule list and evalaution criteria." > If that error is ignored and a rule added/named, when a condition expression > is created, a configuration error is reported: >content="text/html;charset=utf-8"/> Error 500 > java.lang.NullPointerException HTTP ERROR 500 > java.lang.NullPointerException > URI:/nifi-update-attribute-ui-1.13.0-SNAPSHOT/api/criteria/rules/conditions > STATUS:500 > MESSAGE:java.lang.NullPointerException > SERVLET:api CAUSED > BY:java.lang.NullPointerException Caused > by:java.lang.NullPointerException at > org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389) > at > org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342) > at > org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229) > at > org.eclipse.jetty.servlet.ServletHolder$NotAsyncServlet.service(ServletHolder.java:1395) > at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:755) at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617) > at > org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317) > at > org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127) > at > 
org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91) > at > org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) > at > org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137) > at > org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) > at > org.apache.nifi.web.security.NiFiAuthenticationFilter.authenticate(NiFiAuthenticationFilter.java:100) > at > org.apache.nifi.web.security.NiFiAuthenticationFilter.doFilter(NiFiAuthenticationFilter.java:59) > at > org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) > at > org.apache.nifi.web.security.NiFiAuthenticationFilter.authenticate(NiFiAuthenticationFilter.java:100) > at > org.apache.nifi.web.security.NiFiAuthenticationFilter.doFilter(NiFiAuthenticationFilter.java:59) > at > org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) > at > org.apache.nifi.web.security.NiFiAuthenticationFilter.authenticate(NiFiAuthenticationFilter.java:100) > at > org.apache.nifi.web.security.NiFiAuthenticationFilter.doFilter(NiFiAuthenticationFilter.java:59) > at > org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) > at > org.apache.nifi.web.security.NiFiAuthenticationFilter.authenticate(NiFiAuthenticationFilter.java:100) > at > org.apache.nifi.web.security.NiFiAuthenticationFilter.doFilter(NiFiAuthenticationFilter.java:59) > at > org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) > at > org.apache.nifi.web.security.NiFiAuthenticationFilter.authenticate(NiFiAuthenticationFilter.java:100) > at > org.apache.nifi.web.security.NiFiAuthenticationFilter.doFilter(NiFiAuthenticationFilter.java:59) > at > 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) > at > org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:96) > at > org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) > at > org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) > at > org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214) > at > org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177) > at > org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) >
[GitHub] [nifi] NissimShiman commented on pull request #4563: NIFI-7738 Reverse Provenance Query
NissimShiman commented on pull request #4563:
URL: https://github.com/apache/nifi/pull/4563#issuecomment-707926327

@anaylor and @markobean Thank you for taking the time to review this PR. I greatly appreciate it! I added the points that you both pointed out and they are now reflected in the PR code.

Also, @anaylor: Adding a checkbox for an inverse query is an intuitive, user-friendly way to go, so that is why I chose this path, but adding an ! would have been a valid option as well. But you make a very good point that leads to a broader issue. It would be great if the Provenance Search had an option where users could compose their own query with regular expressions or Lucene query syntax (as the underlying queries are made in Lucene for most of the existing provenance implementations anyway). I am interested to know what community members would think about that.
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
fgerlits commented on a change in pull request #921:
URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r504119406

## File path: extensions/civetweb/processors/ListenHTTP.cpp

@@ -191,7 +206,11 @@ void ListenHTTP::onSchedule(core::ProcessContext *context, core::ProcessSessionF
 }
 server_.reset(new CivetServer(options, _, _));
- handler_.reset(new Handler(basePath, context, sessionFactory, std::move(authDNPattern), std::move(headersAsAttributesPattern)));
+
+ context->getProperty(BatchSize.getName(), batch_size_);
+ logger_->log_debug("ListenHTTP using %s: %d", BatchSize.getName(), batch_size_);

Review comment:
this should be `%zu`, too
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
fgerlits commented on a change in pull request #921:
URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r504119213

## File path: extensions/civetweb/processors/ListenHTTP.h

@@ -205,13 +193,17 @@ class ListenHTTP : public core::Processor {
 void notifyStop() override;

 private:
- // Logger
- std::shared_ptr logger_;
+ static const std::size_t DEFAULT_BUFFER_SIZE;
+ void processIncomingFlowFile(core::ProcessSession *session);
+ void processRequestBuffer(core::ProcessSession *session);
+
+ std::shared_ptr logger_;
 CivetCallbacks callbacks_;
 std::unique_ptr server_;
 std::unique_ptr handler_;
 std::string listeningPort;
+ std::size_t batch_size_;

Review comment:
I know it will be set in `onSchedule()`, but I think it would be better to initialize `batch_size_`, either here or in the constructor.
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
fgerlits commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r504112349

## File path: extensions/civetweb/processors/ListenHTTP.cpp ##

@@ -212,51 +235,80 @@ void ListenHTTP::onSchedule(core::ProcessContext *context, core::ProcessSessionF
 ListenHTTP::~ListenHTTP() = default;

 void ListenHTTP::onTrigger(core::ProcessContext *context, core::ProcessSession *session) {
-  std::shared_ptr<core::FlowFile> flow_file = session->get();
+  logger_->log_debug("OnTrigger ListenHTTP");
+  processIncomingFlowFile(session);
+  processRequestBuffer(session);
+}

-  // Do nothing if there are no incoming files
+void ListenHTTP::processIncomingFlowFile(core::ProcessSession *session) {
+  std::shared_ptr<core::FlowFile> flow_file = session->get();
   if (!flow_file) {
     return;
   }

   std::string type;
   flow_file->getAttribute("http.type", type);

-  if (type == "response_body") {
-    if (handler_) {
-      struct response_body response { "", "", "" };
-      ResponseBodyReadCallback cb(&response.body);
-      flow_file->getAttribute("filename", response.uri);
-      flow_file->getAttribute("mime.type", response.mime_type);
-      if (response.mime_type.empty()) {
-        logger_->log_warn("Using default mime type of application/octet-stream for response body file: %s", response.uri);
-        response.mime_type = "application/octet-stream";
-      }
-      session->read(flow_file, &cb);
-      handler_->set_response_body(std::move(response));
+  if (type == "response_body" && handler_) {
+    response_body response;
+    ResponseBodyReadCallback cb(&response.body);
+    flow_file->getAttribute("filename", response.uri);
+    flow_file->getAttribute("mime.type", response.mime_type);
+    if (response.mime_type.empty()) {
+      logger_->log_warn("Using default mime type of application/octet-stream for response body file: %s", response.uri);
+      response.mime_type = "application/octet-stream";
     }
+    session->read(flow_file, &cb);
+    handler_->setResponseBody(std::move(response));
   }

   session->remove(flow_file);
 }

-ListenHTTP::Handler::Handler(std::string base_uri, core::ProcessContext *context, core::ProcessSessionFactory *session_factory, std::string &&auth_dn_regex, std::string &&headers_as_attrs_regex)
+void ListenHTTP::processRequestBuffer(core::ProcessSession *session) {
+  std::size_t flow_file_count = 0;
+  for (; batch_size_ == 0 || batch_size_ > flow_file_count; ++flow_file_count) {
+    FlowFileBufferPair flow_file_buffer_pair;
+    if (!handler_->request_buffer.tryDequeue(flow_file_buffer_pair)) {
+      break;
+    }
+
+    auto flow_file = flow_file_buffer_pair.first;
+    session->add(flow_file);
+
+    if (flow_file_buffer_pair.second) {
+      WriteCallback callback(std::move(flow_file_buffer_pair.second));
+      session->write(flow_file, &callback);
+    }
+
+    session->transfer(flow_file, Success);
+  }
+
+  logger_->log_debug("ListenHTTP transferred %d flow files from HTTP request buffer", flow_file_count);
+}
+
+ListenHTTP::Handler::Handler(std::string base_uri, core::ProcessContext *context, std::string &&auth_dn_regex, std::string &&headers_as_attrs_regex, std::size_t buffer_size)
     : base_uri_(std::move(base_uri)),
       auth_dn_regex_(std::move(auth_dn_regex)),
       headers_as_attrs_regex_(std::move(headers_as_attrs_regex)),
-      logger_(logging::LoggerFactory<ListenHTTP::Handler>::getLogger()) {
-  process_context_ = context;
-  session_factory_ = session_factory;
+      process_context_(context),
+      logger_(logging::LoggerFactory<ListenHTTP::Handler>::getLogger()),
+      buffer_size_(buffer_size) {
 }

-void ListenHTTP::Handler::send_error_response(struct mg_connection *conn) {
+void ListenHTTP::Handler::sendHttp500(mg_connection* const conn) {
   mg_printf(conn, "HTTP/1.1 500 Internal Server Error\r\n"
-      "Content-Type: text/html\r\n"
-      "Content-Length: 0\r\n\r\n");
+            "Content-Type: text/html\r\n"
+            "Content-Length: 0\r\n\r\n");
 }

-void ListenHTTP::Handler::set_header_attributes(const mg_request_info *req_info, const std::shared_ptr<core::FlowFile> &flow_file) const {
+void ListenHTTP::Handler::sendHttp503(mg_connection* const conn) {
+  mg_printf(conn, "HTTP/1.1 503 Service Unavailable\r\n"
+            "Content-Type: text/html\r\n"
+            "Content-Length: 0\r\n\r\n");

Review comment: thanks

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
fgerlits commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r504112193

## File path: extensions/civetweb/processors/ListenHTTP.cpp ##

@@ -212,51 +235,80 @@ (excerpt around the commented lines)
+void ListenHTTP::processRequestBuffer(core::ProcessSession *session) {
+  std::size_t flow_file_count = 0;
+  for (; batch_size_ == 0 || batch_size_ > flow_file_count; ++flow_file_count) {
+    FlowFileBufferPair flow_file_buffer_pair;
+    if (!handler_->request_buffer.tryDequeue(flow_file_buffer_pair)) {
+      break;

Review comment: :+1:
[GitHub] [nifi-minifi-cpp] fgerlits opened a new pull request #925: MINIFICPP-1391 Upgrade XCode version and replace usages of set-env
fgerlits opened a new pull request #925: URL: https://github.com/apache/nifi-minifi-cpp/pull/925

GitHub Actions fixes:
- Remove the Xcode 10.3 job, and add a new job with Xcode 11.7
- Replace deprecated usages of `set-env` (see https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/)

https://issues.apache.org/jira/browse/MINIFICPP-1391

---

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically main)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible.
[jira] [Created] (MINIFICPP-1391) Update CI jobs to remove errors/warnings
Ferenc Gerlits created MINIFICPP-1391: - Summary: Update CI jobs to remove errors/warnings Key: MINIFICPP-1391 URL: https://issues.apache.org/jira/browse/MINIFICPP-1391 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Reporter: Ferenc Gerlits Assignee: Ferenc Gerlits GitHub Actions jobs fail because of some problems with Xcode 10.3. This is a very old version of Xcode, and we should no longer use it. Remove it and replace it with Xcode 11.7. Also, most of our workflows use {{set-env}}, which has been deprecated: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/. Replace them with the alternative shown in the link. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-7920) When enabling Controller Services, if Cluster Coordinator fails to enable, can result in service becoming unmodifiable with error "Revision ... is not the most up-to-date
[ https://issues.apache.org/jira/browse/NIFI-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-7920: - Status: Patch Available (was: Open) > When enabling Controller Services, if Cluster Coordinator fails to enable, > can result in service becoming unmodifiable with error "Revision ... is not > the most up-to-date revision. This component appears to have been modified" > -- > > Key: NIFI-7920 > URL: https://issues.apache.org/jira/browse/NIFI-7920 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > > I have two Controller Services: Service A and Service B. Service B references > Service A. > If I enable Service A, and select to enable all referencing components, > everything works well as long as all nodes succeed in enabling. > If a node other than the Cluster Coordinator fails, it will get kicked out of > the cluster. It will then rejoin, and everything will continue to work > smoothly. > But if the Cluster Coordinator gets kicked out of the cluster because it > fails to enable, the service gets into a state where it can't be modified. > Any attempt to modify it tells me an error: "Revision ... is not the most > up-to-date revision. This component appears to have been modified" -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] markap14 opened a new pull request #4602: NIFI-7920: When node that is connected to cluster is asked to reconne…
markap14 opened a new pull request #4602: URL: https://github.com/apache/nifi/pull/4602 …ct, ensure that it relinquishes role of Cluster Coordinator and Primary Node Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-** where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? 
- [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-7920) When enabling Controller Services, if Cluster Coordinator fails to enable, can result in service becoming unmodifiable with error "Revision ... is not the most up-to-date
Mark Payne created NIFI-7920: Summary: When enabling Controller Services, if Cluster Coordinator fails to enable, can result in service becoming unmodifiable with error "Revision ... is not the most up-to-date revision. This component appears to have been modified" Key: NIFI-7920 URL: https://issues.apache.org/jira/browse/NIFI-7920 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Mark Payne Assignee: Mark Payne I have two Controller Services: Service A and Service B. Service B references Service A. If I enable Service A, and select to enable all referencing components, everything works well as long as all nodes succeed in enabling. If a node other than the Cluster Coordinator fails, it will get kicked out of the cluster. It will then rejoin, and everything will continue to work smoothly. But if the Cluster Coordinator gets kicked out of the cluster because it fails to enable, the service gets into a state where it can't be modified. Any attempt to modify it tells me an error: "Revision ... is not the most up-to-date revision. This component appears to have been modified" -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
lordgamez commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r504056045

## File path: extensions/civetweb/processors/ListenHTTP.h ##

@@ -155,18 +160,15 @@ class ListenHTTP : public core::Processor {
   // Write callback for transferring data from HTTP request to content repo
   class WriteCallback : public OutputStreamCallback {
    public:
-    WriteCallback(struct mg_connection *conn, const struct mg_request_info *reqInfo);
+    WriteCallback(std::unique_ptr request_content);
     int64_t process(std::shared_ptr<io::BaseStream> stream);

    private:
-    // Logger
     std::shared_ptr<logging::Logger> logger_;
-
-    struct mg_connection *conn_;
-    const struct mg_request_info *req_info_;
+    std::shared_ptr request_content_;

Review comment: Replaced in [385be9e](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/385be9e942f776f41fe87ab2526fbe016b8d1d51)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
lordgamez commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r504052537

## File path: extensions/civetweb/processors/ListenHTTP.cpp ##

@@ -273,6 +325,34 @@ void ListenHTTP::Handler::set_header_attributes(const mg_request_info *req_info,
   }
 }

+bool ListenHTTP::Handler::enqueueRequest(mg_connection *conn, const mg_request_info *req_info, std::unique_ptr content_buffer) {
+  auto flow_file = std::make_shared<FlowFileRecord>();
+  auto flow_version = process_context_->getProcessorNode()->getFlowIdentifier();
+  if (flow_version != nullptr) {
+    flow_file->setAttribute(core::SpecialFlowAttribute::FLOW_ID, flow_version->getFlowId());
+  }
+
+  if (!flow_file) {

Review comment: Previously session->create() was used to create the FlowFileRecord, and now we allocate the record in this scope, so it shouldn't be needed to check the flow_file at all. I removed the check in [0427909](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/0427909155e707881d8640ca20e1c33b0c9d84f5)

## File path: extensions/civetweb/processors/ListenHTTP.cpp ## (same hunk, a few lines further down)

+  if (!flow_file) {
+    sendHttp500(conn);

Review comment: Removed in [0427909](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/0427909155e707881d8640ca20e1c33b0c9d84f5) due to the comment above.

## File path: extensions/civetweb/processors/ListenHTTP.cpp ## (same hunk)

+  if (!flow_file) {
+    sendHttp500(conn);
+    return true;

Review comment: Fixed in [0427909](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/0427909155e707881d8640ca20e1c33b0c9d84f5)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
lordgamez commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r504052806

## File path: extensions/civetweb/processors/ListenHTTP.h ##

@@ -112,20 +115,22 @@ class ListenHTTP : public core::Processor {
   }
 }

+std::size_t buffer_size_;
+utils::ConcurrentQueue<FlowFileBufferPair> request_buffer;

Review comment: I suppose it shouldn't be public, as only the consumer should be allowed to dequeue from the buffer. Fixed it in [0427909](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/0427909155e707881d8640ca20e1c33b0c9d84f5)

## File path: extensions/civetweb/processors/ListenHTTP.cpp ##

@@ -273,6 +325,34 @@ (excerpt from enqueueRequest)
+  setHeaderAttributes(req_info, flow_file);
+
+  if (buffer_size_ == 0 || request_buffer.size() < buffer_size_) {
+    request_buffer.enqueue(std::make_pair(std::move(flow_file), std::move(content_buffer)));
+  } else {
+    logger_->log_warn("ListenHTTP buffer is full");

Review comment: Added the request method and URI to the log; I could not find anything more specific to the message in the request info. It is already useful to have the information that the message was dropped. Fixed in [0427909](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/0427909155e707881d8640ca20e1c33b0c9d84f5)

## File path: extensions/civetweb/processors/ListenHTTP.cpp ##

@@ -456,19 +473,28 @@ (excerpt from WriteCallback)
     // Read a buffer of data from client
-    rlen = mg_read(conn_, &buffer[0], (size_t) rlen);
+    rlen = mg_read(conn, &buffer[0], (size_t) rlen);
     if (rlen <= 0) {
       break;
     }

     // Transfer buffer data to the output stream
-    stream->write(&buffer[0], gsl::narrow(rlen));
+    content_buffer->write(&buffer[0], gsl::narrow(rlen));
     nlen += rlen;
   }
-  return nlen;
+  return content_buffer;
+}
+
+ListenHTTP::WriteCallback::WriteCallback(std::unique_ptr request_content)
+  : logger_(logging::LoggerFactory<ListenHTTP::WriteCallback>::getLogger())
+  , request_content_(std::move(request_content)) {
+}

Review comment: Fixed in [0427909](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/0427909155e707881d8640ca20e1c33b0c9d84f5)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
lordgamez commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r504053097

## File path: extensions/civetweb/processors/ListenHTTP.cpp ##

@@ -456,19 +473,28 @@ (excerpt around the commented line)
+ListenHTTP::WriteCallback::WriteCallback(std::unique_ptr request_content)
+  : logger_(logging::LoggerFactory<ListenHTTP::WriteCallback>::getLogger())
+  , request_content_(std::move(request_content)) {
+}
+
+int64_t ListenHTTP::WriteCallback::process(std::shared_ptr<io::BaseStream> stream) {

Review comment: As I checked, this is an overridden virtual method, so the parameter type is fixed; but I added the `override` keyword to be more explicit in [0427909](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/0427909155e707881d8640ca20e1c33b0c9d84f5)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
lordgamez commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r504052241

## File path: extensions/civetweb/processors/ListenHTTP.cpp ##

@@ -212,51 +235,80 @@ (excerpt around the commented line)
+    session->transfer(flow_file, Success);
+  }
+
+  logger_->log_debug("ListenHTTP transferred %d flow files from HTTP request buffer", flow_file_count);

Review comment: Fixed in [0427909](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/0427909155e707881d8640ca20e1c33b0c9d84f5)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
lordgamez commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r504052023

## File path: extensions/civetweb/processors/ListenHTTP.cpp ##

@@ -191,7 +206,15 @@ void ListenHTTP::onSchedule(core::ProcessContext *context, core::ProcessSessionF
   }

   server_.reset(new CivetServer(options, _, _));
-  handler_.reset(new Handler(basePath, context, sessionFactory, std::move(authDNPattern), std::move(headersAsAttributesPattern)));
+
+  context->getProperty(BatchSize.getName(), batch_size_);
+  logger_->log_debug("ListenHTTP using %s: %d", BatchSize.getName(), batch_size_);
+
+  std::size_t buffer_size;
+  context->getProperty(BufferSize.getName(), buffer_size);
+  logger_->log_debug("ListenHTTP using %s: %d", BufferSize.getName(), buffer_size);
+
+  handler_.reset(new Handler(basePath, context, std::move(authDNPattern), std::move(headersAsAttributesPattern), buffer_size));

Review comment: Good point, fixed in [0427909](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/0427909155e707881d8640ca20e1c33b0c9d84f5)

## File path: extensions/civetweb/processors/ListenHTTP.cpp ##

@@ -212,51 +235,80 @@ (excerpt around the commented line)
+    FlowFileBufferPair flow_file_buffer_pair;
+    if (!handler_->request_buffer.tryDequeue(flow_file_buffer_pair)) {
+      break;

Review comment: This can happen only if the queue becomes empty before we reach the batch size. It is a normal use case, and we log the number of flow files we dequeued after the loop, so I don't think additional logging is needed here.
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #914: MINIFICPP-1323 Encrypt sensitive properties using libsodium
fgerlits commented on a change in pull request #914: URL: https://github.com/apache/nifi-minifi-cpp/pull/914#discussion_r504036753

## File path: libminifi/include/properties/Properties.h ##

@@ -65,7 +65,7 @@ class Properties {
    * @param value value in which to place the map's stored property value
    * @returns true if found, false otherwise.
    */
-  bool get(const std::string &key, std::string &value) const;
+  bool getString(const std::string &key, std::string &value) const;

Review comment: I think that by naming both functions in the parent and child classes `get()`, we would be setting a trap for our future selves, and for anyone who will work on this code. I agree `getString()` is not a great name, but I would prefer to rename it to anything other than `get()`. How about `value()`, `getProperty()`, `getPropertyValue()` or something similar?
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #914: MINIFICPP-1323 Encrypt sensitive properties using libsodium
fgerlits commented on a change in pull request #914: URL: https://github.com/apache/nifi-minifi-cpp/pull/914#discussion_r504012960

## File path: encrypt-config/ConfigFile.cpp ##

@@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ConfigFile.h"
+
+#include <algorithm>
+#include <array>
+#include <fstream>
+
+#include "utils/StringUtils.h"
+
+namespace {
+constexpr std::array DEFAULT_SENSITIVE_PROPERTIES{"nifi.security.client.pass.phrase",
+                                                  "nifi.rest.api.password"};
+constexpr const char* ADDITIONAL_SENSITIVE_PROPS_PROPERTY_NAME = "nifi.sensitive.props.additional.keys";
+}  // namespace
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace encrypt_config {
+
+ConfigLine::ConfigLine(std::string line) : line_(line) {
+  line = utils::StringUtils::trim(line);
+  if (line.empty() || line[0] == '#') { return; }
+
+  size_t index_of_first_equals_sign = line.find('=');
+  if (index_of_first_equals_sign == std::string::npos) { return; }
+
+  std::string key = utils::StringUtils::trim(line.substr(0, index_of_first_equals_sign));
+  if (key.empty()) { return; }
+
+  key_ = key;
+  value_ = utils::StringUtils::trim(line.substr(index_of_first_equals_sign + 1));
+}
+
+ConfigLine::ConfigLine(std::string key, std::string value)
+  : line_{utils::StringUtils::join_pack(key, "=", value)}, key_{std::move(key)}, value_{std::move(value)} {
+}
+
+void ConfigLine::updateValue(const std::string& value) {
+  auto pos = line_.find('=');
+  if (pos != std::string::npos) {
+    line_.replace(pos + 1, std::string::npos, value);
+    value_ = value;
+  } else {
+    throw std::invalid_argument{"Cannot update value in config line: it does not contain an = sign!"};
+  }
+}
+
+ConfigFile::ConfigFile(std::istream& input_stream) {
+  std::string line;
+  while (std::getline(input_stream, line)) {
+    config_lines_.push_back(ConfigLine{line});
+  }
+}
+
+ConfigFile::Lines::const_iterator ConfigFile::findKey(const std::string& key) const {
+  return std::find_if(config_lines_.cbegin(), config_lines_.cend(), [&key](const ConfigLine& config_line) {
+    return config_line.getKey() == key;
+  });
+}
+
+ConfigFile::Lines::iterator ConfigFile::findKey(const std::string& key) {
+  return std::find_if(config_lines_.begin(), config_lines_.end(), [&key](const ConfigLine& config_line) {
+    return config_line.getKey() == key;
+  });
+}
+
+bool ConfigFile::hasValue(const std::string& key) const {
+  const auto it = findKey(key);
+  return (it != config_lines_.end());
+}
+
+utils::optional<std::string> ConfigFile::getValue(const std::string& key) const {
+  const auto it = findKey(key);
+  if (it != config_lines_.end()) {
+    return it->getValue();
+  } else {
+    return utils::nullopt;
+  }
+}
+
+void ConfigFile::update(const std::string& key, const std::string& value) {
+  auto it = findKey(key);
+  if (it != config_lines_.end()) {
+    it->updateValue(value);
+  } else {
+    throw std::invalid_argument{"Key " + key + " not found in the config file!"};
+  }
+}
+
+void ConfigFile::insertAfter(const std::string& after_key, const std::string& key, const std::string& value) {
+  auto it = findKey(after_key);
+  if (it != config_lines_.end()) {
+    ++it;
+    config_lines_.emplace(it, key, value);
+  } else {
+    throw std::invalid_argument{"Key " + after_key + " not found in the config file!"};
+  }
+}
+
+void ConfigFile::append(const std::string& key, const std::string& value) {
+  config_lines_.emplace_back(key, value);
+}
+
+int ConfigFile::erase(const std::string& key) {
+  auto has_this_key = [&key](const ConfigLine& line) { return line.getKey() == key; };
+  auto new_end = std::remove_if(config_lines_.begin(), config_lines_.end(), has_this_key);
+  auto num_removed = std::distance(new_end, config_lines_.end());
+  config_lines_.erase(new_end, config_lines_.end());
+  return gsl::narrow<int>(num_removed);
+}
+
+void ConfigFile::writeTo(const std::string& file_path) const {
+  std::ofstream file{file_path};

Review comment: done: https://github.com/apache/nifi-minifi-cpp/pull/914/commits/ec7277ca5be16565d25738d23f898d7e46608c5c
[jira] [Resolved] (NIFI-7919) Bump JUnit dependency
[ https://issues.apache.org/jira/browse/NIFI-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard resolved NIFI-7919. -- Resolution: Fixed > Bump JUnit dependency > - > > Key: NIFI-7919 > URL: https://issues.apache.org/jira/browse/NIFI-7919 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Fix For: 1.13.0 > > > Bump JUnit from 4.13 to 4.13.1 > [https://github.com/apache/nifi/pull/4597] > [https://github.com/apache/nifi/pull/4598] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-7919) Bump JUnit dependency
Pierre Villard created NIFI-7919: Summary: Bump JUnit dependency Key: NIFI-7919 URL: https://issues.apache.org/jira/browse/NIFI-7919 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Pierre Villard Assignee: Pierre Villard Fix For: 1.13.0 Bump JUnit from 4.13 to 4.13.1 [https://github.com/apache/nifi/pull/4597] [https://github.com/apache/nifi/pull/4598] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-registry] bbende merged pull request #306: Bump junit from 4.12 to 4.13.1
bbende merged pull request #306: URL: https://github.com/apache/nifi-registry/pull/306 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (NIFIREG-423) Upgrade Junit
[ https://issues.apache.org/jira/browse/NIFIREG-423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende resolved NIFIREG-423. - Resolution: Fixed > Upgrade Junit > - > > Key: NIFIREG-423 > URL: https://issues.apache.org/jira/browse/NIFIREG-423 > Project: NiFi Registry > Issue Type: Task >Affects Versions: 0.7.0 >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Major > Fix For: 0.8.0 > > > https://github.com/apache/nifi-registry/pull/306 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFIREG-423) Upgrade Junit
Bryan Bende created NIFIREG-423: --- Summary: Upgrade Junit Key: NIFIREG-423 URL: https://issues.apache.org/jira/browse/NIFIREG-423 Project: NiFi Registry Issue Type: Task Affects Versions: 0.7.0 Reporter: Bryan Bende Assignee: Bryan Bende Fix For: 0.8.0 https://github.com/apache/nifi-registry/pull/306 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-registry] asfgit closed pull request #302: NIFIREG-415 Add Support for Unicode in X-ProxiedEntitiesChain
asfgit closed pull request #302: URL: https://github.com/apache/nifi-registry/pull/302 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-registry] bbende commented on pull request #302: NIFIREG-415 Add Support for Unicode in X-ProxiedEntitiesChain
bbende commented on pull request #302: URL: https://github.com/apache/nifi-registry/pull/302#issuecomment-707760615 Updates look good, all integration tests passing locally, going to merge, thanks! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (NIFIREG-415) Add support for character sets other than US-ASCII in X-ProxiedEntitiesChain Header
[ https://issues.apache.org/jira/browse/NIFIREG-415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende resolved NIFIREG-415. - Fix Version/s: 0.8.0 Resolution: Fixed > Add support for character sets other than US-ASCII in X-ProxiedEntitiesChain > Header > --- > > Key: NIFIREG-415 > URL: https://issues.apache.org/jira/browse/NIFIREG-415 > Project: NiFi Registry > Issue Type: Improvement >Reporter: Kevin Doran >Assignee: Kevin Doran >Priority: Major > Fix For: 0.8.0 > > Time Spent: 2.5h > Remaining Estimate: 0h > > This is a subtask for NIFI-7744 that captures the work that needs to be done > in NiFi Registry. > NiFi Registry is an important component of the solution, as the > {{nifi-registry-client}} lib that is used by NiFi (and potentially other Java > clients to Registry) contains logic for forming the > {{X-ProxiedEntitiesChain}} header when required. > NiFi Registry should be updated and released, and then NiFi can be updated > to use the newer version of {{nifi-registry-client}} dependency. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-7918) Getsftp is throwing time out issue
Rohit created NIFI-7918: --- Summary: Getsftp is throwing time out issue Key: NIFI-7918 URL: https://issues.apache.org/jira/browse/NIFI-7918 Project: Apache NiFi Issue Type: Bug Reporter: Rohit Hi, trying to connect to AWS SFTP using GetSFTP or ListSFTP, but getting a timeout issue. Using NiFi 11 version and Java 8, for your info. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-registry] dependabot[bot] opened a new pull request #306: Bump junit from 4.12 to 4.13.1
dependabot[bot] opened a new pull request #306: URL: https://github.com/apache/nifi-registry/pull/306 Bumps [junit](https://github.com/junit-team/junit4) from 4.12 to 4.13.1. Release notes Sourced from https://github.com/junit-team/junit4/releases;>junit's releases. JUnit 4.13.1 Please refer to the https://github.com/junit-team/junit/blob/HEAD/doc/ReleaseNotes4.13.1.md;>release notes for details. JUnit 4.13 Please refer to the https://github.com/junit-team/junit/blob/HEAD/doc/ReleaseNotes4.13.md;>release notes for details. JUnit 4.13 RC 2 Please refer to the https://github.com/junit-team/junit4/wiki/4.13-Release-Notes;>release notes for details. JUnit 4.13 RC 1 Please refer to the https://github.com/junit-team/junit4/wiki/4.13-Release-Notes;>release notes for details. JUnit 4.13 Beta 3 Please refer to the https://github.com/junit-team/junit4/wiki/4.13-Release-Notes;>release notes for details. JUnit 4.13 Beta 2 Please refer to the https://github.com/junit-team/junit4/wiki/4.13-Release-Notes;>release notes for details. JUnit 4.13 Beta 1 Please refer to the https://github.com/junit-team/junit4/wiki/4.13-Release-Notes;>release notes for details. 
Commits https://github.com/junit-team/junit4/commit/1b683f4ec07bcfa40149f086d32240f805487e66;>1b683f4 [maven-release-plugin] prepare release r4.13.1 https://github.com/junit-team/junit4/commit/ce6ce3aadc070db2902698fe0d3dc6729cd631f2;>ce6ce3a Draft 4.13.1 release notes https://github.com/junit-team/junit4/commit/c29dd8239d6b353e699397eb090a1fd27411fa24;>c29dd82 Change version to 4.13.1-SNAPSHOT https://github.com/junit-team/junit4/commit/1d174861f0b64f97ab0722bb324a760bfb02f567;>1d17486 Add a link to assertThrows in exception testing https://github.com/junit-team/junit4/commit/543905df72ff10364b94dda27552efebf3dd04e9;>543905d Use separate line for annotation in Javadoc https://github.com/junit-team/junit4/commit/510e906b391e7e46a346e1c852416dc7be934944;>510e906 Add sub headlines to class Javadoc https://github.com/junit-team/junit4/commit/610155b8c22138329f0723eec22521627dbc52ae;>610155b Merge pull request from GHSA-269g-pwp5-87pp https://github.com/junit-team/junit4/commit/b6cfd1e3d736cc2106242a8be799615b472c7fec;>b6cfd1e Explicitly wrap float parameter for consistency (https://github-redirect.dependabot.com/junit-team/junit4/issues/1671;>#1671) https://github.com/junit-team/junit4/commit/a5d205c7956dbed302b3bb5ecde5ba4299f0b646;>a5d205c Fix GitHub link in FAQ (https://github-redirect.dependabot.com/junit-team/junit4/issues/1672;>#1672) https://github.com/junit-team/junit4/commit/3a5c6b4d08f408c8ca6a8e0bae71a9bc5a8f97e8;>3a5c6b4 Deprecated since jdk9 replacing constructor instance of Double and Float (https://github-redirect.dependabot.com/junit-team/junit4/issues/1660;>#1660) Additional commits viewable in https://github.com/junit-team/junit4/compare/r4.12...r4.13.1;>compare view [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=junit:junit=maven=4.12=4.13.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/configuring-github-dependabot-security-updates) Dependabot will 
resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- Dependabot commands and options You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current
[GitHub] [nifi] adenes opened a new pull request #4601: NIFI-7917 Upgrade avro to 1.10 in nifi-nar-bundles/nifi-standard-serv…
adenes opened a new pull request #4601: URL: https://github.com/apache/nifi/pull/4601 …ices/nifi-record-serialization-services-bundle/nifi-record-serialization-services - Updated avro version to 1.10 - Replaced org.codehaus.jackson.* with com.fasterxml.jackson classes as previously the former was used and provided transitively through avro 1.8 - Added org.tukaani:xz:1.5 and org.xerial.snappy:snappy-java:1.1.1.3 dependencies explicitly as those were also coming through avro In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with **NIFI-** where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [x] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [x] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? 
- [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
fgerlits commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r503848429 ## File path: extensions/civetweb/processors/ListenHTTP.cpp ## @@ -212,51 +235,80 @@ void ListenHTTP::onSchedule(core::ProcessContext *context, core::ProcessSessionF ListenHTTP::~ListenHTTP() = default; void ListenHTTP::onTrigger(core::ProcessContext *context, core::ProcessSession *session) { - std::shared_ptr flow_file = session->get(); + logger_->log_debug("OnTrigger ListenHTTP"); + processIncomingFlowFile(session); + processRequestBuffer(session); +} - // Do nothing if there are no incoming files +void ListenHTTP::processIncomingFlowFile(core::ProcessSession *session) { + std::shared_ptr flow_file = session->get(); if (!flow_file) { return; } std::string type; flow_file->getAttribute("http.type", type); - if (type == "response_body") { - -if (handler_) { - struct response_body response { "", "", "" }; - ResponseBodyReadCallback cb(); - flow_file->getAttribute("filename", response.uri); - flow_file->getAttribute("mime.type", response.mime_type); - if (response.mime_type.empty()) { -logger_->log_warn("Using default mime type of application/octet-stream for response body file: %s", response.uri); -response.mime_type = "application/octet-stream"; - } - session->read(flow_file, ); - handler_->set_response_body(std::move(response)); + if (type == "response_body" && handler_) { +response_body response; +ResponseBodyReadCallback cb(); +flow_file->getAttribute("filename", response.uri); +flow_file->getAttribute("mime.type", response.mime_type); +if (response.mime_type.empty()) { + logger_->log_warn("Using default mime type of application/octet-stream for response body file: %s", response.uri); + response.mime_type = "application/octet-stream"; } +session->read(flow_file, ); +handler_->setResponseBody(std::move(response)); } session->remove(flow_file); } -ListenHTTP::Handler::Handler(std::string base_uri, core::ProcessContext *context, 
core::ProcessSessionFactory *session_factory, std::string &_dn_regex, std::string &_as_attrs_regex) +void ListenHTTP::processRequestBuffer(core::ProcessSession *session) { + std::size_t flow_file_count = 0; + for (; batch_size_ == 0 || batch_size_ > flow_file_count; ++flow_file_count) { +FlowFileBufferPair flow_file_buffer_pair; +if (!handler_->request_buffer.tryDequeue(flow_file_buffer_pair)) { + break; Review comment: when can this happen? should we log an error? ## File path: extensions/civetweb/processors/ListenHTTP.cpp ## @@ -191,7 +206,15 @@ void ListenHTTP::onSchedule(core::ProcessContext *context, core::ProcessSessionF } server_.reset(new CivetServer(options, _, _)); - handler_.reset(new Handler(basePath, context, sessionFactory, std::move(authDNPattern), std::move(headersAsAttributesPattern))); + + context->getProperty(BatchSize.getName(), batch_size_); + logger_->log_debug("ListenHTTP using %s: %d", BatchSize.getName(), batch_size_); + + std::size_t buffer_size; + context->getProperty(BufferSize.getName(), buffer_size); + logger_->log_debug("ListenHTTP using %s: %d", BufferSize.getName(), buffer_size); + + handler_.reset(new Handler(basePath, context, std::move(authDNPattern), std::move(headersAsAttributesPattern), buffer_size)); Review comment: why does the `Handler` need the `buffer_size` parameter? 
it is already set in the context ## File path: extensions/civetweb/processors/ListenHTTP.cpp ## @@ -273,6 +325,34 @@ void ListenHTTP::Handler::set_header_attributes(const mg_request_info *req_info, } } +bool ListenHTTP::Handler::enqueueRequest(mg_connection *conn, const mg_request_info *req_info, std::unique_ptr content_buffer) { + auto flow_file = std::make_shared(); + auto flow_version = process_context_->getProcessorNode()->getFlowIdentifier(); + if (flow_version != nullptr) { +flow_file->setAttribute(core::SpecialFlowAttribute::FLOW_ID, flow_version->getFlowId()); + } + + if (!flow_file) { +sendHttp500(conn); Review comment: it would be good to log an error or warning here ## File path: extensions/civetweb/processors/ListenHTTP.cpp ## @@ -273,6 +325,34 @@ void ListenHTTP::Handler::set_header_attributes(const mg_request_info *req_info, } } +bool ListenHTTP::Handler::enqueueRequest(mg_connection *conn, const mg_request_info *req_info, std::unique_ptr content_buffer) { + auto flow_file = std::make_shared(); + auto flow_version = process_context_->getProcessorNode()->getFlowIdentifier(); + if (flow_version != nullptr) { +flow_file->setAttribute(core::SpecialFlowAttribute::FLOW_ID, flow_version->getFlowId()); + } + + if (!flow_file) { Review comment: this should be checked earlier, before `flow_file` is
[GitHub] [nifi-minifi-cpp] szaszm closed pull request #923: MINIFICPP-1379 - Revisit Identifier::parse
szaszm closed pull request #923: URL: https://github.com/apache/nifi-minifi-cpp/pull/923 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-7917) Upgrade avro to 1.10 in nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services
Denes Arvay created NIFI-7917: - Summary: Upgrade avro to 1.10 in nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services Key: NIFI-7917 URL: https://issues.apache.org/jira/browse/NIFI-7917 Project: Apache NiFi Issue Type: Improvement Reporter: Denes Arvay Assignee: Denes Arvay When trying to convert an avro record with ConvertRecord processor I bumped into the following exception: {code:java} 2020-10-12 15:04:57,017 ERROR [Timer-Driven Process Thread-1] o.a.n.processors.standard.ConvertRecord ConvertRecord[id=1e2b99d5-0175-1000-531f-0cf5a5ae8490] Failed to process StandardFlowFileRecord[uuid=6b7d3df3-fc7d-4606-acfe-87fcafcdebc9,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1602529363240-1, container=default, section=1], offset=656, length=332],offset=0,name=avro_test_29092020104944.avro,size=332]; will route to failure: org.apache.avro.file.DataFileWriter$AppendWriteException: org.apache.avro.AvroRuntimeException: Unknown datum type org.apache.avro.JsonProperties$Null: org.apache.avro.JsonProperties$Null@24418f28 org.apache.avro.file.DataFileWriter$AppendWriteException: org.apache.avro.AvroRuntimeException: Unknown datum type org.apache.avro.JsonProperties$Null: org.apache.avro.JsonProperties$Null@24418f28 at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308) at org.apache.nifi.avro.WriteAvroResultWithSchema.writeRecord(WriteAvroResultWithSchema.java:61) at org.apache.nifi.serialization.AbstractRecordSetWriter.write(AbstractRecordSetWriter.java:59) at org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:153) at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2887) at org.apache.nifi.processors.standard.AbstractRecordProcessor.onTrigger(AbstractRecordProcessor.java:122) at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1162) at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:209) at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117) at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.avro.AvroRuntimeException: Unknown datum type org.apache.avro.JsonProperties$Null: org.apache.avro.JsonProperties$Null@24418f28 at org.apache.avro.generic.GenericData.getSchemaName(GenericData.java:741) at org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:706) at org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192) at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73) at org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153) at org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143) at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60) at 
org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302) ... 17 common frames omitted {code} This is caused by AVRO-1954 which has been fixed in avro 1.9. I'd suggest upgrading the avro version in the {{org.apache.nifi.avro.WriteAvroResultWithSchema}}'s module (\{{nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services}}) to the latest avro (1.10.0). Currently 1.8.1 is in use. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1390) Create DeleteS3Object processor
Gabor Gyimesi created MINIFICPP-1390: Summary: Create DeleteS3Object processor Key: MINIFICPP-1390 URL: https://issues.apache.org/jira/browse/MINIFICPP-1390 Project: Apache NiFi MiNiFi C++ Issue Type: New Feature Reporter: Gabor Gyimesi Assignee: Gabor Gyimesi Create new processor to delete existing S3 object with similar functionality defined in Nifi: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-aws-nar/1.12.1/org.apache.nifi.processors.aws.s3.DeleteS3Object/index.html -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
lordgamez commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r503859521 ## File path: extensions/civetweb/processors/ListenHTTP.cpp ## @@ -368,7 +368,7 @@ bool ListenHTTP::Handler::handlePost(CivetServer *server, struct mg_connection * // Always send 100 Continue, as allowed per standard to minimize client delay (https://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html) mg_printf(conn, "HTTP/1.1 100 Continue\r\n\r\n"); - return enqueueRequest(conn, req_info, createContentBuffer(conn, req_info)); + return enqueueRequest(conn, req_info, std::move(createContentBuffer(conn, req_info))); Review comment: Fixed in [750d5d1](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/750d5d14c627818cacfc6292856bbf9e10f6ce30) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] deekim commented on pull request #4595: NIFI-7916: refactor CountText to ignore whitespace words when counting
deekim commented on pull request #4595: URL: https://github.com/apache/nifi/pull/4595#issuecomment-707661341 Done! Appreciate the review @pvillard31 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] deekim edited a comment on pull request #4595: NIFI-7916: refactor CountText to ignore whitespace words when counting
deekim edited a comment on pull request #4595: URL: https://github.com/apache/nifi/pull/4595#issuecomment-707661341 > Hey @deekim - thanks for your contribution. Can you create a dedicated JIRA on https://issues.apache.org/jira/browse/NIFI and reference it in the title of your pull request? Thanks! Done! Appreciate the review @pvillard31 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-7916) refactor CountText to ignore whitespace words when counting
[ https://issues.apache.org/jira/browse/NIFI-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dan Kim updated NIFI-7916: -- Description: Was going through the repo to find some {{todos}} to clean up, and this one seemed pretty straightforward. {quote} TODO: Trim individual words before counting to eliminate whitespace words? {quote} was: Was going through the repo to find some {{todo}}s to clean up, and this one seemed pretty straightforward. {quote} TODO: Trim individual words before counting to eliminate whitespace words? {quote} > refactor CountText to ignore whitespace words when counting > --- > > Key: NIFI-7916 > URL: https://issues.apache.org/jira/browse/NIFI-7916 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Dan Kim >Assignee: Dan Kim >Priority: Minor > Labels: pull-request-available > Original Estimate: 2h > Time Spent: 1h > Remaining Estimate: 1h > > Was going through the repo to find some {{todos}} to clean up, and this one > seemed pretty straightforward. > {quote} > TODO: Trim individual words before counting to eliminate whitespace words? > {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-7916) refactor CountText to ignore whitespace words when counting
Dan Kim created NIFI-7916: - Summary: refactor CountText to ignore whitespace words when counting Key: NIFI-7916 URL: https://issues.apache.org/jira/browse/NIFI-7916 Project: Apache NiFi Issue Type: Improvement Reporter: Dan Kim Assignee: Dan Kim Was going through the repo to find some {{todo}}s to clean up, and this one seemed pretty straightforward. {quote} TODO: Trim individual words before counting to eliminate whitespace words? {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-7854) Allow SQS processors to pass proxy username and password
[ https://issues.apache.org/jira/browse/NIFI-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-7854: - Assignee: Pierre Villard Status: Patch Available (was: Open) > Allow SQS processors to pass proxy username and password > > > Key: NIFI-7854 > URL: https://issues.apache.org/jira/browse/NIFI-7854 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Juan C. Sequeiros >Assignee: Pierre Villard >Priority: Major > > Currently PUT / GET / DELETE SQS processors can take: > > Proxy Host > Proxy Host Port > > As configuration settings. > > They do not have: > > Proxy Username > Proxy Password > > > Please allow these processors to take: > > Proxy Username > Proxy Password > > and/or > > StandardProxyConfigurationService -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] pvillard31 opened a new pull request #4600: NIFI-7854 - Add proxy username/password support to SQS processors
pvillard31 opened a new pull request #4600: URL: https://github.com/apache/nifi/pull/4600 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-** where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? 
### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
szaszm commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r503848631 ## File path: extensions/civetweb/processors/ListenHTTP.cpp ## @@ -368,7 +368,7 @@ bool ListenHTTP::Handler::handlePost(CivetServer *server, struct mg_connection * // Always send 100 Continue, as allowed per standard to minimize client delay (https://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html) mg_printf(conn, "HTTP/1.1 100 Continue\r\n\r\n"); - return enqueueRequest(conn, req_info, createContentBuffer(conn, req_info)); + return enqueueRequest(conn, req_info, std::move(createContentBuffer(conn, req_info))); Review comment: No need to cast an rvalue to an rvalue reference, i.e. call `std::move` on a temporary.
[jira] [Commented] (NIFI-7899) InvokeHTTP does not timeout
[ https://issues.apache.org/jira/browse/NIFI-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17213030#comment-17213030 ] Jens M Kofoed commented on NIFI-7899: - I've tried to debug our setup. I've changed the run schedule from 0 to 1 sec in order not to stress the web server, and I also enabled debug logging for org.apache.nifi.processors.standard.InvokeHTTP. Within the next 5-10 minutes over 300 files ran through with no problems. Suddenly I can see that a request to the webpage is not followed by a response and the processor does not time out. The InvokeHTTP processor in the GUI is just hanging, running with 1 task and doing nothing. > InvokeHTTP does not timeout > --- > > Key: NIFI-7899 > URL: https://issues.apache.org/jira/browse/NIFI-7899 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.11.4 > Environment: Ubuntu 18.04. Nifi 1.11.4. > 4 core, 8GB mem. Java set to 4GB mem >Reporter: Jens M Kofoed >Priority: Major > > We have some issues with the InvokeHTTP processor. It "randomly" hangs in the > process without timing out. The processor shows that there is 1 task running > (upper right corner) and it can run for hours without any output, but with > multiple flowfiles in the queue. > Trying to stop it takes forever, so I have to terminate it, restart the > processor, and everything works fine for a long time, until the next time it hangs. > Our configuration of the processor is as follows: > Penalty: 30s, Yield: 1s, > Scheduling: timer driven, Concurrent Task: 1, Run Schedule: 0, Run duration: > 0 > HTTP Method: GET > Connection timeout: 5s > Read timeout: 15s > Idle Timeout: 5m > Max idle Connection: 5 > I could not find any other bug reports here. 
but there are other people > mentioning the same issue: > [https://webcache.googleusercontent.com/search?q=cache:LMqcymQiM-IJ:https://community.cloudera.com/t5/Support-Questions/InvokeHTTP-randomly-hangs/td-p/296184+=1=da=clnk=dk] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi-minifi-cpp] fgerlits commented on pull request #924: MINIFICPP-1389 - Upgrade librdkafka to version 1.5.0
fgerlits commented on pull request #924: URL: https://github.com/apache/nifi-minifi-cpp/pull/924#issuecomment-707646711 I think we need to update the LICENSE file with the new versions of the licenses.
[jira] [Created] (NIFI-7915) Components using okhttp 3.x Broken on Nifi 1.9.2
Jake Dalli created NIFI-7915: Summary: Components using okhttp 3.x Broken on Nifi 1.9.2 Key: NIFI-7915 URL: https://issues.apache.org/jira/browse/NIFI-7915 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.9.2 Reporter: Jake Dalli Hi, As of the latest Java version (8u252), components using the okhttp3 dependency are broken with the following error: {code:java} clientBuilder.sslSocketFactory(SSLSocketFactory) not supported on JDK 9 {code} Following some investigation, it turns out to be an issue with the dependency: [https://github.com/square/okhttp/issues/6019]
[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #910: MINIFICPP-1375 Windows: Redistribute Universal CRT DLLs with our MSI
arpadboda closed pull request #910: URL: https://github.com/apache/nifi-minifi-cpp/pull/910
[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #918: MINIFICPP-1383 fix intdiv_ceil
arpadboda closed pull request #918: URL: https://github.com/apache/nifi-minifi-cpp/pull/918
[GitHub] [nifi-minifi-cpp] hunyadi-dev opened a new pull request #924: MINIFICPP-1389 - Upgrade librdkafka to version 1.5.0
hunyadi-dev opened a new pull request #924: URL: https://github.com/apache/nifi-minifi-cpp/pull/924 The current version of librdkafka we use has cherry-picked patches applied to it and also does not support transactions, which are required for the implementation of the ConsumeKafka processor.
[GitHub] [nifi] pvillard31 commented on pull request #4595: refactor CountText to ignore whitespace words when counting
pvillard31 commented on pull request #4595: URL: https://github.com/apache/nifi/pull/4595#issuecomment-707619071 Hey @deekim - thanks for your contribution. Can you create a dedicated JIRA on https://issues.apache.org/jira/browse/NIFI and reference it in the title of your pull request? Thanks!
[GitHub] [nifi] pvillard31 merged pull request #4597: Bump junit from 4.13 to 4.13.1 in /nifi-nar-bundles/nifi-ranger-bundle/nifi-ranger-plugin
pvillard31 merged pull request #4597: URL: https://github.com/apache/nifi/pull/4597
[GitHub] [nifi] pvillard31 merged pull request #4598: Bump junit from 4.13 to 4.13.1
pvillard31 merged pull request #4598: URL: https://github.com/apache/nifi/pull/4598
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
lordgamez commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r503799392 ## File path: extensions/civetweb/processors/ListenHTTP.cpp ## @@ -434,16 +456,11 @@ void ListenHTTP::Handler::write_body(mg_connection *conn, const mg_request_info } } -ListenHTTP::WriteCallback::WriteCallback(struct mg_connection *conn, const struct mg_request_info *reqInfo) -: logger_(logging::LoggerFactory::getLogger()) { - conn_ = conn; - req_info_ = reqInfo; -} - -int64_t ListenHTTP::WriteCallback::process(std::shared_ptr stream) { +std::shared_ptr ListenHTTP::Handler::createContentBuffer(struct mg_connection *conn, const struct mg_request_info *req_info) { + auto content_buffer = std::make_shared(); Review comment: Fixed in [edf5b90](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/edf5b9095ca7eeea4a1b2db4ba0ba58bec6632d0)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
lordgamez commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r503799217 ## File path: extensions/civetweb/processors/ListenHTTP.h ## @@ -43,15 +43,17 @@ namespace processors { class ListenHTTP : public core::Processor { public: + using FlowFileBufferPair=std::pair, std::shared_ptr>; Review comment: Fixed in [edf5b90](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/edf5b9095ca7eeea4a1b2db4ba0ba58bec6632d0)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
lordgamez commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r503799050 ## File path: extensions/civetweb/processors/ListenHTTP.cpp ## @@ -273,6 +325,34 @@ void ListenHTTP::Handler::set_header_attributes(const mg_request_info *req_info, } } +bool ListenHTTP::Handler::enqueueRequest(mg_connection *conn, const mg_request_info *req_info, std::shared_ptr content_buffer) { + auto flow_file = std::make_shared(); + auto flow_version = process_context_->getProcessorNode()->getFlowIdentifier(); + if (flow_version != nullptr) { +flow_file->setAttribute(core::SpecialFlowAttribute::FLOW_ID, flow_version->getFlowId()); + } + + if (!flow_file) { +sendHttp500(conn); +return true; + } + + setHeaderAttributes(req_info, flow_file); + + if (buffer_size_ == 0 || request_buffer.size() < buffer_size_) { +request_buffer.enqueue(std::make_pair(std::move(flow_file), std::move(content_buffer))); + } else { +logger_->log_error("ListenHTTP buffer is full"); Review comment: Fixed in [edf5b90](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/edf5b9095ca7eeea4a1b2db4ba0ba58bec6632d0)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
lordgamez commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r503798865 ## File path: extensions/civetweb/processors/ListenHTTP.cpp ## @@ -212,51 +233,82 @@ void ListenHTTP::onSchedule(core::ProcessContext *context, core::ProcessSessionF ListenHTTP::~ListenHTTP() = default; void ListenHTTP::onTrigger(core::ProcessContext *context, core::ProcessSession *session) { - std::shared_ptr flow_file = session->get(); + logger_->log_debug("OnTrigger ListenHTTP"); + processIncomingFlowFile(session); + processRequestBuffer(session); +} - // Do nothing if there are no incoming files +void ListenHTTP::processIncomingFlowFile(core::ProcessSession *session) { + std::shared_ptr flow_file = session->get(); if (!flow_file) { return; } std::string type; flow_file->getAttribute("http.type", type); - if (type == "response_body") { - -if (handler_) { - struct response_body response { "", "", "" }; - ResponseBodyReadCallback cb(); - flow_file->getAttribute("filename", response.uri); - flow_file->getAttribute("mime.type", response.mime_type); - if (response.mime_type.empty()) { -logger_->log_warn("Using default mime type of application/octet-stream for response body file: %s", response.uri); -response.mime_type = "application/octet-stream"; - } - session->read(flow_file, ); - handler_->set_response_body(std::move(response)); + if (type == "response_body" && handler_) { +response_body response; +ResponseBodyReadCallback cb(); +flow_file->getAttribute("filename", response.uri); +flow_file->getAttribute("mime.type", response.mime_type); +if (response.mime_type.empty()) { + logger_->log_warn("Using default mime type of application/octet-stream for response body file: %s", response.uri); + response.mime_type = "application/octet-stream"; } +session->read(flow_file, ); +handler_->setResponseBody(std::move(response)); } session->remove(flow_file); } -ListenHTTP::Handler::Handler(std::string base_uri, core::ProcessContext *context, 
core::ProcessSessionFactory *session_factory, std::string &_dn_regex, std::string &_as_attrs_regex) +void ListenHTTP::processRequestBuffer(core::ProcessSession *session) { + std::size_t flow_file_count = 0; + while (batch_size_ == 0 || batch_size_ > flow_file_count) { Review comment: I kept the initialization in its original scope as it is logged outside the loop, but I moved the increment to the for loop in [edf5b90](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/edf5b9095ca7eeea4a1b2db4ba0ba58bec6632d0)
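The batch-drain pattern under discussion can be sketched as follows (hypothetical names; the PR's real code drains a concurrent queue of flow-file/content pairs). The count is declared outside the loop so it can be logged after draining, while the increment sits in the for statement, as the comment describes:

```cpp
#include <cassert>
#include <cstddef>
#include <queue>

// Illustrative sketch: process at most batch_size queued requests per trigger,
// where a batch_size of 0 means "drain everything available".
template <typename T, typename Fn>
std::size_t drainBatch(std::queue<T>& buffer, std::size_t batch_size, Fn process) {
    std::size_t flow_file_count = 0;  // lives outside the loop so the caller can log it
    for (; batch_size == 0 || flow_file_count < batch_size; ++flow_file_count) {
        if (buffer.empty()) break;    // nothing left to process this trigger
        process(buffer.front());
        buffer.pop();
    }
    return flow_file_count;
}
```

For example, draining a three-element queue with a batch size of 2 processes two items and leaves one for the next trigger.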
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #921: MINIFICPP-1388 Introduce buffer for HTTP requests in ListenHTTP processor
lordgamez commented on a change in pull request #921: URL: https://github.com/apache/nifi-minifi-cpp/pull/921#discussion_r503798155 ## File path: extensions/civetweb/processors/ListenHTTP.cpp ## @@ -62,6 +62,17 @@ core::Property ListenHTTP::HeadersAsAttributesRegex("HTTP Headers to receive as " should be passed along as FlowFile attributes", ""); +core::Property ListenHTTP::BatchSize( +core::PropertyBuilder::createProperty("Batch Size") +->withDescription("Maximum number of buffered requests to be processed in a single batch. If set to zero all buffered requests are processed.") +->withDefaultValue(0)->build()); + +core::Property ListenHTTP::BufferSize( +core::PropertyBuilder::createProperty("Buffer Size") +->withDescription("Maximum number of HTTP Requests allowed to be buffered before processing them when the processor is triggered. " + "If the buffer full, the request is refused. If set to zero the buffer is unlimited.") +->withDefaultValue(0)->build()); + Review comment: That seems reasonable, thanks for the analysis. I changed the default to 20k in [edf5b90](https://github.com/apache/nifi-minifi-cpp/pull/921/commits/edf5b9095ca7eeea4a1b2db4ba0ba58bec6632d0)
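The "Buffer Size" semantics described in the diff boil down to the following sketch (hypothetical types; the PR's real buffer holds flow-file/content pairs in a concurrent queue): refuse new requests once the buffer is at capacity, with a limit of 0 meaning unbounded.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <string>

// Sketch of the "Buffer Size" semantics: refuse a request once the buffer
// holds buffer_size items; a buffer_size of 0 means the buffer is unbounded.
struct RequestBuffer {
    std::size_t buffer_size;           // 0 = unlimited
    std::deque<std::string> items;

    bool enqueue(std::string request) {
        if (buffer_size != 0 && items.size() >= buffer_size) {
            return false;              // caller logs "buffer is full" and refuses the request
        }
        items.push_back(std::move(request));
        return true;
    }
};
```

With a limit of 2, the third enqueue is refused; with a limit of 0, enqueues always succeed, which is why a sane non-zero default (20k in the linked commit) matters.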
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #917: MINIFICPP-1380 - Batch behavior for CompressContent and MergeContent processors
adamdebreceni commented on a change in pull request #917: URL: https://github.com/apache/nifi-minifi-cpp/pull/917#discussion_r503770589 ## File path: extensions/libarchive/CompressContent.h ## @@ -33,19 +33,14 @@ #include "core/Property.h" #include "core/logging/LoggerConfiguration.h" #include "io/ZlibStream.h" +#include "utils/Enum.h" namespace org { namespace apache { namespace nifi { namespace minifi { namespace processors { -#define COMPRESSION_FORMAT_ATTRIBUTE "use mime.type attribute" -#define COMPRESSION_FORMAT_GZIP "gzip" -#define COMPRESSION_FORMAT_BZIP2 "bzip2" -#define COMPRESSION_FORMAT_XZ_LZMA2 "xz-lzma2" -#define COMPRESSION_FORMAT_LZMA "lzma" - #define MODE_COMPRESS "compress" #define MODE_DECOMPRESS "decompress" Review comment: done, made it into an enum
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #917: MINIFICPP-1380 - Batch behavior for CompressContent and MergeContent processors
adamdebreceni commented on a change in pull request #917: URL: https://github.com/apache/nifi-minifi-cpp/pull/917#discussion_r503770153 ## File path: extensions/libarchive/CompressContent.cpp ## @@ -81,39 +101,42 @@ void CompressContent::initialize() { } void CompressContent::onSchedule(core::ProcessContext *context, core::ProcessSessionFactory *sessionFactory) { - std::string value; context->getProperty(CompressLevel.getName(), compressLevel_); context->getProperty(CompressMode.getName(), compressMode_); - context->getProperty(CompressFormat.getName(), compressFormat_); + + { +std::string compressFormatStr; +context->getProperty(CompressFormat.getName(), compressFormatStr); +std::transform(compressFormatStr.begin(), compressFormatStr.end(), compressFormatStr.begin(), ::tolower); +compressFormat_ = ExtendedCompressionFormat::parse(compressFormatStr.c_str()); +if (!compressFormat_) { + throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Unknown compression format: \"" + compressFormatStr + "\""); +} + } Review comment: done, moved the checking into the extraction (getProperty) itself
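The validate-at-schedule-time pattern shown in the diff can be sketched in isolation (simplified, with hypothetical names; the PR later folds this check into `getProperty` itself, per the comment): lower-case the property string, map it to an enum, and fail scheduling on an unknown value.

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <map>
#include <stdexcept>
#include <string>

enum class CompressionFormat { GZIP, BZIP2, LZMA };

// Case-insensitive parse of a property string into an enum; throwing here is
// the moral equivalent of throwing PROCESS_SCHEDULE_EXCEPTION in onSchedule.
static CompressionFormat parseFormat(std::string s) {
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
    static const std::map<std::string, CompressionFormat> table{
        {"gzip", CompressionFormat::GZIP},
        {"bzip2", CompressionFormat::BZIP2},
        {"lzma", CompressionFormat::LZMA}};
    const auto it = table.find(s);
    if (it == table.end()) {
        throw std::runtime_error("Unknown compression format: \"" + s + "\"");
    }
    return it->second;
}
```

Failing fast at schedule time (rather than per flow file in onTrigger) surfaces misconfiguration once, when the processor starts.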
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #917: MINIFICPP-1380 - Batch behavior for CompressContent and MergeContent processors
adamdebreceni commented on a change in pull request #917: URL: https://github.com/apache/nifi-minifi-cpp/pull/917#discussion_r503769523 ## File path: extensions/libarchive/CompressContent.cpp ## @@ -60,10 +60,29 @@ core::Property CompressContent::EncapsulateInTar( "If false, on compression the content of the FlowFile simply gets compressed, and on decompression a simple compressed content is expected.\n" "true is the behaviour compatible with older MiNiFi C++ versions, false is the behaviour compatible with NiFi.") ->isRequired(false)->withDefaultValue(true)->build()); +core::Property CompressContent::BatchSize( +core::PropertyBuilder::createProperty("Batch Size") +->withDescription("Maximum number of FlowFiles processed in a single session") +->withDefaultValue(1)->build()); core::Relationship CompressContent::Success("success", "FlowFiles will be transferred to the success relationship after successfully being compressed or decompressed"); core::Relationship CompressContent::Failure("failure", "FlowFiles will be transferred to the failure relationship if they fail to compress/decompress"); +std::map CompressContent::compressionFormatMimeTypeMap_{ Review comment: made them constant. The member variable formatting is all over the place: we have both snake_case and camelCase in BinFiles, CompressContent and MergeContent (and possibly others as well). I wouldn't touch their names for now; maybe a separate PR renaming them all at once?
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #917: MINIFICPP-1380 - Batch behavior for CompressContent and MergeContent processors
adamdebreceni commented on a change in pull request #917: URL: https://github.com/apache/nifi-minifi-cpp/pull/917#discussion_r503763908 ## File path: extensions/libarchive/CompressContent.cpp ## @@ -45,11 +45,11 @@ core::Property CompressContent::CompressFormat( core::PropertyBuilder::createProperty("Compression Format")->withDescription("The compression format to use.") ->isRequired(false) ->withAllowableValues({ - COMPRESSION_FORMAT_ATTRIBUTE, - COMPRESSION_FORMAT_GZIP, - COMPRESSION_FORMAT_BZIP2, - COMPRESSION_FORMAT_XZ_LZMA2, - COMPRESSION_FORMAT_LZMA})->withDefaultValue(COMPRESSION_FORMAT_ATTRIBUTE)->build()); + toString(ExtendedCompressionFormat::USE_MIME_TYPE), + toString(CompressionFormat::GZIP), + toString(CompressionFormat::BZIP2), + toString(CompressionFormat::XZ_LZMA2), + toString(CompressionFormat::LZMA)})->withDefaultValue(toString(ExtendedCompressionFormat::USE_MIME_TYPE))->build()); Review comment: done, added means to query all enum representations, i.e. `ExtendedCompressionFormat::values()`
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #917: MINIFICPP-1380 - Batch behavior for CompressContent and MergeContent processors
adamdebreceni commented on a change in pull request #917: URL: https://github.com/apache/nifi-minifi-cpp/pull/917#discussion_r503739298 ## File path: extensions/libarchive/BinFiles.cpp ## @@ -259,9 +279,13 @@ void BinFiles::onTrigger(const std::shared_ptr , c } } - auto flow = session->get(); + for (size_t i = 0; i < batchSize_; ++i) { +auto flow = session->get(); + +if (flow == nullptr) { + break; Review comment: the problem with yielding is that this processor (MergeContent) can function correctly even if there are no incoming flowFiles (it can still emit already processed merged files); this is not true for CompressContent, so I added a yield there
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #917: MINIFICPP-1380 - Batch behavior for CompressContent and MergeContent processors
adamdebreceni commented on a change in pull request #917: URL: https://github.com/apache/nifi-minifi-cpp/pull/917#discussion_r503712927 ## File path: extensions/libarchive/BinFiles.cpp ## @@ -38,12 +38,34 @@ namespace nifi { namespace minifi { namespace processors { -core::Property BinFiles::MinSize("Minimum Group Size", "The minimum size of for the bundle", "0"); -core::Property BinFiles::MaxSize("Maximum Group Size", "The maximum size for the bundle. If not specified, there is no maximum.", ""); -core::Property BinFiles::MinEntries("Minimum Number of Entries", "The minimum number of files to include in a bundle", "1"); -core::Property BinFiles::MaxEntries("Maximum Number of Entries", "The maximum number of files to include in a bundle. If not specified, there is no maximum.", ""); -core::Property BinFiles::MaxBinAge("Max Bin Age", "The maximum age of a Bin that will trigger a Bin to be complete. Expected format is ", ""); -core::Property BinFiles::MaxBinCount("Maximum number of Bins", "Specifies the maximum number of bins that can be held in memory at any one time", "100"); +core::Property BinFiles::MinSize( +core::PropertyBuilder::createProperty("Minimum Group Size") +->withDescription("The minimum size of for the bundle") +->withDefaultValue(0)->build()); +core::Property BinFiles::MaxSize( +core::PropertyBuilder::createProperty("Maximum Group Size") +->withDescription("The maximum size for the bundle. If not specified, there is no maximum.") + ->withType(core::StandardValidators::get().UNSIGNED_LONG_VALIDATOR)->build()); +core::Property BinFiles::MinEntries( +core::PropertyBuilder::createProperty("Minimum Number of Entries") +->withDescription("The minimum number of files to include in a bundle") +->withDefaultValue(1)->build()); +core::Property BinFiles::MaxEntries( +core::PropertyBuilder::createProperty("Maximum Number of Entries") +->withDescription("The maximum number of files to include in a bundle. 
If not specified, there is no maximum.") + ->withType(core::StandardValidators::get().UNSIGNED_INT_VALIDATOR)->build()); +core::Property BinFiles::MaxBinAge( +core::PropertyBuilder::createProperty("Max Bin Age") +->withDescription("The maximum age of a Bin that will trigger a Bin to be complete. Expected format is ") + ->withType(core::StandardValidators::get().TIME_PERIOD_VALIDATOR)->build()); +core::Property BinFiles::MaxBinCount( +core::PropertyBuilder::createProperty("Maximum number of Bins") +->withDescription("Specifies the maximum number of bins that can be held in memory at any one time") +->withDefaultValue(100)->build()); +core::Property BinFiles::BatchSize( +core::PropertyBuilder::createProperty("Batch Size") +->withDescription("Maximum number of FlowFiles processed in a single session") +->withDefaultValue(1)->build()); Review comment: I wanted to preserve the original behavior, as this is an optimization feature the user can configure on an as-needed basis. Should we change it, and if so, what do you think would be the appropriate default?
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #917: MINIFICPP-1380 - Batch behavior for CompressContent and MergeContent processors
adamdebreceni commented on a change in pull request #917: URL: https://github.com/apache/nifi-minifi-cpp/pull/917#discussion_r503699011 ## File path: libminifi/include/utils/Enum.h ## @@ -0,0 +1,190 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#pragma once + +#include +#include +#include + +namespace org { +namespace apache { +namespace nifi { +namespace minifi { +namespace utils { + +#define COMMA(...) , +#define MSVC_HACK(x) x + +#define PICK_(_1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, ...) _15 +#define COUNT(...) \ + MSVC_HACK(PICK_(__VA_ARGS__, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)) + +#define CONCAT_(a, b) a ## b +#define CONCAT(a, b) CONCAT_(a, b) + +#define CALL(Fn, ...) MSVC_HACK(Fn(__VA_ARGS__)) +#define SPREAD(...) __VA_ARGS__ + +#define FOR_EACH(fn, delim, ARGS) \ + CALL(CONCAT(FOR_EACH_, COUNT ARGS), fn, delim, SPREAD ARGS) + +#define FOR_EACH_0(...) 
+#define FOR_EACH_1(fn, delim, _1) fn(_1)
+#define FOR_EACH_2(fn, delim, _1, _2) fn(_1) delim() fn(_2)
+#define FOR_EACH_3(fn, delim, _1, _2, _3) fn(_1) delim() fn(_2) delim() fn(_3)
+#define FOR_EACH_4(fn, delim, _1, _2, _3, _4) \
+  fn(_1) delim() fn(_2) delim() fn(_3) delim() fn(_4)
+#define FOR_EACH_5(fn, delim, _1, _2, _3, _4, _5) \
+  fn(_1) delim() fn(_2) delim() fn(_3) delim() fn(_4) delim() fn(_5)
+#define FOR_EACH_6(fn, delim, _1, _2, _3, _4, _5, _6) \
+  fn(_1) delim() fn(_2) delim() fn(_3) delim() \
+  fn(_4) delim() fn(_5) delim() fn(_6)
+#define FOR_EACH_7(fn, delim, _1, _2, _3, _4, _5, _6, _7) \
+  fn(_1) delim() fn(_2) delim() fn(_3) delim() fn(_4) delim() \
+  fn(_5) delim() fn(_6) delim() fn(_7)
+#define FOR_EACH_8(fn, delim, _1, _2, _3, _4, _5, _6, _7, _8) \

Review comment: we could give something like this a try: `#define FOR_EACH_8(fn, delim, _1, ...) fn(_1) delim() FOR_EACH_7(fn, delim, __VA_ARGS__)`
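The delegating form suggested in the review comment can be checked with a minimal, self-contained sketch. The names `FE_*`, `STR`, and `COMMA_SEP` below are hypothetical stand-ins for illustration, not the names used in Enum.h:

```cpp
#include <string>

// Minimal sketch of the delegating FOR_EACH variant suggested above.
// FE_3 forwards to FE_2, which forwards to FE_1. This is legal because the
// preprocessor only blocks re-expansion of the macro currently being
// expanded, not calls to differently named macros.
#define FE_1(fn, delim, _1) fn(_1)
#define FE_2(fn, delim, _1, ...) fn(_1) delim() FE_1(fn, delim, __VA_ARGS__)
#define FE_3(fn, delim, _1, ...) fn(_1) delim() FE_2(fn, delim, __VA_ARGS__)

#define STR(x) #x
#define COMMA_SEP() ","

// Expands to "a" "," "b" "," "c"; adjacent string literals concatenate,
// so the function returns "a,b,c".
inline std::string fe_demo() { return FE_3(STR, COMMA_SEP, a, b, c); }
```

The variadic tail (`_1, ...`) is what lets each arity peel off one argument and hand the rest down, which would shrink the repetitive FOR_EACH_4 through FOR_EACH_14 definitions considerably.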
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #917: MINIFICPP-1380 - Batch behavior for CompressContent and MergeContent processors
adamdebreceni commented on a change in pull request #917:
URL: https://github.com/apache/nifi-minifi-cpp/pull/917#discussion_r503697223

## File path: libminifi/include/utils/Enum.h ##

@@ -0,0 +1,190 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include
+#include
+#include
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+#define COMMA(...) ,
+#define MSVC_HACK(x) x
+
+#define PICK_(_1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, ...) _15
+#define COUNT(...) \
+  MSVC_HACK(PICK_(__VA_ARGS__, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0))
+
+#define CONCAT_(a, b) a ## b
+#define CONCAT(a, b) CONCAT_(a, b)
+
+#define CALL(Fn, ...) MSVC_HACK(Fn(__VA_ARGS__))
+#define SPREAD(...) __VA_ARGS__
+
+#define FOR_EACH(fn, delim, ARGS) \
+  CALL(CONCAT(FOR_EACH_, COUNT ARGS), fn, delim, SPREAD ARGS)
+
+#define FOR_EACH_0(...)
+#define FOR_EACH_1(fn, delim, _1) fn(_1)
+#define FOR_EACH_2(fn, delim, _1, _2) fn(_1) delim() fn(_2)
+#define FOR_EACH_3(fn, delim, _1, _2, _3) fn(_1) delim() fn(_2) delim() fn(_3)
+#define FOR_EACH_4(fn, delim, _1, _2, _3, _4) \
+  fn(_1) delim() fn(_2) delim() fn(_3) delim() fn(_4)
+#define FOR_EACH_5(fn, delim, _1, _2, _3, _4, _5) \
+  fn(_1) delim() fn(_2) delim() fn(_3) delim() fn(_4) delim() fn(_5)
+#define FOR_EACH_6(fn, delim, _1, _2, _3, _4, _5, _6) \
+  fn(_1) delim() fn(_2) delim() fn(_3) delim() \
+  fn(_4) delim() fn(_5) delim() fn(_6)
+#define FOR_EACH_7(fn, delim, _1, _2, _3, _4, _5, _6, _7) \
+  fn(_1) delim() fn(_2) delim() fn(_3) delim() fn(_4) delim() \
+  fn(_5) delim() fn(_6) delim() fn(_7)
+#define FOR_EACH_8(fn, delim, _1, _2, _3, _4, _5, _6, _7, _8) \

Review comment: I wish, but recursive macro expansion is a no-go
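Why a truly *self*-recursive macro is a no-go can be seen with a tiny experiment. The names `STRINGIZE`, `STR2`, and `SELF` below are hypothetical, chosen only for this illustration:

```cpp
#include <string>

// While a macro is being expanded, the preprocessor marks its own name as
// non-expandable ("blue paint"), so a recursive occurrence of that name is
// left behind as plain tokens instead of looping or expanding further.
#define STR2(x) #x
#define STRINGIZE(x) STR2(x)  // extra level so the argument expands first
#define SELF(x) SELF(x)       // expands exactly once, then stops

// SELF(1) expands to the literal tokens `SELF(1)`; stringizing shows them.
inline std::string leftover() { return STRINGIZE(SELF(1)); }
```

This is why the delegating chain of distinctly named FOR_EACH_N macros is needed: each name is only blocked from expanding *itself*, so FOR_EACH_8 may freely invoke FOR_EACH_7, but FOR_EACH could never invoke FOR_EACH.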
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #917: MINIFICPP-1380 - Batch behavior for CompressContent and MergeContent processors
adamdebreceni commented on a change in pull request #917:
URL: https://github.com/apache/nifi-minifi-cpp/pull/917#discussion_r503696850

## File path: libminifi/test/Utils.h ##

@@ -29,4 +30,11 @@
 return std::forward(instance).method(std::forward(args)...); \
 }
-#endif  // LIBMINIFI_TEST_UTILS_H_
+std::string repeat(const std::string& str, std::size_t count) {

Review comment: oh nice, I was looking in the StringUtils for something like this, and even found join, but did not put the pieces together
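For reference, a self-contained sketch of a `repeat()` helper like the one under review. This is a plain-loop version under the assumption that only the signature above is fixed; the actual test utility builds on `StringUtils::join`, which is not reproduced here:

```cpp
#include <cstddef>
#include <string>

// Hypothetical stand-alone version of the repeat() test helper discussed
// above: concatenates `count` copies of `str`.
inline std::string repeat(const std::string& str, std::size_t count) {
  std::string result;
  result.reserve(str.size() * count);  // one allocation up front
  for (std::size_t i = 0; i < count; ++i) {
    result += str;
  }
  return result;
}
```

Delegating to an existing `join`-style utility, as the reviewer suggests, keeps test helpers from duplicating string-building logic that the library already maintains.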