[jira] [Updated] (NIFI-12837) Add DFS setting to smb processors
[ https://issues.apache.org/jira/browse/NIFI-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anders updated NIFI-12837:
--------------------------
    Description:

The hierynomus/smbj library has a setting for enabling DFS which is disabled by default:
https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39

This appears to cause problems in some SMB configurations. Patched
https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
to test in my environment with:

{code}
$ git diff nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
diff --git a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
index 0895abfae0..eac765 100644
--- a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
+++ b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
@@ -46,6 +46,8 @@ public final class SmbUtils {
             }
         }

+        configBuilder.withDfsEnabled(true);
+
         if (context.getProperty(USE_ENCRYPTION).isSet()) {
             configBuilder.withEncryptData(context.getProperty(USE_ENCRYPTION).asBoolean());
         }
{code}

This appeared to resolve the issue. It would be very useful if this setting was available to toggle in the UI for all SMB processors. Without this setting, I get a *STATUS_PATH_NOT_COVERED* error.
Somewhat related hierynomus/smbj GitHub issues:
https://github.com/hierynomus/smbj/issues/152
https://github.com/hierynomus/smbj/issues/419

This setting should be made available in the following processors and services:
* GetSmbFile
* PutSmbFile
* SmbjClientProviderService

Edit: It might require some more changes to handle the connections and sessions correctly.

> Add DFS setting to smb processors
> ---------------------------------
>
>                 Key: NIFI-12837
>                 URL: https://issues.apache.org/jira/browse/NIFI-12837
>             Project: Apache NiFi
>          Issue Type: Improvement
>    Affects Versions: 1.25.0
>            Reporter: Anders
>            Priority: Major
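[Editor's note] The STATUS_PATH_NOT_COVERED error mentioned above is the SMB server's way of telling the client that the requested path lives behind a DFS link, and that the client must resolve a referral to find the real file server. The sketch below is a conceptual illustration only (not smbj or NiFi code; the referral table and names are hypothetical) of why a client with DFS resolution disabled fails on such paths:

```python
# Hypothetical referral table: a DFS namespace maps a virtual UNC root to a
# real file-server target. Real clients fetch referrals from the namespace
# server; they are not hard-coded like this.
REFERRALS = {
    r"\\corp\dfs\reports": r"\\fileserver01\reports$",
}

def resolve(path: str, dfs_enabled: bool) -> str:
    """Return the real server path for a UNC path, resolving DFS links."""
    for root, target in REFERRALS.items():
        if path.lower().startswith(root.lower()):
            if not dfs_enabled:
                # Mirrors the failure described in the ticket: the namespace
                # server does not host the path itself.
                raise RuntimeError("STATUS_PATH_NOT_COVERED")
            return target + path[len(root):]
    return path  # not a DFS path; use as-is
```

With `dfs_enabled=True` the link is rewritten to the real target; with it disabled, any access under the DFS root fails, which matches the behavior the reporter saw before patching `SmbUtils`.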
Re: [PR] [NIFI-12778] manage remote ports [nifi]
scottyaslan commented on code in PR #8433:
URL: https://github.com/apache/nifi/pull/8433#discussion_r1506948215

## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-frontend/src/main/nifi/src/app/pages/flow-designer/state/manage-remote-ports/manage-remote-ports.reducer.ts: ##
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import { createReducer, on } from '@ngrx/store';
+import {
+    configureRemotePort,
+    configureRemotePortSuccess,
+    loadRemotePorts,
+    loadRemotePortsSuccess,
+    remotePortsBannerApiError,
+    resetRemotePortsState
+} from './manage-remote-ports.actions';
+import { produce } from 'immer';
+import { RemotePortsState } from './index';
+
+export const initialState: RemotePortsState = {
+    ports: [],
+    saving: false,
+    loadedTimestamp: '',
+    rpg: null,
+    status: 'pending'
+};
+
+export const manageRemotePortsReducer = createReducer(
+    initialState,
+    on(resetRemotePortsState, () => ({
+        ...initialState
+    })),
+    on(loadRemotePorts, (state) => ({
+        ...state,
+        status: 'loading' as const
+    })),
+    on(loadRemotePortsSuccess, (state, { response }) => ({
+        ...state,
+        ports: response.ports,
+        loadedTimestamp: response.rpg.component.flowRefreshed || '',

Review Comment:
   Now the `loadedTimestamp` in the store is calculated from the user's browser, the flow configuration `timeOffset`, and the about `timezone` values.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
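[Editor's note] The review comment above describes deriving the displayed "loaded" time from the browser clock combined with server-reported offset and timezone values. A rough sketch of that idea follows; the function and parameter names are hypothetical and this is not the NiFi UI implementation, just the arithmetic the comment describes:

```python
from datetime import datetime, timezone

def loaded_timestamp(browser_epoch_ms: int, time_offset_ms: int, tz_label: str) -> str:
    """Render a 'loaded at' label: shift the browser's epoch clock by the
    server-reported offset, then append the server's timezone label."""
    shifted = datetime.fromtimestamp((browser_epoch_ms + time_offset_ms) / 1000, tz=timezone.utc)
    return f"{shifted.strftime('%H:%M:%S')} {tz_label}"
```

The point of the change is that the timestamp no longer depends on a server-rendered string, so all rows agree even when the browser and server clocks disagree.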
Re: [PR] [NIFI-12778] manage remote ports [nifi]
scottyaslan commented on code in PR #8433:
URL: https://github.com/apache/nifi/pull/8433#discussion_r1506947059

## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-frontend/src/main/nifi/src/app/pages/flow-designer/ui/manage-remote-ports/edit-remote-port/edit-remote-port.component.ts: ##
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import { Component, Inject } from '@angular/core';
+import { MAT_DIALOG_DATA, MatDialogModule } from '@angular/material/dialog';
+import { FormBuilder, FormControl, FormGroup, ReactiveFormsModule, Validators } from '@angular/forms';
+import { Store } from '@ngrx/store';
+import { MatInputModule } from '@angular/material/input';
+import { MatCheckboxModule } from '@angular/material/checkbox';
+import { MatButtonModule } from '@angular/material/button';
+import { AsyncPipe } from '@angular/common';
+import { ErrorBanner } from '../../../../../ui/common/error-banner/error-banner.component';
+import { NifiSpinnerDirective } from '../../../../../ui/common/spinner/nifi-spinner.directive';
+import { selectSaving } from '../../../state/manage-remote-ports/manage-remote-ports.selectors';
+import { EditComponentDialogRequest } from '../../../state/flow';
+import { Client } from '../../../../../service/client.service';
+import { ComponentType } from '../../../../../state/shared';
+import { PortSummary } from '../../../state/manage-remote-ports';
+import { configureRemotePort } from '../../../state/manage-remote-ports/manage-remote-ports.actions';
+
+@Component({
+    standalone: true,
+    templateUrl: './edit-remote-port.component.html',
+    imports: [
+        ReactiveFormsModule,
+        ErrorBanner,
+        MatDialogModule,
+        MatInputModule,
+        MatCheckboxModule,
+        MatButtonModule,
+        AsyncPipe,
+        NifiSpinnerDirective
+    ],
+    styleUrls: ['./edit-remote-port.component.scss']
+})
+export class EditRemotePortComponent {
+    saving$ = this.store.select(selectSaving);
+
+    editPortForm: FormGroup;
+    portTypeLabel: string;
+
+    constructor(
+        @Inject(MAT_DIALOG_DATA) public request: EditComponentDialogRequest,
+        private formBuilder: FormBuilder,
+        private store: Store,
+        private client: Client
+    ) {
+        // set the port type name
+        if (ComponentType.InputPort == this.request.type) {
+            this.portTypeLabel = 'Input Port';
+        } else {
+            this.portTypeLabel = 'Output Port';
+        }
+
+        // build the form
+        this.editPortForm = this.formBuilder.group({
+            concurrentTasks: new FormControl(request.entity.concurrentlySchedulableTaskCount, Validators.required),
+            compressed: new FormControl(request.entity.useCompression || false),
+            count: new FormControl(request.entity.batchSettings.count || ''),
+            size: new FormControl(request.entity.batchSettings.size || ''),
+            duration: new FormControl(request.entity.batchSettings.duration || '')

Review Comment:
   I think I addressed this along with the way these values are displayed in the main table listing. Please have another look.
Re: [PR] [NIFI-12778] manage remote ports [nifi]
scottyaslan commented on code in PR #8433:
URL: https://github.com/apache/nifi/pull/8433#discussion_r1506945702

## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-frontend/src/main/nifi/src/app/pages/flow-designer/ui/manage-remote-ports/_manage-remote-ports.component-theme.scss: ##
@@ -18,17 +18,28 @@
 @use 'sass:map';
 @use '@angular/material' as mat;

-@mixin nifi-theme($theme) {
+@mixin nifi-theme($theme, $canvas-theme) {
     // Get the color config from the theme.
     $color-config: mat.get-color-config($theme);
+    $canvas-color-config: mat.get-color-config($canvas-theme);

     // Get the color palette from the color-config.
     $primary-palette: map.get($color-config, 'primary');
+    $canvas-accent-palette: map.get($canvas-color-config, 'accent');

     // Get hues from palette
     $primary-palette-500: mat.get-color-from-palette($primary-palette, 500);
+    $canvas-accent-palette-A200: mat.get-color-from-palette($canvas-accent-palette, 'A200');

     .manage-remote-ports-header {
         color: $primary-palette-500;
     }
+
+    .manage-remote-ports-table {
+        .listing-table {
+            .fa.fa-warning {
+                color: $canvas-accent-palette-A200;

Review Comment:
   @mcgilman I wasn't sure what color (if any) this icon should be. Red seems too strong; it is used throughout NiFi to represent an error or invalid state, and the application does not seem to pair the `fa-warning` icon with red. These `fa-warning` icons typically have a yellow color. I am also fine with leaving the icon uncolored and letting it use the default `.listing-table .icon` color.
[jira] [Updated] (NIFI-12851) ConsumeKafka - remove hard coded limit to number of subscribed topics
[ https://issues.apache.org/jira/browse/NIFI-12851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Grey updated NIFI-12851:
-----------------------------
    Status: Patch Available  (was: In Progress)

> ConsumeKafka - remove hard coded limit to number of subscribed topics
> ---------------------------------------------------------------------
>
>                 Key: NIFI-12851
>                 URL: https://issues.apache.org/jira/browse/NIFI-12851
>             Project: Apache NiFi
>          Issue Type: Improvement
>            Reporter: Paul Grey
>            Assignee: Paul Grey
>            Priority: Minor
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> This code in ConsumeKafka_2_6, and corresponding code in ConsumeKafkaRecord_2_6,
> causes NiFi to limit the number of topics subscribed to by the processor to 100.
> - https://github.com/apache/nifi/blob/ecea18f79655c0e34949d94609c8909aeb2d093e/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-2-6-processors/src/main/java/org/apache/nifi/processors/kafka/pubsub/ConsumeKafka_2_6.java#L400
> If configuration specifies a number of topics greater than 100, those after
> the first hundred are ignored, with no indication.
> There is no limit to the size of the list which may be supplied to the Kafka
> Consumer API.
> - https://javadoc.io/static/org.apache.kafka/kafka-clients/2.8.2/org/apache/kafka/clients/consumer/Consumer.html#subscribe-java.util.Collection-
> Consider removal of this limit.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
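[Editor's note] The silent-truncation failure mode described in the ticket is easy to reproduce in miniature. A hedged sketch (not the NiFi code; the cap of 100 merely mirrors the hard-coded limit the ticket points at):

```python
from typing import Optional

def parse_topics(topic_property: str, cap: Optional[int] = None) -> list:
    """Split a comma-separated topic list; an optional cap silently drops the
    rest, which is exactly the surprise the ticket describes."""
    topics = [t.strip() for t in topic_property.split(",") if t.strip()]
    return topics if cap is None else topics[:cap]

many = ",".join(f"topic-{i}" for i in range(150))
assert len(parse_topics(many, cap=100)) == 100  # 50 topics dropped, no warning
assert len(parse_topics(many)) == 150           # uncapped: all topics kept
```

Since `Consumer.subscribe(Collection)` in kafka-clients imposes no size limit of its own, removing the processor-side cap (or at least warning when it is hit) avoids the silent data loss.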
[PR] NIFI-12851 - ConsumeKafka, remove limitation on count of subscribed t… [nifi]
greyp9 opened a new pull request, #8460:
URL: https://github.com/apache/nifi/pull/8460

   …opics

   # Tracking

   Please complete the following tracking steps prior to pull request creation.

   ### Issue Tracking

   - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

   ### Pull Request Tracking

   - [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
   - [x] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0`

   ### Pull Request Formatting

   - [x] Pull Request based on current revision of the `main` branch
   - [x] Pull Request refers to a feature branch with one commit containing changes

   # Verification

   Please indicate the verification steps performed prior to pull request creation.

   ### Build

   - [x] Build completed using `mvn clean install -P contrib-check`
   - [x] JDK 21

   ### Licensing

   - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

   ### Documentation

   - [ ] Documentation formatting appears as expected in rendered files
Re: [PR] [NIFI-12537] Open cluster/node dialog from Summary screen. [nifi]
mcgilman commented on code in PR #8454:
URL: https://github.com/apache/nifi/pull/8454#discussion_r1506733607

## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-frontend/src/main/nifi/src/app/pages/summary/state/component-cluster-status/component-cluster-status.effects.ts: ##
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import { Injectable } from '@angular/core';
+import { Actions, concatLatestFrom, createEffect, ofType } from '@ngrx/effects';
+import { NiFiState } from '../../../../state';
+import { Store } from '@ngrx/store';
+import { ErrorHelper } from '../../../../service/error-helper.service';
+import * as ClusterStatusActions from './component-cluster-status.actions';
+import { catchError, from, map, of, switchMap, tap } from 'rxjs';
+import { ComponentClusterStatusService } from '../../service/component-cluster-status.service';
+import { MatDialog } from '@angular/material/dialog';
+import { ClusterSummaryDialog } from '../../ui/common/cluster-summary-dialog/cluster-summary-dialog.component';
+import { selectComponentClusterStatusLatestRequest } from './component-cluster-status.selectors';
+import { isDefinedAndNotNull } from '../../../../state/shared';
+
+@Injectable()
+export class ComponentClusterStatusEffects {
+    constructor(
+        private actions$: Actions,
+        private store: Store,
+        private errorHelper: ErrorHelper,
+        private clusterStatusService: ComponentClusterStatusService,
+        private dialog: MatDialog
+    ) {}
+
+    loadComponentClusterStatusAndOpenDialog$ = createEffect(() =>
+        this.actions$.pipe(
+            ofType(ClusterStatusActions.loadComponentClusterStatusAndOpenDialog),
+            map((action) => action.request),
+            switchMap((request) =>
+                from(this.clusterStatusService.getClusterStatus(request.id, request.componentType)).pipe(
+                    map((response) => {
+                        return ClusterStatusActions.openComponentClusterStatusDialog({
+                            response: {
+                                clusterStatusEntity: response,
+                                componentType: request.componentType
+                            }
+                        });
+                    }),
+                    catchError((error) => of(this.errorHelper.handleLoadingError(error.error, error)))
+                )
+            )
+        )
+    );
+
+    loadComponentClusterStatus$ = createEffect(() =>
+        this.actions$.pipe(
+            ofType(ClusterStatusActions.loadComponentClusterStatus),
+            map((action) => action.request),
+            switchMap((request) =>
+                from(this.clusterStatusService.getClusterStatus(request.id, request.componentType)).pipe(
+                    map((response) => {
+                        return ClusterStatusActions.loadComponentClusterStatusSuccess({
+                            response: {
+                                clusterStatusEntity: response,
+                                componentType: request.componentType
+                            }
+                        });
+                    }),
+                    catchError((error) => of(this.errorHelper.handleLoadingError(error.error, error)))

Review Comment:
   Same as above.

## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-frontend/src/main/nifi/src/app/pages/summary/ui/common/component-status-table/component-status-table.component.ts: ##
@@ -0,0 +1,239 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT
[PR] NIFI-12785 Refactored the code to avoidNIFI double encoding. [nifi]
dan-s1 opened a new pull request, #8459:
URL: https://github.com/apache/nifi/pull/8459

   This is a backport of this fix done in #8458

   # Summary

   [NIFI-12785](https://issues.apache.org/jira/browse/NIFI-12785)

   # Tracking

   Please complete the following tracking steps prior to pull request creation.

   ### Issue Tracking

   - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

   ### Pull Request Tracking

   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0`

   ### Pull Request Formatting

   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing changes

   # Verification

   Please indicate the verification steps performed prior to pull request creation.

   ### Build

   - [ ] Build completed using `mvn clean install -P contrib-check`
   - [ ] JDK 21

   ### Licensing

   - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

   ### Documentation

   - [ ] Documentation formatting appears as expected in rendered files
[jira] [Commented] (NIFI-12785) InvokeHTTP handler should not urlencode HTTP URL
[ https://issues.apache.org/jira/browse/NIFI-12785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821888#comment-17821888 ]

ASF subversion and git services commented on NIFI-12785:
--------------------------------------------------------

Commit 9cbd06d32b6d88f3f66c3d1b14846ccdb91c5a29 in nifi's branch refs/heads/NIFI-12785 from dystieg
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=9cbd06d32b ]

NIFI-12785 Refactored the code to avoidNIFI double encoding.

> InvokeHTTP handler should not urlencode HTTP URL
> ------------------------------------------------
>
>                 Key: NIFI-12785
>                 URL: https://issues.apache.org/jira/browse/NIFI-12785
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 1.25.0, 2.0.0-M2
>         Environment: AlmaLinux 8.9 Kernel 4.18.0-513.5.1.el8_9.x86_64
>                      Apache NiFi 2.0.0-M2
>            Reporter: macdoor615
>            Assignee: Daniel Stieglitz
>            Priority: Major
>         Attachments: M1-output.png, M2-output.png
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> InvokeHTTP processor calls HTTP URL
> http://hb3-ifz-gitlab-000:8100/gitlab/api/v4/projects/318/repository/files/ftp%2Fstage%2F15m%2Fheshangwuzhibo.yaml/raw?ref=main
> and outputs attributes
> invokehttp.request.url: http://hb3-ifz-gitlab-000:8100/gitlab/api/v4/projects/318/repository/files/ftp%252Fstage%252F15m%252Fheshangwuzhibo.yaml/raw?ref=main
> invokehttp.status.code: 404
>
> The situation is different for version 2.0.0-M1, which outputs
> invokehttp.request.url: http://hb3-ifz-gitlab-000:8100/gitlab/api/v4/projects/318/repository/files/ftp%2Fstage%2F15m%2Fheshangwuzhibo.yaml/raw?ref=main
> invokehttp.status.code: 200
>
> I found that in the M2 version the % symbol was urlencoded to %25; the M1
> version does not urlencode.
>
> Please refer to the uploaded pictures.
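[Editor's note] The regression in the quoted report is classic double encoding: a path segment that already contains percent-escapes is encoded a second time, so `%2F` becomes `%252F` and the server no longer sees the intended path. A minimal, self-contained demonstration using Python's standard library (not NiFi's InvokeHTTP code):

```python
from urllib.parse import quote, unquote

# The caller already encoded the embedded slashes in the file-path segment.
segment = "ftp%2Fstage%2F15m%2Fheshangwuzhibo.yaml"

# Encoding it again escapes the '%' itself -- the M2 behavior from the report.
double_encoded = quote(segment, safe="")
print(double_encoded)  # ftp%252Fstage%252F15m%252Fheshangwuzhibo.yaml

# One decode of the double-encoded form yields the caller's escaped segment,
# not the raw path, which is why the GitLab API returned 404.
assert unquote(double_encoded) == segment
```

The fix is to treat an already-encoded URL as opaque rather than re-encoding it on the way out.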
Re: [PR] [NIFI-12537] Open cluster/node dialog from Summary screen. [nifi]
mcgilman commented on PR #8454:
URL: https://github.com/apache/nifi/pull/8454#issuecomment-1969939571

   Will review...
[jira] [Updated] (NIFI-12848) Status History - clustered node support
[ https://issues.apache.org/jira/browse/NIFI-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Gilman updated NIFI-12848:
-------------------------------
    Fix Version/s: 2.0.0
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

> Status History - clustered node support
> ---------------------------------------
>
>                 Key: NIFI-12848
>                 URL: https://issues.apache.org/jira/browse/NIFI-12848
>             Project: Apache NiFi
>          Issue Type: Sub-task
>            Reporter: Rob Fellows
>            Assignee: Rob Fellows
>            Priority: Major
>             Fix For: 2.0.0
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
[jira] [Commented] (NIFI-12848) Status History - clustered node support
[ https://issues.apache.org/jira/browse/NIFI-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821867#comment-17821867 ]

ASF subversion and git services commented on NIFI-12848:
--------------------------------------------------------

Commit 1cb0a537118a413622e3ce7b2485820c4910a04c in nifi's branch refs/heads/main from Rob Fellows
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1cb0a53711 ]

[NIFI-12848] - fixed ExpressionChanged error in Status History dialog (#8455)

* color the legend text to match the color of the corresponding line in the chart for each node

This closes #8455

> Status History - clustered node support
> ---------------------------------------
>
>                 Key: NIFI-12848
>                 URL: https://issues.apache.org/jira/browse/NIFI-12848
>             Project: Apache NiFi
>          Issue Type: Sub-task
>            Reporter: Rob Fellows
>            Assignee: Rob Fellows
>            Priority: Major
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
Re: [PR] [NIFI-12848] - Status History - clustered node support [nifi]
mcgilman merged PR #8455:
URL: https://github.com/apache/nifi/pull/8455
[jira] [Updated] (NIFI-12785) InvokeHTTP handler should not urlencode HTTP URL
[ https://issues.apache.org/jira/browse/NIFI-12785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Stieglitz updated NIFI-12785:
------------------------------------
    Status: Patch Available  (was: In Progress)

> InvokeHTTP handler should not urlencode HTTP URL
> ------------------------------------------------
>
>                 Key: NIFI-12785
>                 URL: https://issues.apache.org/jira/browse/NIFI-12785
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 2.0.0-M2, 1.25.0
>         Environment: AlmaLinux 8.9 Kernel 4.18.0-513.5.1.el8_9.x86_64
>                      Apache NiFi 2.0.0-M2
>            Reporter: macdoor615
>            Assignee: Daniel Stieglitz
>            Priority: Major
>         Attachments: M1-output.png, M2-output.png
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
Re: [PR] NIFI-8134 allow unescapeJson Record Path function to recursively convert Maps to Records [nifi]
ChrisSamo632 commented on PR #7745:
URL: https://github.com/apache/nifi/pull/7745#issuecomment-1969913218

   @markap14 thanks for the review, I've addressed your comments (and reverted the IDE auto-formatting issues!)
Re: [PR] [NIFI-12848] - Status History - clustered node support [nifi]
mcgilman commented on PR #8455:
URL: https://github.com/apache/nifi/pull/8455#issuecomment-1969859810

   Reviewing...
Re: [PR] NIFI-12850 - Prevent indexing of overly large filename attribute [nifi]
mattyb149 commented on PR #8457:
URL: https://github.com/apache/nifi/pull/8457#issuecomment-1969847850

   Does this have any ramifications for existing indexes?
[jira] [Updated] (NIFI-12773) Add 'join' and 'anchored' RecordPath functions
[ https://issues.apache.org/jira/browse/NIFI-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Sampson updated NIFI-12773:
---------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

> Add 'join' and 'anchored' RecordPath functions
> ----------------------------------------------
>
>                 Key: NIFI-12773
>                 URL: https://issues.apache.org/jira/browse/NIFI-12773
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Core Framework
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>            Priority: Major
>             Fix For: 2.0.0
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> I've come across two functions that would make flow design much simpler in
> RecordPath.
> The first one, 'join', would be similar to the 'concat' method but provides a
> delimiter between each element instead of just smashing the values together.
> The other provides the ability to anchor the context node while evaluating a
> RecordPath. For example, given the following record:
> {code:java}
> {
>   "id": "1234",
>   "elements": [{
>     "name": "book",
>     "color": "red"
>   }, {
>     "name": "computer",
>     "color": "black"
>   }]
> } {code}
> We should be able to use:
> {code:java}
> anchored(/elements, concat(/name, ': ', /color)) {code}
> in order to obtain an array of 2 elements:
> {code:java}
> book: red {code}
> and
> {code:java}
> computer: black {code}
[jira] [Commented] (NIFI-12773) Add 'join' and 'anchored' RecordPath functions
[ https://issues.apache.org/jira/browse/NIFI-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821857#comment-17821857 ]

ASF subversion and git services commented on NIFI-12773:
--------------------------------------------------------

Commit 74bd798097e15d54b871ac3ef7654a0d3433f99a in nifi's branch refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=74bd798097 ]

NIFI-12773: Added join and anchored RecordPath function

Signed-off-by: Chris Sampson

This closes #8391

> Add 'join' and 'anchored' RecordPath functions
> ----------------------------------------------
>
>                 Key: NIFI-12773
>                 URL: https://issues.apache.org/jira/browse/NIFI-12773
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Core Framework
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>            Priority: Major
>             Fix For: 2.0.0
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
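[Editor's note] For readers unfamiliar with the proposed functions, their behavior on the record in the ticket can be mimicked in plain Python. This is a conceptual sketch of the semantics only, not NiFi's RecordPath implementation:

```python
record = {
    "id": "1234",
    "elements": [
        {"name": "book", "color": "red"},
        {"name": "computer", "color": "black"},
    ],
}

# anchored(/elements, concat(/name, ': ', /color)): evaluate the inner path
# once per element of /elements, with each element as the context node.
anchored = [f"{e['name']}: {e['color']}" for e in record["elements"]]
print(anchored)  # ['book: red', 'computer: black']

# join(', ', ...) collapses values with a delimiter, unlike concat, which
# would smash them together with nothing in between.
joined = ", ".join(anchored)
print(joined)  # book: red, computer: black
```

The value of `anchored` is that the inner path is written relative to each array element, avoiding repeated absolute paths into the array.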
Re: [PR] NIFI-12773: Added join and anchored RecordPath function [nifi]
asfgit closed pull request #8391: NIFI-12773: Added join and anchored RecordPath function URL: https://github.com/apache/nifi/pull/8391 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] NIFI-12785 Refactored the code to avoid double encoding. [nifi]
dan-s1 opened a new pull request, #8458: URL: https://github.com/apache/nifi/pull/8458 # Summary [NIFI-12785](https://issues.apache.org/jira/browse/NIFI-12785) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [ ] Pull Request based on current revision of the `main` branch - [ ] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 21 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-12851) ConsumeKafka - remove hard coded limit to number of subscribed topics
Paul Grey created NIFI-12851: Summary: ConsumeKafka - remove hard coded limit to number of subscribed topics Key: NIFI-12851 URL: https://issues.apache.org/jira/browse/NIFI-12851 Project: Apache NiFi Issue Type: Improvement Reporter: Paul Grey Assignee: Paul Grey This code in ConsumeKafka_2_6, and corresponding code in ConsumeKafkaRecord_2_6, cause NiFi to limit the number of topics subscribed to by the processor to 100. - https://github.com/apache/nifi/blob/ecea18f79655c0e34949d94609c8909aeb2d093e/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-2-6-processors/src/main/java/org/apache/nifi/processors/kafka/pubsub/ConsumeKafka_2_6.java#L400 If configuration specifies a number of topics greater than 100, those after the first hundred are ignored, with no indication. There is no limit to the size of the list which may be supplied to the Kafka Consumer API. - https://javadoc.io/static/org.apache.kafka/kafka-clients/2.8.2/org/apache/kafka/clients/consumer/Consumer.html#subscribe-java.util.Collection- Consider removal of this limit.
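The fix the issue proposes amounts to parsing the configured topic list with no artificial cap, since the Kafka Consumer API accepts a collection of any size. A minimal sketch (plain Python, not the NiFi processor code; the property format — a comma-separated topic list — matches the processor's "Topic Name(s)" convention):

```python
# Sketch: parse a comma-separated topic property into a subscription
# list with no hard-coded 100-topic cap.

def parse_topics(property_value: str) -> list[str]:
    # Split on commas, trim whitespace, drop empty entries; crucially,
    # do NOT truncate the result to a fixed maximum.
    return [t.strip() for t in property_value.split(",") if t.strip()]

topics = parse_topics(",".join(f"topic-{i}" for i in range(150)))
print(len(topics))  # all 150 topics survive, not just the first 100
```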
[jira] [Updated] (NIFI-12850) Failure to index Provenance Events with large filename attribute
[ https://issues.apache.org/jira/browse/NIFI-12850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-12850: -- Status: Patch Available (was: Open) > Failure to index Provenance Events with large filename attribute > > > Key: NIFI-12850 > URL: https://issues.apache.org/jira/browse/NIFI-12850 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 2.0.0-M2, 1.25.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > {code:java} > ERROR org.apache.nifi.provenance.index.lucene.EventIndexTask: Failed to index > Provenance Events java.lang.IllegalArgumentException: Document contains at > least one immense term in field="filename" (whose UTF8 encoding is longer > than the max length 32766), all of which were skipped. Please correct the > analyzer to not produce such terms. The prefix of the first immense term is: > '[49, 50, 55, 48, 54, 50, 51, 55, 51, 57, 51, 52, 53, 50, 56, 51, 53, 46, 48, > 46, 97, 118, 114, 111, 46, 48, 46, 97, 118, 114]...', original message: bytes > can be at most 32766 in length; got 74483 at > org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:984) > at > org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:527) > at > org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:491) > at > org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:208) > at > org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:415) > at > org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471) at > org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1444) at > org.apache.nifi.provenance.lucene.LuceneEventIndexWriter.index(LuceneEventIndexWriter.java:70) > at > 
org.apache.nifi.provenance.index.lucene.EventIndexTask.index(EventIndexTask.java:202) > at > org.apache.nifi.provenance.index.lucene.EventIndexTask.run(EventIndexTask.java:113) > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) Caused by: > org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes > can be at most 32766 in length; got 74483 at > org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:281) at > org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:182) at > org.apache.lucene.index.DefaultIndexingChain$PerField. {code} > Looking at the code, it looks like filename is the only attribute that could > be set with arbitrary values that is not protected against overly large > values right now. -- This message was sent by Atlassian Jira (v8.20.10#820010)
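Lucene rejects indexed terms whose UTF-8 encoding exceeds 32766 bytes, which is exactly what the stack trace above reports for the 74483-byte filename. A sketch of the kind of guard the fix implies — clamping an attribute value before it reaches the index, cutting on a character boundary so the result stays valid UTF-8 (illustrative Python, not the actual NiFi patch):

```python
# Clamp an attribute value to Lucene's maximum indexed-term length
# (32766 UTF-8 bytes) before indexing, as the issue describes for the
# 'filename' attribute.

MAX_TERM_BYTES = 32766

def clamp_for_index(value: str, max_bytes: int = MAX_TERM_BYTES) -> str:
    encoded = value.encode("utf-8")
    if len(encoded) <= max_bytes:
        return value
    # errors="ignore" drops any partial multi-byte sequence at the cut
    # point, so the truncated value decodes cleanly.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")

filename = "x" * 74483  # the oversized value from the stack trace above
clamped = clamp_for_index(filename)
print(len(clamped.encode("utf-8")))  # 32766
```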
Re: [PR] [NIFI-12778] manage remote ports [nifi]
mcgilman commented on code in PR #8433: URL: https://github.com/apache/nifi/pull/8433#discussion_r1506054691 ## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-frontend/src/main/nifi/src/app/pages/flow-designer/service/canvas-context-menu.service.ts: ## @@ -775,8 +776,16 @@ export class CanvasContextMenu implements ContextMenuDefinitionProvider { }, clazz: 'fa fa-cloud', text: 'Manage remote ports', -action: () => { -// TODO - remotePorts +action: (selection: any) => { Review Comment: The condition for this action needs to ensure the user `canRead`. ## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-frontend/src/main/nifi/src/app/pages/flow-designer/ui/manage-remote-ports/manage-remote-ports.component.html: ## @@ -0,0 +1,234 @@ [new Angular template for the Manage Remote Ports view — header showing the RPG Name and target Urls, and a ports table with Name, Type, Concurrent Tasks, Compressed, and Batch Count columns; the markup was stripped in this archive]
[PR] NIFI-12850 - Prevent indexing of overly large filename attribute [nifi]
pvillard31 opened a new pull request, #8457: URL: https://github.com/apache/nifi/pull/8457 # Summary [NIFI-12850](https://issues.apache.org/jira/browse/NIFI-12850) - Failure to index Provenance Events with large filename attribute Easy way to test: GFF -> funnel and have a dynamic property in GFF that sets 'filename' with an excessively large value (for example 60k characters). # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [ ] Pull Request based on current revision of the `main` branch - [ ] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 21 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-12850) Failure to index Provenance Events with large filename attribute
[ https://issues.apache.org/jira/browse/NIFI-12850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-12850: -- Affects Version/s: 2.0.0-M2 1.25.0 > Failure to index Provenance Events with large filename attribute > > > Key: NIFI-12850 > URL: https://issues.apache.org/jira/browse/NIFI-12850 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.25.0, 2.0.0-M2 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > > {code:java} > ERROR org.apache.nifi.provenance.index.lucene.EventIndexTask: Failed to index > Provenance Events java.lang.IllegalArgumentException: Document contains at > least one immense term in field="filename" (whose UTF8 encoding is longer > than the max length 32766), all of which were skipped. Please correct the > analyzer to not produce such terms. The prefix of the first immense term is: > '[49, 50, 55, 48, 54, 50, 51, 55, 51, 57, 51, 52, 53, 50, 56, 51, 53, 46, 48, > 46, 97, 118, 114, 111, 46, 48, 46, 97, 118, 114]...', original message: bytes > can be at most 32766 in length; got 74483 at > org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:984) > at > org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:527) > at > org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:491) > at > org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:208) > at > org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:415) > at > org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471) at > org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1444) at > org.apache.nifi.provenance.lucene.LuceneEventIndexWriter.index(LuceneEventIndexWriter.java:70) > at > org.apache.nifi.provenance.index.lucene.EventIndexTask.index(EventIndexTask.java:202) > at > 
org.apache.nifi.provenance.index.lucene.EventIndexTask.run(EventIndexTask.java:113) > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) Caused by: > org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes > can be at most 32766 in length; got 74483 at > org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:281) at > org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:182) at > org.apache.lucene.index.DefaultIndexingChain$PerField. {code} > Looking at the code, it looks like filename is the only attribute that could > be set with arbitrary values that is not protected against overly large > values right now. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-12850) Failure to index Provenance Events with large filename attribute
[ https://issues.apache.org/jira/browse/NIFI-12850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-12850: -- Issue Type: Bug (was: Improvement) > Failure to index Provenance Events with large filename attribute > > > Key: NIFI-12850 > URL: https://issues.apache.org/jira/browse/NIFI-12850 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > > {code:java} > ERROR org.apache.nifi.provenance.index.lucene.EventIndexTask: Failed to index > Provenance Events java.lang.IllegalArgumentException: Document contains at > least one immense term in field="filename" (whose UTF8 encoding is longer > than the max length 32766), all of which were skipped. Please correct the > analyzer to not produce such terms. The prefix of the first immense term is: > '[49, 50, 55, 48, 54, 50, 51, 55, 51, 57, 51, 52, 53, 50, 56, 51, 53, 46, 48, > 46, 97, 118, 114, 111, 46, 48, 46, 97, 118, 114]...', original message: bytes > can be at most 32766 in length; got 74483 at > org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:984) > at > org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:527) > at > org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:491) > at > org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:208) > at > org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:415) > at > org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471) at > org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1444) at > org.apache.nifi.provenance.lucene.LuceneEventIndexWriter.index(LuceneEventIndexWriter.java:70) > at > org.apache.nifi.provenance.index.lucene.EventIndexTask.index(EventIndexTask.java:202) > at > 
org.apache.nifi.provenance.index.lucene.EventIndexTask.run(EventIndexTask.java:113) > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) Caused by: > org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes > can be at most 32766 in length; got 74483 at > org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:281) at > org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:182) at > org.apache.lucene.index.DefaultIndexingChain$PerField. {code} > Looking at the code, it looks like filename is the only attribute that could > be set with arbitrary values that is not protected against overly large > values right now. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (NIFI-12850) Failure to index Provenance Events with large attributes
Pierre Villard created NIFI-12850: - Summary: Failure to index Provenance Events with large attributes Key: NIFI-12850 URL: https://issues.apache.org/jira/browse/NIFI-12850 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Pierre Villard Assignee: Pierre Villard {code:java} ERROR org.apache.nifi.provenance.index.lucene.EventIndexTask: Failed to index Provenance Events java.lang.IllegalArgumentException: Document contains at least one immense term in field="filename" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[49, 50, 55, 48, 54, 50, 51, 55, 51, 57, 51, 52, 53, 50, 56, 51, 53, 46, 48, 46, 97, 118, 114, 111, 46, 48, 46, 97, 118, 114]...', original message: bytes can be at most 32766 in length; got 74483 at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:984) at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:527) at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:491) at org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:208) at org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:415) at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471) at org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1444) at org.apache.nifi.provenance.lucene.LuceneEventIndexWriter.index(LuceneEventIndexWriter.java:70) at org.apache.nifi.provenance.index.lucene.EventIndexTask.index(EventIndexTask.java:202) at org.apache.nifi.provenance.index.lucene.EventIndexTask.run(EventIndexTask.java:113) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 74483 at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:281) at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:182) at org.apache.lucene.index.DefaultIndexingChain$PerField. {code} Looking at the code, it looks like filename is the only attribute that could be set with arbitrary values that is not protected against overly large values right now. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-12850) Failure to index Provenance Events with large filename attribute
[ https://issues.apache.org/jira/browse/NIFI-12850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-12850: -- Summary: Failure to index Provenance Events with large filename attribute (was: Failure to index Provenance Events with large attributes) > Failure to index Provenance Events with large filename attribute > > > Key: NIFI-12850 > URL: https://issues.apache.org/jira/browse/NIFI-12850 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > > {code:java} > ERROR org.apache.nifi.provenance.index.lucene.EventIndexTask: Failed to index > Provenance Events java.lang.IllegalArgumentException: Document contains at > least one immense term in field="filename" (whose UTF8 encoding is longer > than the max length 32766), all of which were skipped. Please correct the > analyzer to not produce such terms. The prefix of the first immense term is: > '[49, 50, 55, 48, 54, 50, 51, 55, 51, 57, 51, 52, 53, 50, 56, 51, 53, 46, 48, > 46, 97, 118, 114, 111, 46, 48, 46, 97, 118, 114]...', original message: bytes > can be at most 32766 in length; got 74483 at > org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:984) > at > org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:527) > at > org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:491) > at > org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:208) > at > org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:415) > at > org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471) at > org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1444) at > org.apache.nifi.provenance.lucene.LuceneEventIndexWriter.index(LuceneEventIndexWriter.java:70) > at > 
org.apache.nifi.provenance.index.lucene.EventIndexTask.index(EventIndexTask.java:202) > at > org.apache.nifi.provenance.index.lucene.EventIndexTask.run(EventIndexTask.java:113) > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) Caused by: > org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes > can be at most 32766 in length; got 74483 at > org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:281) at > org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:182) at > org.apache.lucene.index.DefaultIndexingChain$PerField. {code} > Looking at the code, it looks like filename is the only attribute that could > be set with arbitrary values that is not protected against overly large > values right now. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-12498) The Prioritization description in the User Guide is different from the actual source code implementation.
[ https://issues.apache.org/jira/browse/NIFI-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-12498: -- Fix Version/s: 2.0.0 Resolution: Fixed Status: Resolved (was: Patch Available) > The Prioritization description in the User Guide is different from the actual > source code implementation. > - > > Key: NIFI-12498 > URL: https://issues.apache.org/jira/browse/NIFI-12498 > Project: Apache NiFi > Issue Type: Bug > Components: Documentation Website >Affects Versions: 1.25.0, 2.0.0-M2 >Reporter: Doin Cha >Assignee: endzeit >Priority: Minor > Fix For: 2.0.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > In the prioritization explanation of the User Guide, it is stated that > *OldestFlowFileFirstPrioritizer* is the _"default scheme that is used if no > prioritizers are selected."_ > _([https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#prioritization)|https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#prioritization]_ > > > However, in the actual source code implementation, {color:#ff}*there is > no automatic default setting when prioritizers are not selected.* {color} > In such cases, the sorting is done by comparing the *ContentClaim* *of > FlowFiles.* > _([https://github.com/apache/nifi/blob/9a5ec83baa1b3593031f0917659a69e7a36bb0be/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/QueuePrioritizer.java#L39-L90])_ > > > It looks like the user guide needs to be revised. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-12498) The Prioritization description in the User Guide is different from the actual source code implementation.
[ https://issues.apache.org/jira/browse/NIFI-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821795#comment-17821795 ] ASF subversion and git services commented on NIFI-12498: Commit 01ca19eccc9a711315797f30ece5aec67cda9a2e in nifi's branch refs/heads/main from Lucas [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=01ca19eccc ] NIFI-12498 Adjust docs on default behaviour of prioritizers (#8451) > The Prioritization description in the User Guide is different from the actual > source code implementation. > - > > Key: NIFI-12498 > URL: https://issues.apache.org/jira/browse/NIFI-12498 > Project: Apache NiFi > Issue Type: Bug > Components: Documentation Website >Affects Versions: 1.25.0, 2.0.0-M2 >Reporter: Doin Cha >Assignee: endzeit >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > In the prioritization explanation of the User Guide, it is stated that > *OldestFlowFileFirstPrioritizer* is the _"default scheme that is used if no > prioritizers are selected."_ > _([https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#prioritization)|https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#prioritization]_ > > > However, in the actual source code implementation, {color:#ff}*there is > no automatic default setting when prioritizers are not selected.* {color} > In such cases, the sorting is done by comparing the *ContentClaim* *of > FlowFiles.* > _([https://github.com/apache/nifi/blob/9a5ec83baa1b3593031f0917659a69e7a36bb0be/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/QueuePrioritizer.java#L39-L90])_ > > > It looks like the user guide needs to be revised. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] NIFI-12498 Adjust docs on default behaviour of prioritizers [nifi]
markap14 commented on PR #8451: URL: https://github.com/apache/nifi/pull/8451#issuecomment-1969536704 Thanks for fixing @EndzeitBegins !
Re: [PR] NIFI-12498 Adjust docs on default behaviour of prioritizers [nifi]
markap14 merged PR #8451: URL: https://github.com/apache/nifi/pull/8451
Re: [PR] NIFI-3785 Added feature to move a controller service to it's parent o… [nifi]
markap14 commented on code in PR #7734: URL: https://github.com/apache/nifi/pull/7734#discussion_r1506333554 ## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java: ## @@ -2939,6 +2939,115 @@ public ControllerServiceEntity updateControllerService(final Revision revision, return entityFactory.createControllerServiceEntity(snapshot.getComponent(), dtoFactory.createRevisionDTO(snapshot.getLastModification()), permissions, operatePermissions, bulletinEntities); } +@Override +public ControllerServiceEntity moveControllerService(final Revision revision, final ControllerServiceDTO controllerServiceDTO, final String newProcessGroupID) { +// get the component, ensure we have access to it, and perform the move request +final ControllerServiceNode controllerService = controllerServiceDAO.getControllerService(controllerServiceDTO.getId()); +final RevisionUpdate snapshot = updateComponent(revision, +controllerService, +() -> moveControllerServiceWork(controllerService, newProcessGroupID), +cs -> { +awaitValidationCompletion(cs); +final ControllerServiceDTO dto = dtoFactory.createControllerServiceDto(cs); +final ControllerServiceReference ref = controllerService.getReferences(); +final ControllerServiceReferencingComponentsEntity referencingComponentsEntity = createControllerServiceReferencingComponentsEntity(ref); + dto.setReferencingComponents(referencingComponentsEntity.getControllerServiceReferencingComponents()); +return dto; +}); + +final PermissionsDTO permissions = dtoFactory.createPermissionsDto(controllerService); +final PermissionsDTO operatePermissions = dtoFactory.createPermissionsDto(new OperationAuthorizable(controllerService)); +final List bulletins = dtoFactory.createBulletinDtos(bulletinRepository.findBulletinsForSource(controllerServiceDTO.getId())); +final List bulletinEntities = bulletins.stream().map(bulletin -> entityFactory.createBulletinEntity(bulletin, 
permissions.getCanRead())).collect(Collectors.toList()); +return entityFactory.createControllerServiceEntity(snapshot.getComponent(), dtoFactory.createRevisionDTO(snapshot.getLastModification()), permissions, operatePermissions, bulletinEntities); +} + +private ControllerServiceNode moveControllerServiceWork(final ControllerServiceNode controllerService, final String newProcessGroupID) { Review Comment: We should avoid names like doXYZ(), xyzWork(), xyz0(), etc. and instead name according to what the method does. It is fine to simply name it `moveControllerService` ## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java: ## @@ -2939,6 +2939,115 @@ public ControllerServiceEntity updateControllerService(final Revision revision, return entityFactory.createControllerServiceEntity(snapshot.getComponent(), dtoFactory.createRevisionDTO(snapshot.getLastModification()), permissions, operatePermissions, bulletinEntities); } +@Override +public ControllerServiceEntity moveControllerService(final Revision revision, final ControllerServiceDTO controllerServiceDTO, final String newProcessGroupID) { +// get the component, ensure we have access to it, and perform the move request +final ControllerServiceNode controllerService = controllerServiceDAO.getControllerService(controllerServiceDTO.getId()); +final RevisionUpdate snapshot = updateComponent(revision, +controllerService, +() -> moveControllerServiceWork(controllerService, newProcessGroupID), +cs -> { +awaitValidationCompletion(cs); Review Comment: We cannot wait for validation to complete here. This is called from a web thread, and must return ASAP. There should be no need to wait for validation here, though. 
## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java: ## @@ -2939,6 +2939,115 @@ public ControllerServiceEntity updateControllerService(final Revision revision, return entityFactory.createControllerServiceEntity(snapshot.getComponent(), dtoFactory.createRevisionDTO(snapshot.getLastModification()), permissions, operatePermissions, bulletinEntities); } +@Override +public ControllerServiceEntity moveControllerService(final Revision revision, final ControllerServiceDTO controllerServiceDTO, final String newProcessGroupID) { +// get the component, ensure we have access to it, and perform the move request +
Re: [PR] NIFI-12828 Add mapping for BIT SQL Type in DataTypeUtils [nifi]
Lehel44 commented on code in PR #8445: URL: https://github.com/apache/nifi/pull/8445#discussion_r1506356453 ## nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/util/DataTypeUtils.java: ## @@ -1918,6 +1918,7 @@ public static DataType getDataTypeFromSQLTypeValue(final int sqlType) { case Types.BIGINT: return RecordFieldType.BIGINT.getDataType(); case Types.BOOLEAN: +case Types.BIT: Review Comment: @ravinarayansingh I tried it and it looks good to me!
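The one-line diff above makes `java.sql.Types.BIT` fall through to the same record type as `Types.BOOLEAN`. The effect can be sketched outside Java (the integer type codes are the standard JDBC constants: BIT is -7, BOOLEAN is 16, BIGINT is -5; the string return values stand in for `RecordFieldType`):

```python
# Sketch of the mapping change in getDataTypeFromSQLTypeValue:
# BIT (-7) now maps to the boolean record type alongside BOOLEAN (16).

SQL_TYPES = {"BIT": -7, "BIGINT": -5, "BOOLEAN": 16}  # JDBC constants

def record_type_for(sql_type: int) -> str:
    if sql_type in (SQL_TYPES["BOOLEAN"], SQL_TYPES["BIT"]):
        return "BOOLEAN"  # both SQL types share one record type
    if sql_type == SQL_TYPES["BIGINT"]:
        return "BIGINT"
    raise ValueError(f"unmapped SQL type {sql_type}")

print(record_type_for(-7))  # BOOLEAN
```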
Re: [PR] NIFI-11443 Route Python Framework Logging to SLF4J [nifi]
markap14 commented on code in PR #8407: URL: https://github.com/apache/nifi/pull/8407#discussion_r1506303249 ## nifi-nar-bundles/nifi-py4j-bundle/nifi-py4j-bridge/src/main/java/org/apache/nifi/py4j/PythonProcessLogReader.java: ## @@ -0,0 +1,163 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.py4j; + +import org.apache.nifi.py4j.logging.PythonLogLevel; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.BufferedReader; +import java.io.IOException; +import java.util.Arrays; +import java.util.LinkedList; +import java.util.Map; +import java.util.Objects; +import java.util.Queue; +import java.util.stream.Collectors; + +/** + * Runnable Command for reading a line from Process Output Stream and writing to a Logger + */ +class PythonProcessLogReader implements Runnable { +private static final int LOG_LEVEL_BEGIN_INDEX = 0; + +private static final int LOG_LEVEL_END_INDEX = 2; + +private static final int MESSAGE_BEGIN_INDEX = 3; + +private static final char NAME_MESSAGE_SEPARATOR = ':'; + +private static final int MINIMUM_LOGGER_NAME_INDEX = 3; + +private static final String LOG_PREFIX = "PY4JLOG"; + +private static final int PREFIXED_LOG_LEVEL_BEGIN_INDEX = 8; + +private static final String LINE_SEPARATOR = System.lineSeparator(); + +private static final Map PYTHON_LOG_LEVELS = Arrays.stream(PythonLogLevel.values()).collect( +Collectors.toUnmodifiableMap( +pythonLogLevel -> Integer.toString(pythonLogLevel.getLevel()), +pythonLogLevel -> pythonLogLevel +) +); + +private final Logger processLogger = LoggerFactory.getLogger("org.apache.nifi.py4j.ProcessLog"); + +private final BufferedReader processReader; + +/** + * Standard constructor with Buffered Reader connected to Python Process Output Stream + * + * @param processReader Reader from Process Output Stream + */ +PythonProcessLogReader(final BufferedReader processReader) { +this.processReader = Objects.requireNonNull(processReader, "Reader required"); +} + +/** + * Read lines from Process Reader and write log messages based on parsed level and named logger + */ +@Override +public void run() { +final Queue parsedRecords = new LinkedList<>(); + +try { +String line = processReader.readLine(); +while (line != null) { +processLine(line, parsedRecords); + +if 
(parsedRecords.size() == 2 || !processReader.ready()) { +final ParsedRecord parsedRecord = parsedRecords.remove(); +log(parsedRecord); +} Review Comment: I think the logic here is a bit off. I created this Processor for testing: ``` from nifiapi.flowfiletransform import FlowFileTransform, FlowFileTransformResult import logging root_logger = logging.getLogger('') class LogContents(FlowFileTransform): class Java: implements = ['org.apache.nifi.python.processor.FlowFileTransform'] class ProcessorDetails: version = '0.0.1-SNAPSHOT' def __init__(self, **kwargs): pass def transform(self, context, flowFile): contents = flowFile.getContentsAsBytes().decode("utf-8") root_logger.info("Contents:\n1\n2\n3") root_logger.info(contents) return FlowFileTransformResult(relationship = "success") ``` I then sent in some FlowFiles. Each time, I get the log output of: ``` Contents 1 2 3 ``` but not the output of the actual logger. If I change the first log message instead to: ``` root_logger.info("Contents:") ``` I get just the line "Contents:" on the first iteration. The second time the processor is run, it outputs the contents of the first FlowFile, followed by "Contents" again. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org
Re: [PR] NIFI-11443 Route Python Framework Logging to SLF4J [nifi]
markap14 commented on code in PR #8407: URL: https://github.com/apache/nifi/pull/8407#discussion_r1506300182 ## nifi-nar-bundles/nifi-py4j-bundle/nifi-py4j-bridge/src/main/java/org/apache/nifi/py4j/PythonProcessLogReader.java: ## @@ -0,0 +1,163 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.py4j; + +import org.apache.nifi.py4j.logging.PythonLogLevel; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.BufferedReader; +import java.io.IOException; +import java.util.Arrays; +import java.util.LinkedList; +import java.util.Map; +import java.util.Objects; +import java.util.Queue; +import java.util.stream.Collectors; + +/** + * Runnable Command for reading a line from Process Output Stream and writing to a Logger + */ +class PythonProcessLogReader implements Runnable { +private static final int LOG_LEVEL_BEGIN_INDEX = 0; + +private static final int LOG_LEVEL_END_INDEX = 2; + +private static final int MESSAGE_BEGIN_INDEX = 3; + +private static final char NAME_MESSAGE_SEPARATOR = ':'; + +private static final int MINIMUM_LOGGER_NAME_INDEX = 3; + +private static final String LOG_PREFIX = "PY4JLOG"; + +private static final int PREFIXED_LOG_LEVEL_BEGIN_INDEX = 8; + +private static final String LINE_SEPARATOR = System.lineSeparator(); + +private static final Map PYTHON_LOG_LEVELS = Arrays.stream(PythonLogLevel.values()).collect( +Collectors.toUnmodifiableMap( +pythonLogLevel -> Integer.toString(pythonLogLevel.getLevel()), +pythonLogLevel -> pythonLogLevel +) +); + +private final Logger processLogger = LoggerFactory.getLogger("org.apache.nifi.py4j.ProcessLog"); + +private final BufferedReader processReader; + +/** + * Standard constructor with Buffered Reader connected to Python Process Output Stream + * + * @param processReader Reader from Process Output Stream + */ +PythonProcessLogReader(final BufferedReader processReader) { +this.processReader = Objects.requireNonNull(processReader, "Reader required"); +} + +/** + * Read lines from Process Reader and write log messages based on parsed level and named logger + */ +@Override +public void run() { +final Queue parsedRecords = new LinkedList<>(); + +try { +String line = processReader.readLine(); +while (line != null) { +processLine(line, parsedRecords); Review 
Comment: I think within this `while` loop we should have a try/catch (Exception e). Without it, if there's any Exception thrown from parsing, etc., the execution will escape the run() method without any sort of warning, and the logging will simply stop. That will eventually cause the Python process' STDOUT to fill up, and then the Python process will block. We need to ensure that this thread never dies unexpectedly. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
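The reviewer's suggestion can be sketched as a read loop that catches per-line failures instead of letting one bad line kill the thread. This is an illustrative Python sketch, not the Java `PythonProcessLogReader`; the `"NN:logger.name:message"` layout is an assumption inferred from the index constants in the diff (two-character level, `':'` separators):

```python
import io
import logging


def read_process_log(reader, logger=logging.getLogger("py4j.ProcessLog")):
    """Read log lines from a process output stream, never letting the
    loop die on a malformed line (illustrative sketch)."""
    # Numeric Python logging levels as two-digit prefixes.
    levels = {"10": logging.DEBUG, "20": logging.INFO, "30": logging.WARNING,
              "40": logging.ERROR, "50": logging.CRITICAL}
    handled = []
    for line in reader:
        try:
            line = line.rstrip("\n")
            level = levels.get(line[0:2], logging.INFO)  # chars 0..2 = level
            rest = line[3:]                              # skip "NN:"
            name, _, message = rest.partition(":")       # logger name, message
            logger.log(level, "%s %s", name, message)
            handled.append((level, name, message))
        except Exception:
            # Swallow and keep reading: if this thread dies, the Python
            # process' STDOUT eventually fills and the process blocks.
            logger.exception("Failed to handle process log line")
    return handled
```

The essential point is the try/except *inside* the loop, so a single unparseable line logs a warning and the reader keeps draining the stream.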
[jira] [Created] (NIFI-12849) Complementary and Accent colors should be separated from the material and warn palettes
Scott Aslan created NIFI-12849: -- Summary: Complementary and Accent colors should be separated from the material and warn palettes Key: NIFI-12849 URL: https://issues.apache.org/jira/browse/NIFI-12849 Project: Apache NiFi Issue Type: Sub-task Reporter: Scott Aslan Assignee: Scott Aslan -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (NIFI-12785) InvokeHTTP handler should not urlencode HTTP URL
[ https://issues.apache.org/jira/browse/NIFI-12785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Stieglitz reassigned NIFI-12785: --- Assignee: Daniel Stieglitz > InvokeHTTP handler should not urlencode HTTP URL > > > Key: NIFI-12785 > URL: https://issues.apache.org/jira/browse/NIFI-12785 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.25.0, 2.0.0-M2 > Environment: AlmaLinux 8.9 Kernel 4.18.0-513.5.1.el8_9.x86_64 > Apache NiFi 2.0.0-M2 >Reporter: macdoor615 >Assignee: Daniel Stieglitz >Priority: Major > Attachments: M1-output.png, M2-output.png > > > The InvokeHTTP processor calls the HTTP URL > [http://hb3-ifz-gitlab-000:8100/gitlab/api/v4/projects/318/repository/files/ftp%2Fstage%2F15m%2Fheshangwuzhibo.yaml/raw?ref=main] > output attribute > invokehttp.request.url: > [http://hb3-ifz-gitlab-000:8100/gitlab/api/v4/projects/318/repository/files/ftp%252Fstage%252F15m%252Fheshangwuzhibo.yaml/raw?ref=main] > > invokehttp.status.code: 404 > > The situation is different for version 2.0.0-M1, output attribute > invokehttp.request.url: > [http://hb3-ifz-gitlab-000:8100/gitlab/api/v4/projects/318/repository/files/ftp%2Fstage%2F15m%2Fheshangwuzhibo.yaml/raw?ref=main] > > invokehttp.status.code: 200 > > I found that in the M2 version the % symbol was urlencoded to %25. > The M1 version does not urlencode the URL. > > Please refer to the uploaded pictures. -- This message was sent by Atlassian Jira (v8.20.10#820010)
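The symptom reported here (`%2F` becoming `%252F`) is classic double encoding: a path segment that is already percent-encoded is encoded a second time, so the `%` characters themselves are escaped. A minimal illustration, using Python's standard library rather than NiFi/OkHttp code:

```python
from urllib.parse import quote

# Already percent-encoded path segment, as supplied to InvokeHTTP.
encoded_path = "ftp%2Fstage%2F15m%2Fheshangwuzhibo.yaml"

# Re-encoding an already-encoded value escapes the '%' itself,
# producing the %252F sequences seen in invokehttp.request.url on 2.0.0-M2.
double_encoded = quote(encoded_path)
print(double_encoded)  # → ftp%252Fstage%252F15m%252Fheshangwuzhibo.yaml
```

The fix for such bugs is generally to treat the configured URL as already encoded and pass it through verbatim, rather than re-encoding each segment.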
[jira] [Assigned] (NIFI-12842) InvokeHTTP version wrong encoding of % in URL
[ https://issues.apache.org/jira/browse/NIFI-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Stieglitz reassigned NIFI-12842: --- Assignee: (was: Daniel Stieglitz) > InvokeHTTP version wrong encoding of % in URL > - > > Key: NIFI-12842 > URL: https://issues.apache.org/jira/browse/NIFI-12842 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.25.0, 2.0.0-M2 > Environment: RHEL 7 >Reporter: WojciechWitos >Priority: Major > Attachments: image-2024-02-26-08-10-12-657.png, > image-2024-02-26-08-11-25-213.png, image-2024-02-26-08-13-14-199.png, > image-2024-02-26-08-16-36-309.png, image-2024-02-27-12-09-31-831.png, > image-2024-02-27-12-10-33-542.png, image-2024-02-27-12-10-59-337.png, > image-2024-02-27-12-11-39-163.png, image-2024-02-27-12-12-00-043.png, > image-2024-02-27-12-12-06-720.png, image-2024-02-27-12-13-10-173.png > > > Hi! > I've encountered on the version 1.25 issue with encoding of % in invokehttp > processor. > It changes every % into %25, which causes error 403 and most of the flows > don't work. > On the version 1.24 everything works properly with this processor. > Here are the screenshots of 1.25: > !image-2024-02-26-08-10-12-657.png! > And request: > !image-2024-02-26-08-11-25-213.png! > Where on version 1.24 this issue doesn't persist: > !image-2024-02-26-08-13-14-199.png! > !image-2024-02-26-08-16-36-309.png! > Please investigate this issue, it is the blocker of upgrading the environment > to this version. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-12842) InvokeHTTP version wrong encoding of % in URL
[ https://issues.apache.org/jira/browse/NIFI-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael W Moser updated NIFI-12842: --- Affects Version/s: 2.0.0-M2 > InvokeHTTP version wrong encoding of % in URL > - > > Key: NIFI-12842 > URL: https://issues.apache.org/jira/browse/NIFI-12842 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.25.0, 2.0.0-M2 > Environment: RHEL 7 >Reporter: WojciechWitos >Assignee: Daniel Stieglitz >Priority: Major > Attachments: image-2024-02-26-08-10-12-657.png, > image-2024-02-26-08-11-25-213.png, image-2024-02-26-08-13-14-199.png, > image-2024-02-26-08-16-36-309.png, image-2024-02-27-12-09-31-831.png, > image-2024-02-27-12-10-33-542.png, image-2024-02-27-12-10-59-337.png, > image-2024-02-27-12-11-39-163.png, image-2024-02-27-12-12-00-043.png, > image-2024-02-27-12-12-06-720.png, image-2024-02-27-12-13-10-173.png > > > Hi! > I've encountered on the version 1.25 issue with encoding of % in invokehttp > processor. > It changes every % into %25, which causes error 403 and most of the flows > don't work. > On the version 1.24 everything works properly with this processor. > Here are the screenshots of 1.25: > !image-2024-02-26-08-10-12-657.png! > And request: > !image-2024-02-26-08-11-25-213.png! > Where on version 1.24 this issue doesn't persist: > !image-2024-02-26-08-13-14-199.png! > !image-2024-02-26-08-16-36-309.png! > Please investigate this issue, it is the blocker of upgrading the environment > to this version. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-12785) InvokeHTTP handler should not urlencode HTTP URL
[ https://issues.apache.org/jira/browse/NIFI-12785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael W Moser updated NIFI-12785: --- Affects Version/s: 1.25.0 > InvokeHTTP handler should not urlencode HTTP URL > > > Key: NIFI-12785 > URL: https://issues.apache.org/jira/browse/NIFI-12785 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.25.0, 2.0.0-M2 > Environment: AlmaLinux 8.9 Kernel 4.18.0-513.5.1.el8_9.x86_64 > Apache NiFi 2.0.0-M2 >Reporter: macdoor615 >Priority: Major > Attachments: M1-output.png, M2-output.png > > > InvokeHTTP processor call HTTP URL > [http://hb3-ifz-gitlab-000:8100/gitlab/api/v4/projects/318/repository/files/ftp%2Fstage%2F15m%2Fheshangwuzhibo.yaml/raw?ref=main] > output attribute > invokehttp.request.url: > [http://hb3-ifz-gitlab-000:8100/gitlab/api/v4/projects/318/repository/files/ftp%252Fstage%252F15m%252Fheshangwuzhibo.yaml/raw?ref=main] > > invokehttp.status.code: 404 > > The situation is different for version 2.0.0-M1, output attribute > invokehttp.request.url: > [http://hb3-ifz-gitlab-000:8100/gitlab/api/v4/projects/318/repository/files/ftp%2Fstage%2F15m%2Fheshangwuzhibo.yaml/raw?ref=main] > > invokehttp.status.code: 200 > > I found that in the M2 version % symbol was urlencoded to %25, M1 version. > The M1 version does not urlencode > > pls refer to the uploaded pictures -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] NIFI-12760 Flow sensitive properties encryption support in toolkit [nifi]
briansolo1985 commented on code in PR #8430: URL: https://github.com/apache/nifi/pull/8430#discussion_r1506112666 ## minifi/minifi-toolkit/minifi-toolkit-encrypt-config/src/main/java/org/apache/nifi/minifi/toolkit/config/command/MiNiFiEncryptConfig.java: ## @@ -40,54 +53,76 @@ import picocli.CommandLine.Option; /** - * Shared Encrypt Configuration for NiFi and NiFi Registry + * Encrypt Configuration for MiNiFi */ @Command( -name = "encrypt-config", -sortOptions = false, -mixinStandardHelpOptions = true, -usageHelpWidth = 160, -separator = " ", -version = { -"Java ${java.version} (${java.vendor} ${java.vm.name} ${java.vm.version})" -}, -descriptionHeading = "Description: ", -description = { -"encrypt-config supports protection of sensitive values in Apache MiNiFi" -} +name = "encrypt-config", +sortOptions = false, +mixinStandardHelpOptions = true, +usageHelpWidth = 160, +separator = " ", +version = { +"Java ${java.version} (${java.vendor} ${java.vm.name} ${java.vm.version})" +}, +descriptionHeading = "Description: ", +description = { +"encrypt-config supports protection of sensitive values in Apache MiNiFi" +} ) -public class MiNiFiEncryptConfig implements Runnable{ +public class MiNiFiEncryptConfig implements Runnable { static final String BOOTSTRAP_ROOT_KEY_PROPERTY = "minifi.bootstrap.sensitive.key"; private static final String WORKING_FILE_NAME_FORMAT = "%s.%d.working"; private static final int KEY_LENGTH = 32; @Option( -names = {"-b", "--bootstrapConf"}, -description = "Path to file containing Bootstrap Configuration [bootstrap.conf] for optional root key and property protection scheme settings" +names = {"-b", "--bootstrapConf"}, +description = "Path to file containing Bootstrap Configuration [bootstrap.conf] for optional root key and property protection scheme settings" ) Path bootstrapConfPath; @Option( -names = {"-B", "--outputBootstrapConf"}, -description = "Path to output file for Bootstrap Configuration [bootstrap.conf] with root key configured" +names = 
{"-B", "--outputBootstrapConf"}, +description = "Path to output file for Bootstrap Configuration [bootstrap.conf] with root key configured" ) Path outputBootstrapConf; +@Option( Review Comment: Sure, added the instructions to the guide -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Reopened] (NIFI-12740) Python Processors sometimes stuck in invalid state: 'Initializing runtime environment'
[ https://issues.apache.org/jira/browse/NIFI-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne reopened NIFI-12740: --- Re-opening issue. While the fix greatly reduced the chances of this happening, I did encounter the issue again. So not all cases are handled correctly. > Python Processors sometimes stuck in invalid state: 'Initializing runtime > environment' > -- > > Key: NIFI-12740 > URL: https://issues.apache.org/jira/browse/NIFI-12740 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 2.0.0-M1, 2.0.0-M2 >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Blocker > Fix For: 2.0.0 > > Time Spent: 20m > Remaining Estimate: 0h > > When creating a Python processor, sometimes the Processor remains in an > invalid state with the message "Initializing runtime environment" > In the logs, we see the following error/stack trace: > {code:java} > 2024-02-05 17:23:30,308 ERROR [Initialize SetRecordField] > org.apache.nifi.NiFi An Unknown Error Occurred in Thread > VirtualThread[#123,Initialize > SetRecordField]/runnable@ForkJoinPool-1-worker-5: > java.lang.NullPointerException: Cannot invoke "java.util.List.stream()" > because "processorTypes" is null > java.lang.NullPointerException: Cannot invoke "java.util.List.stream()" > because "processorTypes" is null > at > org.apache.nifi.py4j.StandardPythonBridge.findExtensionId(StandardPythonBridge.java:322) > at > org.apache.nifi.py4j.StandardPythonBridge.createProcessorBridge(StandardPythonBridge.java:99) > at > org.apache.nifi.py4j.StandardPythonBridge.lambda$createProcessor$3(StandardPythonBridge.java:142) > at > org.apache.nifi.python.processor.PythonProcessorProxy.lambda$new$0(PythonProcessorProxy.java:73) > at java.base/java.lang.VirtualThread.run(VirtualThread.java:309) {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] NIFI-12740: Fixed issue in NiFiPythonGateway that stems from the fact… [nifi]
markap14 opened a new pull request, #8456: URL: https://github.com/apache/nifi/pull/8456 … that the thread adding an object to the JavaObjectBindings was not necessarily the thread removing them. The algorithm that was in place assumed that the same thread would be used, in order to ensure that an object could be unbound before being accessed. The new algorithm binds each new object to all active method invocations and only unbinds the objects after all method invocations complete, regardless of thread. Additionally, found that many method calls could create new proxies on the Python side, just for getter methods whose values don't change. This is very expensive, so introduced a new @Idempotent annotation that can be added to interface methods such that we can cache the value and avoid the expensive overhead. # Summary [NIFI-0](https://issues.apache.org/jira/browse/NIFI-0) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [ ] Pull Request based on current revision of the `main` branch - [ ] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. 
### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 21 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
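The binding algorithm described in the PR summary — bind each new object to all active method invocations, and unbind only after every one of those invocations completes, regardless of thread — can be sketched as a small reference-counting registry. This is illustrative Python, not the actual `NiFiPythonGateway`/`JavaObjectBindings` classes; all names are hypothetical:

```python
import threading


class ObjectBindings:
    """Sketch: an object bound during N active invocations stays bound
    until all N have completed, on whatever threads they finish."""

    def __init__(self):
        self._lock = threading.Lock()
        self._active = set()    # ids of in-flight method invocations
        self._pending = {}      # invocation id -> object ids it pins
        self._refcount = {}     # object id -> number of pinning invocations
        self.objects = {}       # object id -> bound object (the registry)

    def begin_invocation(self, inv_id):
        with self._lock:
            self._active.add(inv_id)
            self._pending[inv_id] = []

    def bind(self, obj_id, obj):
        with self._lock:
            self.objects[obj_id] = obj
            # Pin the object under every currently active invocation.
            for inv_id in self._active:
                self._pending[inv_id].append(obj_id)
            self._refcount[obj_id] = len(self._active)

    def end_invocation(self, inv_id):
        with self._lock:
            self._active.discard(inv_id)
            for obj_id in self._pending.pop(inv_id, ()):
                self._refcount[obj_id] -= 1
                if self._refcount[obj_id] <= 0:
                    self._refcount.pop(obj_id)
                    self.objects.pop(obj_id, None)  # safe to unbind now
```

Because unbinding is driven by the pin count rather than by thread identity, the original bug — the binding thread differing from the unbinding thread — cannot release an object that another in-flight invocation may still access.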
[jira] [Updated] (NIFI-12733) Make Apicurio's groupId optional and configurable and use artifactId instead of name as key
[ https://issues.apache.org/jira/browse/NIFI-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-12733: -- Fix Version/s: (was: 1.26.0) Resolution: Fixed Status: Resolved (was: Patch Available) > Make Apicurio's groupId optional and configurable and use artifactId instead > of name as key > --- > > Key: NIFI-12733 > URL: https://issues.apache.org/jira/browse/NIFI-12733 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Julien G. >Assignee: Julien G. >Priority: Major > Fix For: 2.0.0 > > Time Spent: 40m > Remaining Estimate: 0h > > In NiFi, when looking for a schema in Apicurio, it will extract the > {{groupId}} to use it to retrieve the schema. But in fact, in Apicurio, the > {{groupId}} is optional. So if the {{groupId}} is not set in Apicurio, it > will fail to retrieve the schema. And two schemas can have the same {{id}} but > two different {{groupId}}s. > And currently we are using the {{name}} of the schema to retrieve it, but the > key should be the {{id}} of the schema, because the {{id}} is required to be > unique across the registry but not the {{name}}. > So the {{groupId}} should be optional and settable within the controller > with a dedicated property. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-12733) Make Apicurio's groupId optional and configurable and use artifactId instead of name as key
[ https://issues.apache.org/jira/browse/NIFI-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821674#comment-17821674 ] ASF subversion and git services commented on NIFI-12733: Commit ecea18f79655c0e34949d94609c8909aeb2d093e in nifi's branch refs/heads/main from Juldrixx [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ecea18f796 ] NIFI-12733 Make Apicurio's groupId optional and configurable and use artifactId instead of name as key Signed-off-by: Pierre Villard This closes #8351. > Make Apicurio's groupId optional and configurable and use artifactId instead > of name as key > --- > > Key: NIFI-12733 > URL: https://issues.apache.org/jira/browse/NIFI-12733 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Julien G. >Assignee: Julien G. >Priority: Major > Fix For: 2.0.0, 1.26.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > In NiFi, when looking for a schema in Apicurio, it will extract the > {{groupId}} to use it to retrieve the schema. But in fact, in Apicurio, the > {{groupId}} is optional. So if the {{groupId}} is not set in Apicurio, it > will fail to retrieve the schema. And 2 schema can have the same {{id}} but 2 > different {{groupId}}. > And currently we are using the {{name}} of the schema to retrieve it but the > key should be the {{id}} of the schema because the {{id}} is required to be > unique across the registry but not the {{name}}. > So the {{groupId}} should be optionnal and settable within the controller > with a dedicated property. -- This message was sent by Atlassian Jira (v8.20.10#820010)
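The change described above can be sketched as a URL builder that keys on the unique `artifactId` and falls back to Apicurio's implicit default group when no `groupId` is configured. Illustrative only — the endpoint layout below follows the Apicurio Registry v2 REST API as I understand it, and should be verified against the registry version in use:

```python
def artifact_url(base_url: str, artifact_id: str, group_id: str = None) -> str:
    """Build the lookup URL for a schema artifact.

    - Keys the lookup on artifactId (unique across the registry),
      not on the non-unique schema name.
    - If no groupId is configured, uses the 'default' group instead
      of failing (assumed Apicurio v2 behavior for ungrouped artifacts).
    """
    group = group_id if group_id else "default"
    return f"{base_url}/apis/registry/v2/groups/{group}/artifacts/{artifact_id}"
```

In NiFi terms, `group_id` would come from the new optional controller service property.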
Re: [PR] NIFI-12733 Make Apicurio's groupId optional and configurable and use … [nifi]
asfgit closed pull request #8351: NIFI-12733 Make Apicurio's groupId optional and configurable and use … URL: https://github.com/apache/nifi/pull/8351 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (NIFI-12847) Add Enum data type handling to Iceberg record converter
[ https://issues.apache.org/jira/browse/NIFI-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard resolved NIFI-12847. --- Fix Version/s: 2.0.0 1.26.0 Resolution: Fixed > Add Enum data type handling to Iceberg record converter > --- > > Key: NIFI-12847 > URL: https://issues.apache.org/jira/browse/NIFI-12847 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Bathori >Assignee: Mark Bathori >Priority: Minor > Fix For: 2.0.0, 1.26.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Add Enum data type handling to Iceberg record converter. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-12847) Add Enum data type handling to Iceberg record converter
[ https://issues.apache.org/jira/browse/NIFI-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821672#comment-17821672 ] ASF subversion and git services commented on NIFI-12847: Commit dcc0e5edb78f924776003f3d1187e83b2ec616b1 in nifi's branch refs/heads/support/nifi-1.x from Mark Bathori [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=dcc0e5edb7 ] NIFI-12847: Add Enum data type handling to Iceberg record converter Signed-off-by: Pierre Villard This closes #8453. > Add Enum data type handling to Iceberg record converter > --- > > Key: NIFI-12847 > URL: https://issues.apache.org/jira/browse/NIFI-12847 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Bathori >Assignee: Mark Bathori >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Add Enum data type handling to Iceberg record converter. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] NIFI-12847: Add Enum data type handling to Iceberg record converter [nifi]
asfgit closed pull request #8453: NIFI-12847: Add Enum data type handling to Iceberg record converter URL: https://github.com/apache/nifi/pull/8453 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-12847) Add Enum data type handling to Iceberg record converter
[ https://issues.apache.org/jira/browse/NIFI-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821671#comment-17821671 ] ASF subversion and git services commented on NIFI-12847: Commit c29a744644134bb122dbbddc3eb3d6ba3d98508a in nifi's branch refs/heads/main from Mark Bathori [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=c29a744644 ] NIFI-12847: Add Enum data type handling to Iceberg record converter Signed-off-by: Pierre Villard This closes #8453. > Add Enum data type handling to Iceberg record converter > --- > > Key: NIFI-12847 > URL: https://issues.apache.org/jira/browse/NIFI-12847 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Bathori >Assignee: Mark Bathori >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Add Enum data type handling to Iceberg record converter. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-12847) Add Enum data type handling to Iceberg record converter
[ https://issues.apache.org/jira/browse/NIFI-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-12847: -- Component/s: Extensions > Add Enum data type handling to Iceberg record converter > --- > > Key: NIFI-12847 > URL: https://issues.apache.org/jira/browse/NIFI-12847 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Mark Bathori >Assignee: Mark Bathori >Priority: Minor > Fix For: 2.0.0, 1.26.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Add Enum data type handling to Iceberg record converter. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] MINIFICPP-2202 Start MiNiFi service after Windows install [nifi-minifi-cpp]
lordgamez opened a new pull request, #1736: URL: https://github.com/apache/nifi-minifi-cpp/pull/1736 https://issues.apache.org/jira/browse/MINIFICPP-2202 Depends on https://github.com/apache/nifi-minifi-cpp/pull/1734 - Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFICPP- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically main)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINIFICPP-2203 Add support for building Windows MSI without any redistributables included [nifi-minifi-cpp]
szaszm commented on code in PR #1734: URL: https://github.com/apache/nifi-minifi-cpp/pull/1734#discussion_r1505905580 ## cmake/MiNiFiOptions.cmake: ## Review Comment: What happens if all of these remain off? I'd switch to no redistributables by default, unless the user asks for merge modules or redist DLLs. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] MINIFICPP-2307 Fix libsodium url parameter passing for CMake 3.29 [nifi-minifi-cpp]
lordgamez opened a new pull request, #1735: URL: https://github.com/apache/nifi-minifi-cpp/pull/1735 https://issues.apache.org/jira/browse/MINIFICPP-2307 Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFICPP- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically main)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINIFICPP-2231 Replace global CXX flags with target specific ones [nifi-minifi-cpp]
fgerlits commented on code in PR #1724: URL: https://github.com/apache/nifi-minifi-cpp/pull/1724#discussion_r1505650786

## extensions/coap/CMakeLists.txt:

```diff
@@ -31,8 +31,8 @@ include_directories(../http-curl/)
 file(GLOB CSOURCES "nanofi/*.c")
 file(GLOB SOURCES "*.cpp" "protocols/*.cpp" "processors/*.cpp" "controllerservice/*.cpp" "server/*.cpp" )
-add_library(nanofi-coap-c STATIC ${CSOURCES})
-add_library(minifi-coap SHARED ${SOURCES})
+add_minifi_library(nanofi-coap-c STATIC "${CSOURCES}")
+add_minifi_library(minifi-coap SHARED "${SOURCES}")
```

Review Comment: Are you sure we want to add these new `""`s? I would expect this means that we are now passing the list of sources as a single argument instead of a list of separate arguments; do we need to / do we want to do that?
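For background on the reviewer's question: in CMake, an unquoted variable reference to a list expands into separate arguments, while a quoted reference collapses the list into a single semicolon-joined argument. A small self-contained sketch (hypothetical function and variable names, runnable with `cmake -P`) of the difference:

```cmake
# quoting_demo.cmake -- run with: cmake -P quoting_demo.cmake
set(SRCS a.cpp b.cpp)

function(count_args)
    # ARGC is the number of actual arguments this function received.
    message(STATUS "received ${ARGC} argument(s)")
endfunction()

count_args(${SRCS})     # unquoted: expands to two arguments, a.cpp and b.cpp
count_args("${SRCS}")   # quoted: one argument containing "a.cpp;b.cpp"
```

In many contexts the difference is harmless, because CMake re-splits a semicolon-joined string when it is later expanded unquoted (e.g. inside `add_library`); whether it matters here depends on how `add_minifi_library` forwards its arguments internally, which is exactly what the review comment is probing.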
[jira] [Created] (MINIFICPP-2307) Libsodium download fails with CMake 3.29 on Windows
Gábor Gyimesi created MINIFICPP-2307:

Summary: Libsodium download fails with CMake 3.29 on Windows
Key: MINIFICPP-2307
URL: https://issues.apache.org/jira/browse/MINIFICPP-2307
Project: Apache NiFi MiNiFi C++
Issue Type: Bug
Reporter: Gábor Gyimesi
Assignee: Gábor Gyimesi

Libsodium download fails with the following error:
{code:java}
38>CUSTOMBUILD : error : downloading 'https://download.libsodium.org/libsodium/releases/libsodium-1.0.18.tar.gz https://github.com/jedisct1/libsodium/releases/download/1.0.18-RELEASE/libsodium-1.0.18.tar.gz https://gentoo.osuosl.org/distfiles/libsodium-1.0.18.tar.gz' failed [c:\Users\ggyimesi\nifi-minifi-cpp\build\libsodium-external.vcxproj]
status_code: 3
status_string: "URL using bad/illegal format or missing URL"
log:
--- LOG BEGIN ---
URL rejected: Malformed input to a URL function
Closing connection
--- LOG END ---
{code}

-- This message was sent by Atlassian Jira (v8.20.10#820010)
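The error log shows all three mirror URLs reaching the download step as one space-separated string, which CMake 3.29 rejects as a malformed URL. A hedged sketch of the likely shape of the fix (the variable and target names are assumptions, not the actual MiNiFi build code; the mirror URLs are taken from the log above) is to hold the mirrors in a CMake list and expand it unquoted, so `ExternalProject_Add` receives each mirror as a separate `URL` argument:

{code}
include(ExternalProject)

# Assumed mirror list; the real build files may organize this differently.
set(LIBSODIUM_URLS
    "https://download.libsodium.org/libsodium/releases/libsodium-1.0.18.tar.gz"
    "https://github.com/jedisct1/libsodium/releases/download/1.0.18-RELEASE/libsodium-1.0.18.tar.gz"
    "https://gentoo.osuosl.org/distfiles/libsodium-1.0.18.tar.gz")

ExternalProject_Add(libsodium-external
    # Unquoted expansion passes each mirror as a separate URL argument;
    # joining them into one space-separated string produces the
    # "bad/illegal format" error seen above on CMake 3.29.
    URL ${LIBSODIUM_URLS}
    INSTALL_COMMAND "")
{code}

`ExternalProject_Add` documents `URL` as accepting a list of candidate locations tried in order, so passing a proper list is the supported form; earlier CMake versions were merely more forgiving of the space-joined string.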
[jira] [Assigned] (MINIFICPP-2202) The MSI installer should start the ApacheNiFiMiNiFi service
[ https://issues.apache.org/jira/browse/MINIFICPP-2202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gábor Gyimesi reassigned MINIFICPP-2202:

Assignee: Gábor Gyimesi

> The MSI installer should start the ApacheNiFiMiNiFi service
> Key: MINIFICPP-2202
> URL: https://issues.apache.org/jira/browse/MINIFICPP-2202
> Project: Apache NiFi MiNiFi C++
> Issue Type: Improvement
> Reporter: Ferenc Gerlits
> Assignee: Gábor Gyimesi
> Priority: Minor
>
> At the end of the installation process, is it possible for the Windows MSI
> installer to start the ApacheNiFiMiNiFi service? That would save an
> unnecessary step for the user.
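For reference, MSI packages built with the WiX Toolset can start an installed service at the end of installation via the `ServiceControl` element. A minimal sketch, assuming (not confirmed by this ticket) that the MiNiFi C++ installer is WiX-based, that the service name is `ApacheNiFiMiNiFi` as stated above, and that component IDs and the executable name are hypothetical:

{code}
<!-- Hypothetical WiX 3 fragment; IDs and Source are placeholders. -->
<Component Id="MiNiFiServiceComponent" Guid="*">
  <File Id="MiNiFiExe" Source="minifi.exe" KeyPath="yes" />
  <ServiceInstall Id="MiNiFiServiceInstall"
                  Name="ApacheNiFiMiNiFi"
                  Type="ownProcess"
                  Start="auto"
                  ErrorControl="normal" />
  <!-- Start the service during install; stop and remove it on uninstall. -->
  <ServiceControl Id="MiNiFiServiceControl"
                  Name="ApacheNiFiMiNiFi"
                  Start="install"
                  Stop="both"
                  Remove="uninstall"
                  Wait="no" />
</Component>
{code}

`Wait="no"` lets the installer finish without blocking on service startup; `Wait="yes"` would make a failed service start fail the installation, which may or may not be the desired behavior here.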
[PR] MINIFICPP-2203 Add support for building Windows MSI without any redistributables included [nifi-minifi-cpp]
lordgamez opened a new pull request, #1734: URL: https://github.com/apache/nifi-minifi-cpp/pull/1734

- Also remove unused FORCE_WINDOWS build option