[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #947: MINIFICPP-1401 Read certificates from the Windows system store
szaszm commented on a change in pull request #947:
URL: https://github.com/apache/nifi-minifi-cpp/pull/947#discussion_r536913865

## File path: libminifi/src/utils/tls/DistinguishedName.cpp

@@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/tls/DistinguishedName.h"
+
+#include <algorithm>
+
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace tls {
+
+DistinguishedName::DistinguishedName(const std::vector<std::string>& components) {
+  std::transform(components.begin(), components.end(), std::back_inserter(components_),
+      [](const std::string& component) { return utils::StringUtils::trim(component); });
+  std::sort(components_.begin(), components_.end());
+}
+
+DistinguishedName DistinguishedName::fromCommaSeparated(const std::string& comma_separated_components) {
+  return DistinguishedName{utils::StringUtils::split(comma_separated_components, ",")};
+}
+
+DistinguishedName DistinguishedName::fromSlashSeparated(const std::string& slash_separated_components) {
+  return DistinguishedName{utils::StringUtils::split(slash_separated_components, "/")};

Review comment:
That's a great idea; I'm fine with leaving it as is in the meantime. I think deployments with the most frequent heartbeat intervals are around 1/sec, so it should be fine in either case.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
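To illustrate the normalization the new class performs (trim each component, then sort so that component ordering does not affect equality), here is a rough Python sketch; it is not the MiniFi implementation, and dropping empty components (e.g. the one produced by a leading "/") is an assumption:

```python
def parse_dn(dn, separator):
    # Split on the separator, trim whitespace around each component,
    # drop empty components (assumption: a leading "/" yields one),
    # and sort so that differently-ordered DNs compare equal.
    return sorted(c.strip() for c in dn.split(separator) if c.strip())

# A comma-separated and a slash-separated form of the same DN
# normalize to the same component list.
comma = parse_dn("CN=minifi, O=Apache, C=US", ",")
slash = parse_dn("/C=US/O=Apache/CN=minifi", "/")
```

With this normalization, `comma` and `slash` compare equal even though the input orderings differ.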
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #947: MINIFICPP-1401 Read certificates from the Windows system store
szaszm commented on a change in pull request #947:
URL: https://github.com/apache/nifi-minifi-cpp/pull/947#discussion_r536913327

## File path: libminifi/src/utils/tls/DistinguishedName.cpp

@@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/tls/DistinguishedName.h"
+
+#include <algorithm>
+
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace tls {
+
+DistinguishedName::DistinguishedName(const std::vector<std::string>& components) {
+  std::transform(components.begin(), components.end(), std::back_inserter(components_),
+      [](const std::string& component) { return utils::StringUtils::trim(component); });
+  std::sort(components_.begin(), components_.end());
+}
+
+DistinguishedName DistinguishedName::fromCommaSeparated(const std::string& comma_separated_components) {
+  return DistinguishedName{utils::StringUtils::split(comma_separated_components, ",")};
+}
+
+DistinguishedName DistinguishedName::fromSlashSeparated(const std::string& slash_separated_components) {
+  return DistinguishedName{utils::StringUtils::split(slash_separated_components, "/")};
+}
+
+utils::optional<std::string> DistinguishedName::getCN() const {
+  const auto it = std::find_if(components_.begin(), components_.end(),
+      [](const std::string& component) { return component.substr(0, 3) == "CN="; });

Review comment:
Adam, you're right, I didn't think about SSO. From a readability standpoint, both are good IMO, so I'm fine with either. Thanks.
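The getCN lookup discussed above scans for the first component carrying a "CN=" prefix and returns the value after it. A minimal Python sketch of the same idea (not the MiniFi code; returning None stands in for an empty optional):

```python
def get_cn(components):
    # Return the value of the first "CN=" component, or None when no
    # such component exists (an empty optional in the C++ version).
    for component in components:
        if component.startswith("CN="):
            return component[len("CN="):]
    return None
```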
[jira] [Assigned] (NIFI-8045) Avro Schemas should support 'fixed' field types
[ https://issues.apache.org/jira/browse/NIFI-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

HondaWei reassigned NIFI-8045:
------------------------------

    Assignee: HondaWei

> Avro Schemas should support 'fixed' field types
> -----------------------------------------------
>
>                 Key: NIFI-8045
>                 URL: https://issues.apache.org/jira/browse/NIFI-8045
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.11.4
>            Reporter: Chris Sampson
>            Assignee: HondaWei
>            Priority: Major
>         Attachments: avro-fixed.json, avro-fixed.xml
>
> Attempts to use the Avro schema's "fixed" type result in an error:
> {code:java}
> 2020-11-25 15:36:28,187 ERROR [Timer-Driven Process Thread-5] o.a.n.processors.standard.ValidateRecord ValidateRecord[id=0008e1f7-0176-1000--dc31ab2d] Failed to process StandardFlowFileRecord[uuid=fd2c0bb8-6d2e-4468-a1da-cb862ff33f61,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1606318132008-3243, container=default, section=171], offset=552175, length=47],offset=0,name=fd2c0bb8-6d2e-4468-a1da-cb862ff33f61,size=47]; will route to failure: org.apache.avro.SchemaParseException: "fixed" is not a defined name. The type of the "id" field must be a defined name or a {"type": ...} expression.
> org.apache.avro.SchemaParseException: "fixed" is not a defined name. The type of the "id" field must be a defined name or a {"type": ...} expression.
>     at org.apache.avro.Schema.parse(Schema.java:1265)
>     at org.apache.avro.Schema$Parser.parse(Schema.java:1032)
>     at org.apache.avro.Schema$Parser.parse(Schema.java:1020)
>     at org.apache.nifi.processors.standard.ValidateRecord.getValidationSchema(ValidateRecord.java:478)
>     at org.apache.nifi.processors.standard.ValidateRecord.onTrigger(ValidateRecord.java:272)
>     at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>     at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
>     at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
>     at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>     at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> {code}
> The attached template/flow re-creates this with a simple generated JSON payload passed to a ValidateRecord processor.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
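For reference on the error above: Avro does not allow a field type to be the bare string "fixed"; a fixed type must be declared as a full named type with a "name" and a "size". A sketch of a schema shape the parser accepts (record, field, and type names are illustrative, not taken from the attached flow):

```python
import json

# The failing schema referenced "fixed" as a bare type name; a fixed
# type must instead be declared inline with "name" and "size".
schema = {
    "type": "record",
    "name": "Example",
    "fields": [
        {"name": "id", "type": {"type": "fixed", "name": "IdFixed", "size": 16}},
    ],
}

# Serialize to the JSON text form a schema property would hold.
schema_text = json.dumps(schema)
```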
[GitHub] [nifi] MikeThomsen commented on a change in pull request #4704: NIFI-7906: Implemented RecordSetWriter support for ExecuteGraphQueryRecord
MikeThomsen commented on a change in pull request #4704:
URL: https://github.com/apache/nifi/pull/4704#discussion_r536841700

## File path: nifi-nar-bundles/nifi-graph-bundle/nifi-graph-processors/src/main/java/org/apache/nifi/processors/graph/ExecuteGraphQueryRecord.java

@@ -199,56 +200,70 @@ public void onTrigger(final ProcessContext context, final ProcessSession session
                 .getValue()))
         );
-        boolean failed = false;
-        long delta = 0;
+        long delta;
+        FlowFile failedRecords = session.create(input);
+        WriteResult failedWriteResult = null;
         try (InputStream is = session.read(input);
              RecordReader reader = recordReaderFactory.createRecordReader(input, is, getLogger());
+             OutputStream os = session.write(failedRecords);
+             RecordSetWriter failedWriter = recordSetWriterFactory.createWriter(getLogger(), reader.getSchema(), os, input.getAttributes())
         ) {
             Record record;
-            long start = System.currentTimeMillis();
+            failedWriter.beginRecordSet();
             while ((record = reader.nextRecord()) != null) {
                 FlowFile graph = session.create(input);
-                List<Map<String, Object>> graphResponses = new ArrayList<>();
-
-                Map<String, Object> dynamicPropertyMap = new HashMap<>();
-                for (String entry : dynamic.keySet()) {
-                    if(!dynamicPropertyMap.containsKey(entry)) {
+                try {
+                    Map<String, Object> dynamicPropertyMap = new HashMap<>();
+                    for (String entry : dynamic.keySet()) {
+                        if (!dynamicPropertyMap.containsKey(entry)) {
                             dynamicPropertyMap.put(entry, getRecordValue(record, dynamic.get(entry)));
                         }
-                }
+                    }
-                dynamicPropertyMap.putAll(input.getAttributes());
-                graphResponses.addAll(executeQuery(recordScript, dynamicPropertyMap));
+                    dynamicPropertyMap.putAll(input.getAttributes());
+                    List<Map<String, Object>> graphResponses = new ArrayList<>(executeQuery(recordScript, dynamicPropertyMap));
-                OutputStream graphOutputStream = session.write(graph);
-                String graphOutput = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(graphResponses);
-                graphOutputStream.write(graphOutput.getBytes(StandardCharsets.UTF_8));
-                graphList.add(graph);
-                graphOutputStream.close();
+                    OutputStream graphOutputStream = session.write(graph);
+                    String graphOutput = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(graphResponses);
+                    graphOutputStream.write(graphOutput.getBytes(StandardCharsets.UTF_8));
+                    graphOutputStream.close();
+                    session.transfer(graph, GRAPH);
+                } catch (Exception e) {
+                    // write failed records to a flowfile destined for the failure relationship

Review comment:
It would be good to have a debug logger statement here so users can easily turn on logging for record failures.
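The pattern suggested in this review, log each per-record failure at debug level before routing the record to failure, can be sketched outside NiFi like this (a Python stand-in, not the processor's code; function and logger names are hypothetical):

```python
import logging

logger = logging.getLogger("ExecuteGraphQueryRecord")

def process_record(record, execute_query):
    # On success the query responses go to the graph output; on failure
    # the record is kept for the failure flowfile, and a debug-level log
    # line lets users opt in to detailed failure logging.
    try:
        return ("success", execute_query(record))
    except Exception as exc:
        logger.debug("Failed to execute query for record %r: %s", record, exc)
        return ("failure", record)
```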
[jira] [Updated] (NIFI-8073) SiteToSiteStatusReportingTask - isBackPressureEnabled field not set correctly
[ https://issues.apache.org/jira/browse/NIFI-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mohsan updated NIFI-8073:
-------------------------
    Status: Patch Available  (was: Open)

corrected condition for label isBackPressureEnabled.

> SiteToSiteStatusReportingTask - isBackPressureEnabled field not set correctly
> -----------------------------------------------------------------------------
>
>                 Key: NIFI-8073
>                 URL: https://issues.apache.org/jira/browse/NIFI-8073
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.12.1, 1.9.0
>            Reporter: Mohsan
>            Priority: Minor
>         Attachments: 0001-NIFI-8073-corrected-condition-for-label-isBackPressu.patch
>
> SiteToSiteStatusReportingTask 1.9.0.1.0.1.0-12
> The reported field isBackPressureEnabled (for Connections) is set to "true" if back pressure is reached due to the queue hitting its maximum flowfile count. But if the queue reaches the maximum amount of data it can hold, the field isBackPressureEnabled reports "false", even though it should be "true".
> Assumption: the check on queue capacity (bytes) is missing.
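The fix described in this issue amounts to treating back pressure as enabled when either threshold is reached. A hedged sketch of the corrected condition (names are illustrative, not the reporting task's actual fields):

```python
def is_back_pressure_enabled(queued_count, queued_bytes,
                             object_threshold, size_threshold_bytes):
    # Back pressure applies when either limit is hit: the flowfile
    # count threshold or the queued data size (bytes) threshold.
    # The reported bug was that only the count check was present.
    return queued_count >= object_threshold or queued_bytes >= size_threshold_bytes
```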
[jira] [Updated] (NIFI-8073) SiteToSiteStatusReportingTask - isBackPressureEnabled field not set correctly
[ https://issues.apache.org/jira/browse/NIFI-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mohsan updated NIFI-8073:
-------------------------
    Attachment: 0001-NIFI-8073-corrected-condition-for-label-isBackPressu.patch

> SiteToSiteStatusReportingTask - isBackPressureEnabled field not set correctly
> -----------------------------------------------------------------------------
>
>                 Key: NIFI-8073
>                 URL: https://issues.apache.org/jira/browse/NIFI-8073
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.9.0, 1.12.1
>            Reporter: Mohsan
>            Priority: Minor
>         Attachments: 0001-NIFI-8073-corrected-condition-for-label-isBackPressu.patch
>
> SiteToSiteStatusReportingTask 1.9.0.1.0.1.0-12
> The reported field isBackPressureEnabled (for Connections) is set to "true" if back pressure is reached due to the queue hitting its maximum flowfile count. But if the queue reaches the maximum amount of data it can hold, the field isBackPressureEnabled reports "false", even though it should be "true".
> Assumption: the check on queue capacity (bytes) is missing.
[jira] [Commented] (NIFI-7899) InvokeHTTP does not timeout
[ https://issues.apache.org/jira/browse/NIFI-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17244461#comment-17244461 ]

HondaWei commented on NIFI-7899:
--------------------------------

Hi, have you tried testing your flow by invoking a different RESTful service? The problem may lie with the remote service.

> InvokeHTTP does not timeout
> ---------------------------
>
>                 Key: NIFI-7899
>                 URL: https://issues.apache.org/jira/browse/NIFI-7899
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 1.11.4
>         Environment: Ubuntu 18.04. Nifi 1.11.4.
>                      4 core, 8GB mem. Java set to 4GB mem
>            Reporter: Jens M Kofoed
>            Priority: Major
>
> We have some issues with the InvokeHTTP processor. It "randomly" hangs without timing out. The processor shows that there is 1 task running (upper right corner) and it can run for hours without any output, but with multiple flowfiles in the queue.
> Trying to stop it takes forever, so I have to terminate it, restart the processor, and everything works fine for a long time, until the next time it hangs.
> Our configuration of the processor is as follows:
> Penalty: 30s, Yield: 1s
> Scheduling: timer driven, Concurrent Tasks: 1, Run Schedule: 0, Run Duration: 0
> HTTP Method: GET
> Connection timeout: 5s
> Read timeout: 15s
> Idle Timeout: 5m
> Max Idle Connections: 5
> I could not find any other bug reports here, but other people mention the same issue:
> [https://webcache.googleusercontent.com/search?q=cache:LMqcymQiM-IJ:https://community.cloudera.com/t5/Support-Questions/InvokeHTTP-randomly-hangs/td-p/296184+=1=da=clnk=dk]