[GitHub] [nifi-registry] praneethkumarpidugu commented on pull request #177: NIFIREG-260 Rebase capability to support branching workflow

2020-12-03 Thread GitBox


praneethkumarpidugu commented on pull request #177:
URL: https://github.com/apache/nifi-registry/pull/177#issuecomment-738600779


   Hi, what is the status of this PR?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-8071) Implement minimal formatting in comments

2020-12-03 Thread Dmitry Belyavsky (Jira)
Dmitry Belyavsky created NIFI-8071:
--

 Summary: Implement minimal formatting in comments
 Key: NIFI-8071
 URL: https://issues.apache.org/jira/browse/NIFI-8071
 Project: Apache NiFi
  Issue Type: Wish
  Components: Core Framework
Reporter: Dmitry Belyavsky


It would be great to be able to add at least some formatting to the comments on 
processors/processor groups.

At a minimum, it ought to honor new lines when displayed in the pop-up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-8050) Custom Groovy writer breaks during upgrade

2020-12-03 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess resolved NIFI-8050.

Fix Version/s: 1.10.0
 Assignee: Matt Burgess
   Resolution: Fixed

The RecordSetWriter API was changed to include a Map<String, String> argument in 
the createWriter() method. The Migration Guide 
(https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance) has been 
updated to document this change.
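
For reference, a minimal sketch of the method a scripted factory now needs to 
implement (the signature follows the error message quoted in the issue below; 
parameter names are illustrative):
{code:java}
@Override
RecordSetWriter createWriter(final ComponentLog logger, final RecordSchema schema,
        final OutputStream out, final Map<String, String> variables)
        throws SchemaNotFoundException, IOException {
    // Delegate to the script's writer implementation, as in the original example.
    return new GroovyRecordSetWriter(out);
}
{code}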

> Custom Groovy writer breaks during upgrade
> --
>
> Key: NIFI-8050
> URL: https://issues.apache.org/jira/browse/NIFI-8050
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.10.0
>
>
> A couple of issues when upgrading NiFi and using a custom scripted writer 
> with Groovy.
> The scripted writer was something like: 
> {code:java}
> import ...
> class GroovyRecordSetWriter implements RecordSetWriter {
> ...
> @Override
> WriteResult write(Record r) throws IOException {
> ...
> }
> @Override
> String getMimeType() { ... }
> @Override
> WriteResult write(final RecordSet rs) throws IOException {
> ...
> }
> public void beginRecordSet() throws IOException { ... }
> @Override
> public WriteResult finishRecordSet() throws IOException { ... }
> @Override
> public void close() throws IOException {}
> @Override
> public void flush() throws IOException {}
> }
> class GroovyRecordSetWriterFactory extends AbstractControllerService 
> implements RecordSetWriterFactory {
> @Override
> RecordSchema getSchema(Map<String, String> variables, RecordSchema 
> readSchema) throws SchemaNotFoundException, IOException {
>null
> }
> @Override
> RecordSetWriter createWriter(ComponentLog logger, RecordSchema schema, 
> OutputStream out) throws SchemaNotFoundException, IOException {
>new GroovyRecordSetWriter(out)
> }
> }
> writer = new GroovyRecordSetWriterFactory()
> {code}
> With NIFI-6318 we changed a method in the interface RecordSetWriterFactory.
> When using the above code in NiFi 1.9.2, it works fine but after an upgrade 
> on 1.11.4, this breaks. The Controller Service, when enabled, is throwing the 
> below message:
> {quote}Can't have an abstract method in a non-abstract class. The class 
> 'GroovyRecordSetWriterFactory' must be declared abstract or the method 
> 'org.apache.nifi.serialization.RecordSetWriter 
> createWriter(org.apache.nifi.logging.ComponentLog, 
> org.apache.nifi.serialization.record.RecordSchema, java.io.OutputStream, 
> java.util.Map)' must be implemented.
> {quote}
> However the controller service is successfully enabled and the processors 
> referencing it can be started. When using the ConvertRecord processor with 
> the problematic controller service, it will throw the below NPE:
> {code:java}
> 2020-11-26 15:46:13,876 ERROR [Timer-Driven Process Thread-25] 
> o.a.n.processors.standard.ConvertRecord 
> ConvertRecord[id=8b5456ae-71dc-3bd3-d0c0-df50d196fc00] Failed to process 
> StandardFlowFileRecord[uuid=adebfcf6-b449-4d01-90a7-0463930aade0,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1606401933295-1, container=default, 
> section=1], offset=80, 
> length=296],offset=0,name=adebfcf6-b449-4d01-90a7-0463930aade0,size=296]; 
> will route to failure: java.lang.NullPointerException 
> java.lang.NullPointerException: null at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:151)
>  at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2986)
>  at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor.onTrigger(AbstractRecordProcessor.java:122)
>  at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)
>  at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
>  at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>  at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> 

[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #947: MINIFICPP-1401 Read certificates from the Windows system store

2020-12-03 Thread GitBox


szaszm commented on a change in pull request #947:
URL: https://github.com/apache/nifi-minifi-cpp/pull/947#discussion_r535539954



##
File path: libminifi/src/utils/tls/DistinguishedName.cpp
##
@@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/tls/DistinguishedName.h"
+
+#include <algorithm>
+
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace tls {
+
+DistinguishedName::DistinguishedName(const std::vector<std::string>& components) {
+  std::transform(components.begin(), components.end(), std::back_inserter(components_),
+      [](const std::string& component) { return utils::StringUtils::trim(component); });
+  std::sort(components_.begin(), components_.end());
+}
+
+DistinguishedName DistinguishedName::fromCommaSeparated(const std::string& comma_separated_components) {
+  return DistinguishedName{utils::StringUtils::split(comma_separated_components, ",")};
+}
+
+DistinguishedName DistinguishedName::fromSlashSeparated(const std::string& slash_separated_components) {
+  return DistinguishedName{utils::StringUtils::split(slash_separated_components, "/")};
+}
+
+utils::optional<std::string> DistinguishedName::getCN() const {
+  const auto it = std::find_if(components_.begin(), components_.end(),
+      [](const std::string& component) { return component.substr(0, 3) == "CN="; });

Review comment:
   Not sure how often this would be called, but this is a way to do the same 
without a temporary allocation/deallocation:
   ```suggestion
      [](const std::string& component) { return component.compare(0, 3, "CN=") == 0; });
   ```

##
File path: libminifi/src/utils/tls/ExtendedKeyUsage.cpp
##
@@ -0,0 +1,104 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifdef OPENSSL_SUPPORT
+
+#include "utils/tls/ExtendedKeyUsage.h"
+
+#include <openssl/x509v3.h>
+
+#include <array>
+#include <cassert>
+#include <climits>
+
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace tls {
+
+namespace {
+
+struct KeyValuePair {
+  const char* key;
+  uint8_t value;
+};
+constexpr std::array<KeyValuePair, 6> EXT_KEY_USAGE_NAME_TO_BIT_POS{{
+    KeyValuePair{"Server Authentication", 1},
+    KeyValuePair{"Client Authentication", 2},
+    KeyValuePair{"Code Signing", 3},
+    KeyValuePair{"Secure Email", 4},
+    KeyValuePair{"Time Stamping", 8},
+    KeyValuePair{"OCSP Signing", 9}
+}};
+
+}  // namespace
+
+void EXTENDED_KEY_USAGE_deleter::operator()(EXTENDED_KEY_USAGE* key_usage) const { EXTENDED_KEY_USAGE_free(key_usage); }
+
+ExtendedKeyUsage::ExtendedKeyUsage() : logger_(core::logging::LoggerFactory<ExtendedKeyUsage>::getLogger()) {}
+
+ExtendedKeyUsage::ExtendedKeyUsage(const EXTENDED_KEY_USAGE& key_usage_asn1) : ExtendedKeyUsage{} {
+  const int num_oids = sk_ASN1_OBJECT_num(&key_usage_asn1);
+  for (int i = 0; i < num_oids; ++i) {
+    const ASN1_OBJECT* const oid = sk_ASN1_OBJECT_value(&key_usage_asn1, i);
+    assert(oid && oid->length > 0);
+    const unsigned char bit_pos = oid->data[oid->length - 1];
+    if (bit_pos < CHAR_BIT * sizeof(bits_)) {
+      bits_ |= (1 << bit_pos);
+    }
+  }
+}
+
+ExtendedKeyUsage::ExtendedKeyUsage(const std::string& key_usage_str) : ExtendedKeyUsage{} {
+  const std::vector<std::string> key_usages = utils::StringUtils::split(key_usage_str, ",");
+  for (const auto& key_usage : key_usages) {
+const 

[GitHub] [nifi] MikeThomsen commented on a change in pull request #4702: NIFI-8063: Added profile (enabled) to include most NARs, can be disabled

2020-12-03 Thread GitBox


MikeThomsen commented on a change in pull request #4702:
URL: https://github.com/apache/nifi/pull/4702#discussion_r535737353



##
File path: nifi-assembly/pom.xml
##
@@ -451,356 +218,597 @@ language governing permissions and limitations under the License. -->
         </dependency>
         <dependency>
             <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-azure-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-azure-services-api-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-scripting-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-groovyx-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-elasticsearch-nar</artifactId>
+            <artifactId>nifi-registry-nar</artifactId>
             <version>1.13.0-SNAPSHOT</version>
             <type>nar</type>
         </dependency>
         <dependency>
             <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-elasticsearch-client-service-api-nar</artifactId>
+            <artifactId>nifi-record-serialization-services-nar</artifactId>
             <version>1.13.0-SNAPSHOT</version>
             <type>nar</type>
         </dependency>
         <dependency>
             <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-elasticsearch-client-service-nar</artifactId>
+            <artifactId>nifi-tcp-nar</artifactId>
             <version>1.13.0-SNAPSHOT</version>
             <type>nar</type>
         </dependency>
         <dependency>
             <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-elasticsearch-restapi-nar</artifactId>
+            <artifactId>nifi-kerberos-credentials-service-nar</artifactId>
             <version>1.13.0-SNAPSHOT</version>
             <type>nar</type>
         </dependency>
         <dependency>
             <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-lumberjack-nar</artifactId>
+            <artifactId>nifi-proxy-configuration-nar</artifactId>
             <version>1.13.0-SNAPSHOT</version>
             <type>nar</type>
         </dependency>
+
+
         <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-beats-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
+            <groupId>javax.xml.bind</groupId>
+            <artifactId>jaxb-api</artifactId>
+            <version>2.3.0</version>
         </dependency>
         <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-cybersecurity-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
+            <groupId>com.sun.xml.bind</groupId>
+            <artifactId>jaxb-impl</artifactId>
+            <version>2.3.0</version>
         </dependency>
         <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-email-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
+            <groupId>com.sun.xml.bind</groupId>
+            <artifactId>jaxb-core</artifactId>
+            <version>2.3.0</version>
         </dependency>
         <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-amqp-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
+            <groupId>javax.annotation</groupId>
+            <artifactId>javax.annotation-api</artifactId>
+            <version>1.3.2</version>
         </dependency>
         <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-splunk-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
+            <groupId>javax.activation</groupId>
+            <artifactId>javax.activation-api</artifactId>
+            <version>1.2.0</version>
         </dependency>
+
         <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-jms-cf-service-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-jms-processors-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-cassandra-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-cassandra-services-api-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-cassandra-services-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-spring-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-registry-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-hive-services-api-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-hive-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-site-to-site-reporting-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-record-serialization-services-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-mqtt-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-snmp-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-evtx-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-slack-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-smb-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-            <artifactId>nifi-windows-event-log-nar</artifactId>
-            <version>1.13.0-SNAPSHOT</version>
-            <type>nar</type>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.nifi</groupId>
-

[jira] [Updated] (NIFI-8050) Custom Groovy writer breaks during upgrade

2020-12-03 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-8050:
---
Affects Version/s: (was: 1.12.1)
   (was: 1.11.4)
   (was: 1.11.3)
   (was: 1.11.2)
   (was: 1.11.1)
   (was: 1.12.0)
   (was: 1.11.0)
   (was: 1.10.0)

> Custom Groovy writer breaks during upgrade
> --
>
> Key: NIFI-8050
> URL: https://issues.apache.org/jira/browse/NIFI-8050
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Pierre Villard
>Priority: Major
>
> A couple of issues when upgrading NiFi and using a custom scripted writer 
> with Groovy.
> The scripted writer was something like: 
> {code:java}
> import ...
> class GroovyRecordSetWriter implements RecordSetWriter {
> ...
> @Override
> WriteResult write(Record r) throws IOException {
> ...
> }
> @Override
> String getMimeType() { ... }
> @Override
> WriteResult write(final RecordSet rs) throws IOException {
> ...
> }
> public void beginRecordSet() throws IOException { ... }
> @Override
> public WriteResult finishRecordSet() throws IOException { ... }
> @Override
> public void close() throws IOException {}
> @Override
> public void flush() throws IOException {}
> }
> class GroovyRecordSetWriterFactory extends AbstractControllerService 
> implements RecordSetWriterFactory {
> @Override
> RecordSchema getSchema(Map<String, String> variables, RecordSchema 
> readSchema) throws SchemaNotFoundException, IOException {
>null
> }
> @Override
> RecordSetWriter createWriter(ComponentLog logger, RecordSchema schema, 
> OutputStream out) throws SchemaNotFoundException, IOException {
>new GroovyRecordSetWriter(out)
> }
> }
> writer = new GroovyRecordSetWriterFactory()
> {code}
> With NIFI-6318 we changed a method in the interface RecordSetWriterFactory.
> When using the above code in NiFi 1.9.2, it works fine but after an upgrade 
> on 1.11.4, this breaks. The Controller Service, when enabled, is throwing the 
> below message:
> {quote}Can't have an abstract method in a non-abstract class. The class 
> 'GroovyRecordSetWriterFactory' must be declared abstract or the method 
> 'org.apache.nifi.serialization.RecordSetWriter 
> createWriter(org.apache.nifi.logging.ComponentLog, 
> org.apache.nifi.serialization.record.RecordSchema, java.io.OutputStream, 
> java.util.Map)' must be implemented.
> {quote}
> However the controller service is successfully enabled and the processors 
> referencing it can be started. When using the ConvertRecord processor with 
> the problematic controller service, it will throw the below NPE:
> {code:java}
> 2020-11-26 15:46:13,876 ERROR [Timer-Driven Process Thread-25] 
> o.a.n.processors.standard.ConvertRecord 
> ConvertRecord[id=8b5456ae-71dc-3bd3-d0c0-df50d196fc00] Failed to process 
> StandardFlowFileRecord[uuid=adebfcf6-b449-4d01-90a7-0463930aade0,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1606401933295-1, container=default, 
> section=1], offset=80, 
> length=296],offset=0,name=adebfcf6-b449-4d01-90a7-0463930aade0,size=296]; 
> will route to failure: java.lang.NullPointerException 
> java.lang.NullPointerException: null at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:151)
>  at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2986)
>  at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor.onTrigger(AbstractRecordProcessor.java:122)
>  at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)
>  at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
>  at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>  at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at 

[jira] [Commented] (NIFI-8050) Custom Groovy writer breaks during upgrade

2020-12-03 Thread Matt Burgess (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243593#comment-17243593
 ] 

Matt Burgess commented on NIFI-8050:


I verified this is no longer an issue on 1.13 (the main branch at the time of 
this writing): the CS will remain in the Enabling state and report an error, and 
even though the processor can start, it will fail, reporting that the CS is 
still Enabling. I have written NIFI-8069 to cover disabling the processor 
before the CS has been successfully enabled.

> Custom Groovy writer breaks during upgrade
> --
>
> Key: NIFI-8050
> URL: https://issues.apache.org/jira/browse/NIFI-8050
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0, 1.11.0, 1.12.0, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 
> 1.12.1
>Reporter: Pierre Villard
>Priority: Major
>
> A couple of issues when upgrading NiFi and using a custom scripted writer 
> with Groovy.
> The scripted writer was something like: 
> {code:java}
> import ...
> class GroovyRecordSetWriter implements RecordSetWriter {
> ...
> @Override
> WriteResult write(Record r) throws IOException {
> ...
> }
> @Override
> String getMimeType() { ... }
> @Override
> WriteResult write(final RecordSet rs) throws IOException {
> ...
> }
> public void beginRecordSet() throws IOException { ... }
> @Override
> public WriteResult finishRecordSet() throws IOException { ... }
> @Override
> public void close() throws IOException {}
> @Override
> public void flush() throws IOException {}
> }
> class GroovyRecordSetWriterFactory extends AbstractControllerService 
> implements RecordSetWriterFactory {
> @Override
> RecordSchema getSchema(Map<String, String> variables, RecordSchema 
> readSchema) throws SchemaNotFoundException, IOException {
>null
> }
> @Override
> RecordSetWriter createWriter(ComponentLog logger, RecordSchema schema, 
> OutputStream out) throws SchemaNotFoundException, IOException {
>new GroovyRecordSetWriter(out)
> }
> }
> writer = new GroovyRecordSetWriterFactory()
> {code}
> With NIFI-6318 we changed a method in the interface RecordSetWriterFactory.
> When using the above code in NiFi 1.9.2, it works fine but after an upgrade 
> on 1.11.4, this breaks. The Controller Service, when enabled, is throwing the 
> below message:
> {quote}Can't have an abstract method in a non-abstract class. The class 
> 'GroovyRecordSetWriterFactory' must be declared abstract or the method 
> 'org.apache.nifi.serialization.RecordSetWriter 
> createWriter(org.apache.nifi.logging.ComponentLog, 
> org.apache.nifi.serialization.record.RecordSchema, java.io.OutputStream, 
> java.util.Map)' must be implemented.
> {quote}
> However the controller service is successfully enabled and the processors 
> referencing it can be started. When using the ConvertRecord processor with 
> the problematic controller service, it will throw the below NPE:
> {code:java}
> 2020-11-26 15:46:13,876 ERROR [Timer-Driven Process Thread-25] 
> o.a.n.processors.standard.ConvertRecord 
> ConvertRecord[id=8b5456ae-71dc-3bd3-d0c0-df50d196fc00] Failed to process 
> StandardFlowFileRecord[uuid=adebfcf6-b449-4d01-90a7-0463930aade0,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1606401933295-1, container=default, 
> section=1], offset=80, 
> length=296],offset=0,name=adebfcf6-b449-4d01-90a7-0463930aade0,size=296]; 
> will route to failure: java.lang.NullPointerException 
> java.lang.NullPointerException: null at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:151)
>  at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2986)
>  at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor.onTrigger(AbstractRecordProcessor.java:122)
>  at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)
>  at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
>  at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>  at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> 

[jira] [Commented] (NIFI-8070) Add a coalesce function to RecordPath

2020-12-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243551#comment-17243551
 ] 

ASF subversion and git services commented on NIFI-8070:
---

Commit d84583690f9323932cce851679770dea9d7a435f in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=d845836 ]

NIFI-8070: Added coalesce function to RecordPath


> Add a coalesce function to RecordPath
> -
>
> Key: NIFI-8070
> URL: https://issues.apache.org/jira/browse/NIFI-8070
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> There are times when it is necessary to extract one of a few different fields 
> from a Record, whichever is not null. We should add a coalesce function, 
> similar to the analog in SQL, that will return the first non-null value in a 
> sequence of arguments. For example, given the JSON:
> {code:java}
> {
>   "id": "1234",
>   "name": null
> }{code}
> The path `coalesce(/id, /name)` should return the `id` field. But given the 
> JSON:
> {code:java}
> {
>   "id": null,
>   "name": "John Doe"
> }{code}
> The same path should return the `name` field.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] markap14 commented on pull request #4708: NIFI-8070: Added coalesce function to RecordPath

2020-12-03 Thread GitBox


markap14 commented on pull request #4708:
URL: https://github.com/apache/nifi/pull/4708#issuecomment-738370638


   Thanks for reviewing @exceptionfactory. Will merge to main.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 merged pull request #4708: NIFI-8070: Added coalesce function to RecordPath

2020-12-03 Thread GitBox


markap14 merged pull request #4708:
URL: https://github.com/apache/nifi/pull/4708


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] turcsanyip commented on a change in pull request #4697: NIFI-7989: Add support to UpdateHiveTable for creating external tables

2020-12-03 Thread GitBox


turcsanyip commented on a change in pull request #4697:
URL: https://github.com/apache/nifi/pull/4697#discussion_r535663284



##
File path: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive3-processors/src/main/java/org/apache/nifi/processors/hive/UpdateHive3Table.java
##
@@ -322,29 +411,55 @@ private synchronized void checkAndUpdateTableSchema(final 
ProcessSession session
 s.execute(createTableSql);
 }
 
-// Now that the table is created, describe it and determine 
its location (for placing the flowfile downstream)
-String describeTable = "DESC FORMATTED " + tableName;
-ResultSet tableInfo = s.executeQuery(describeTable);
-boolean moreRows = tableInfo.next();
-boolean locationFound = false;
-while (moreRows && !locationFound) {
-String line = tableInfo.getString(1);
-if (line.startsWith("Location:")) {
-locationFound = true;
-continue; // Don't do a next() here, need to get the 
second column value
-}
-moreRows = tableInfo.next();
-}
-outputPath = tableInfo.getString(2);
+tableCreated = true;
+}
 
-} else {
-List<String> hiveColumns = new ArrayList<>();
+
+// Process the table (columns, partitions, location, etc.)
+List<String> hiveColumns = new ArrayList<>();
-String describeTable = "DESC FORMATTED " + tableName;
-ResultSet tableInfo = s.executeQuery(describeTable);
-// Result is 3 columns, col_name, data_type, comment. Check 
the first row for a header and skip if so, otherwise add column name
+String describeTable = "DESC FORMATTED " + tableName;

Review comment:
   Backticks are missing around the table name, which leads to an error for 
table names like `_test_table`, e.g.
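
   One way to apply this (a sketch; `tableName` and the statement `s` are as in 
the diff above):
   ```java
   // Quote the table name with backticks so reserved or underscore-prefixed names parse.
   String describeTable = "DESC FORMATTED `" + tableName + "`";
   ResultSet tableInfo = s.executeQuery(describeTable);
   ```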





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (NIFI-1121) Allow components' properties to depend on one another

2020-12-03 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne resolved NIFI-1121.
--
Resolution: Fixed

> Allow components' properties to depend on one another
> -
>
> Key: NIFI-1121
> URL: https://issues.apache.org/jira/browse/NIFI-1121
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.11.4
>Reporter: Mark Payne
>Assignee: M Tien
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Concept: A Processor developer (or Controller Service or Reporting Task 
> developer) should be able to indicate when building a PropertyDescriptor that 
> the property is "dependent on" another Property. If Property A depends on 
> Property B, then the following should happen:
> Property A should not be shown in the Configure dialog unless a value is 
> selected for Property B. Additionally, if Property A is dependent on 
> particular values of Property B, then Property A should be shown only if 
> Property B is set to one of those values.
> For example, in Compress Content, the "Compression Level" property should be 
> dependent on the "Mode" property being set to "Compress." This means that if 
> the "Mode" property is set to Decompress, then the UI would not show the 
> Compression Level property. This will be far less confusing for users, as it 
> will allow the UI to hide properties that are irrelevant based on the 
> configuration.
> Additionally, if Property A depends on Property B and Property A is required, 
> then a valid value must be set for Property A ONLY if Property B is set to a 
> value that Property A depends on. I.e., in the example above, the Compression 
> Level property can be required, but if the Mode is not set to Compress, then 
> it doesn't matter if the Compression Level property is set to a valid value - 
> the Processor will still be valid, because Compression Level is not a 
> relevant property in this case.
> This allows developers to provide validation much more easily, as many 
> times the developer currently must implement the customValidate method to 
> ensure that if Property A is set that Property B must also be set. In this 
> case, it is taken care of by the framework simply by adding a dependency.
> From an API perspective, it would manifest itself as having a new "dependsOn" 
> method added to the PropertyDescriptor.Builder class:
> {code}
> /**
> * Indicates that this Property is relevant if and only if the parent property 
> has some (any) value set.
> **/
> Builder dependsOn(PropertyDescriptor parent);
> {code}
> {code}
> /**
>  * Indicates that this Property is relevant if and only if the parent 
> property is set to one of the values included in the 'relevantValues' 
> Collection.
> **/
> Builder dependsOn(PropertyDescriptor parent, Collection<String> 
> relevantValues);
> {code}
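> For illustration, a hedged sketch of how the CompressContent example above 
> might declare such a dependency (the MODE descriptor and value names here are 
> hypothetical):
> {code:java}
> static final PropertyDescriptor COMPRESSION_LEVEL = new PropertyDescriptor.Builder()
>     .name("Compression Level")
>     .required(true)
>     .allowableValues("1", "2", "3", "4", "5", "6", "7", "8", "9")
>     .defaultValue("1")
>     .dependsOn(MODE, Collections.singletonList("Compress"))  // shown and validated only when Mode = Compress
>     .build();
> {code}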
> In providing this capability, we will not only be able to hide properties 
> that are not valid based on the Processor's other configuration but will also 
> make the notion of "Strategy Properties" far more powerful/easy to use. This 
> is because we can now have a Property such as "My Capability Strategy" and 
> then have properties that are shown for each of the allowed strategies.
> For example, in MergeContent, the Header, Footer, Demarcator could become 
> dependent on the "Bin-Packing Algorithm" Merge Strategy. These properties can 
> then be thought of logically as properties of that strategy itself.
> This will require a few different parts of the application to be updated:
> * nifi-api - must be updated to support the new methods.
> * nifi-framework-core - must be updated to handle new validation logic for 
> components
> * nifi-web - must be updated to show/hide properties based on other 
> properties' values
> * nifi-mock - needs to handle the validation logic and ensure that developers 
> are using the API properly, throwing AssertionErrors if not
> * nifi-docs - need to update the Developer Guide to explain how this works
> * processors - many processors can be updated to take advantage of this new 
> capability



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-1121) Allow components' properties to depend on one another

2020-12-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243528#comment-17243528
 ] 

ASF subversion and git services commented on NIFI-1121:
---

Commit 04aaf2513102ae9ba2a74aaef9faa70e92ceb37d in nifi's branch 
refs/heads/main from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=04aaf25 ]

NIFI-1121: Use display name for dependent property documentation


> Allow components' properties to depend on one another
> -
>
> Key: NIFI-1121
> URL: https://issues.apache.org/jira/browse/NIFI-1121
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.11.4
>Reporter: Mark Payne
>Assignee: M Tien
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Concept: A Processor developer (or Controller Service or Reporting Task 
> developer) should be able to indicate when building a PropertyDescriptor that 
> the property is "dependent on" another Property. If Property A depends on 
> Property B, then the following should happen:
> Property A should not be shown in the Configure dialog unless a value is 
> selected for Property B. Additionally, if Property A is dependent on 
> particular values of Property B, then Property A should be shown only if 
> Property B is set to one of those values.
> For example, in Compress Content, the "Compression Level" property should be 
> dependent on the "Mode" property being set to "Compress." This means that if 
> the "Mode" property is set to Decompress, then the UI would not show the 
> Compression Level property. This will be far less confusing for users, as it 
> will allow the UI to hide properties that are irrelevant based on the 
> configuration.
> Additionally, if Property A depends on Property B and Property A is required, 
> then a valid value must be set for Property A ONLY if Property B is set to a 
> value that Property A depends on. I.e., in the example above, the Compression 
> Level property can be required, but if the Mode is not set to Compress, then 
> it doesn't matter if the Compression Level property is set to a valid value - 
> the Processor will still be valid, because Compression Level is not a 
> relevant property in this case.
> This allows developers to provide validation much more easily, as many 
> times the developer currently must implement the customValidate method to 
> ensure that if Property A is set that Property B must also be set. In this 
> case, it is taken care of by the framework simply by adding a dependency.
> From an API perspective, it would manifest itself as having a new "dependsOn" 
> method added to the PropertyDescriptor.Builder class:
> {code}
> /**
> * Indicates that this Property is relevant if and only if the parent property 
> has some (any) value set.
> **/
> Builder dependsOn(PropertyDescriptor parent);
> {code}
> {code}
> /**
>  * Indicates that this Property is relevant if and only if the parent 
> property is set to one of the values included in the 'relevantValues' 
> Collection.
> **/
> Builder dependsOn(PropertyDescriptor parent, Collection<String> 
> relevantValues);
> {code}
> In providing this capability, we will not only be able to hide properties 
> that are not valid based on the Processor's other configuration but will also 
> make the notion of "Strategy Properties" far more powerful/easy to use. This 
> is because we can now have a Property such as "My Capability Strategy" and 
> then have properties that are shown for each of the allowed strategies.
> For example, in MergeContent, the Header, Footer, Demarcator could become 
> dependent on the "Bin-Packing Algorithm" Merge Strategy. These properties can 
> then be thought of logically as properties of that strategy itself.
> This will require a few different parts of the application to be updated:
> * nifi-api - must be updated to support the new methods.
> * nifi-framework-core - must be updated to handle new validation logic for 
> components
> * nifi-web - must be updated to show/hide properties based on other 
> properties' values
> * nifi-mock - needs to handle the validation logic and ensure that developers 
> are using the API properly, throwing AssertionErrors if not
> * nifi-docs - need to update the Developer Guide to explain how this works
> * processors - many processors can be updated to take advantage of this new 
> capability



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] markap14 merged pull request #4698: NIFI-1121: Use display name for dependent property documentation

2020-12-03 Thread GitBox


markap14 merged pull request #4698:
URL: https://github.com/apache/nifi/pull/4698


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] exceptionfactory commented on pull request #4708: NIFI-8070: Added coalesce function to RecordPath

2020-12-03 Thread GitBox


exceptionfactory commented on pull request #4708:
URL: https://github.com/apache/nifi/pull/4708#issuecomment-738331777


   LGTM. Ran the unit tests and observed that all branches of the Coalesce class 
appear to be covered. Helpful documentation. +1 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (NIFI-6005) Elasticsearch processor - add exception attribute upon failure

2020-12-03 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess resolved NIFI-6005.

Resolution: Fixed

Closing as duplicate of NIFI-8036 (which has a PR)

> Elasticsearch processor - add exception attribute upon failure
> --
>
> Key: NIFI-6005
> URL: https://issues.apache.org/jira/browse/NIFI-6005
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
> Environment: Nifi Elasticsearch processor
>Reporter: ramok
>Priority: Major
>  Labels: elasticsearch, exception-handling, exceptions, 
> putelasticsearch
>
> Right now, upon failures, there are no exceptions extracted as attributes from 
> the PutElasticsearch processors.
> In order to do error mitigation upon indexing failures to Elasticsearch,
> we manually edited the Elasticsearch processor to extract a NiFi attribute 
> containing the exception of the error that occurred upon indexing.
> After that, we were able to match the exception to a specific record.
>  
> Please modify the PutElasticsearch processor so that error mitigation is 
> available.
> Thank you!
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6005) Elasticsearch processor - add exception attribute upon failure

2020-12-03 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-6005:
---
Component/s: (was: Core Framework)
 Extensions

> Elasticsearch processor - add exception attribute upon failure
> --
>
> Key: NIFI-6005
> URL: https://issues.apache.org/jira/browse/NIFI-6005
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
> Environment: Nifi Elasticsearch processor
>Reporter: ramok
>Priority: Major
>  Labels: elasticsearch, exception-handling, exceptions, 
> putelasticsearch
>
> Right now, upon failures, there are no exceptions extracted as attributes from 
> the PutElasticsearch processors.
> In order to do error mitigation upon indexing failures to Elasticsearch,
> we manually edited the Elasticsearch processor to extract a NiFi attribute 
> containing the exception of the error that occurred upon indexing.
> After that, we were able to match the exception to a specific record.
>  
> Please modify the PutElasticsearch processor so that error mitigation is 
> available.
> Thank you!
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6005) Elasticsearch processor - add exception attribute upon failure

2020-12-03 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-6005:
---
Affects Version/s: (was: 1.8.0)
   (was: 1.7.0)
   (was: 1.6.0)
   (was: 1.5.0)

> Elasticsearch processor - add exception attribute upon failure
> --
>
> Key: NIFI-6005
> URL: https://issues.apache.org/jira/browse/NIFI-6005
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
> Environment: Nifi Elasticsearch processor
>Reporter: ramok
>Priority: Major
>  Labels: elasticsearch, exception-handling, exceptions, 
> putelasticsearch
>
> Right now, upon failures, there are no exceptions extracted as attributes from 
> the PutElasticsearch processors.
> In order to do error mitigation upon indexing failures to Elasticsearch,
> we manually edited the Elasticsearch processor to extract a NiFi attribute 
> containing the exception of the error that occurred upon indexing.
> After that, we were able to match the exception to a specific record.
>  
> Please modify the PutElasticsearch processor so that error mitigation is 
> available.
> Thank you!
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-5947) Elasticsearch lookup service that can work with LookupAttribute

2020-12-03 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5947:
---
Fix Version/s: 1.9.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Elasticsearch lookup service that can work with LookupAttribute
> ---
>
> Key: NIFI-5947
> URL: https://issues.apache.org/jira/browse/NIFI-5947
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Alex Savitsky
>Priority: Major
> Fix For: 1.9.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Create an Elasticsearch-backed lookup service that can be used in a 
> LookupAttribute processor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-5947) Elasticsearch lookup service that can work with LookupAttribute

2020-12-03 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5947:
---
Affects Version/s: (was: 1.8.0)
   Status: Patch Available  (was: Open)

> Elasticsearch lookup service that can work with LookupAttribute
> ---
>
> Key: NIFI-5947
> URL: https://issues.apache.org/jira/browse/NIFI-5947
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Alex Savitsky
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Create an Elasticsearch-backed lookup service that can be used in a 
> LookupAttribute processor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8070) Add a coalesce function to RecordPath

2020-12-03 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-8070:
-
Fix Version/s: 1.13.0
   Status: Patch Available  (was: Open)

> Add a coalesce function to RecordPath
> -
>
> Key: NIFI-8070
> URL: https://issues.apache.org/jira/browse/NIFI-8070
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.13.0
>
>
> There are times when it is necessary to extract one of a few different fields 
> from a Record, whichever is not null. We should add a coalesce function, 
> similar to the analog in SQL, that will return the first non-null value in a 
> sequence of arguments. For example, given the JSON:
> {code:java}
> {
>   "id": "1234",
>   "name": null
> }{code}
> The path `coalesce(/id, /name)` should return the `id` field. But given the 
> JSON:
> {code:java}
> {
>   "id": null,
>   "name": "John Doe"
> }{code}
> The same path should return the `name` field.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] markap14 opened a new pull request #4708: NIFI-8070: Added coalesce function to RecordPath

2020-12-03 Thread GitBox


markap14 opened a new pull request #4708:
URL: https://github.com/apache/nifi/pull/4708


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8058) Changing a property after deleting a dynamic property causes the dynamic property to return to the UI

2020-12-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243506#comment-17243506
 ] 

ASF subversion and git services commented on NIFI-8058:
---

Commit 8055c47a84197eeaac41d3dfb2e2fb16e44806dd in nifi's branch 
refs/heads/main from mtien
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=8055c47 ]

NIFI-8058 Fixed a UI error to correctly delete dynamic properties while 
configuring processors.
Changed to check the length of all unfiltered properties instead of only 
filtered properties.
Added additional check if descriptor is a dynamic property.

Signed-off-by: Matthew Burgess 

This closes #4707


> Changing a property after deleting a dynamic property causes the dynamic 
> property to return to the UI
> -
>
> Key: NIFI-8058
> URL: https://issues.apache.org/jira/browse/NIFI-8058
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Matt Burgess
>Assignee: M Tien
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a dynamic property is deleted from a component configuration dialog, if 
> any other property is modified, it causes the deleted dynamic property to 
> reappear in the UI.
> To reproduce: open a processor config dialog (GenerateFlowFile, e.g.), add a 
> couple dynamic properties and hit Apply. Then open the dialog again, delete 
> one of the dynamic properties, then change another property. This causes the 
> deleted property to show up again. The workaround is to delete the dynamic 
> property, click Apply, then reopen the dialog to change the other properties.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8058) Changing a property after deleting a dynamic property causes the dynamic property to return to the UI

2020-12-03 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-8058:
---
Fix Version/s: 1.13.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Changing a property after deleting a dynamic property causes the dynamic 
> property to return to the UI
> -
>
> Key: NIFI-8058
> URL: https://issues.apache.org/jira/browse/NIFI-8058
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Matt Burgess
>Assignee: M Tien
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a dynamic property is deleted from a component configuration dialog, if 
> any other property is modified, it causes the deleted dynamic property to 
> reappear in the UI.
> To reproduce: open a processor config dialog (GenerateFlowFile, e.g.), add a 
> couple dynamic properties and hit Apply. Then open the dialog again, delete 
> one of the dynamic properties, then change another property. This causes the 
> deleted property to show up again. The workaround is to delete the dynamic 
> property, click Apply, then reopen the dialog to change the other properties.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mattyb149 closed pull request #4707: NIFI-8058 Fixed a UI error to correctly delete dynamic properties whi…

2020-12-03 Thread GitBox


mattyb149 closed pull request #4707:
URL: https://github.com/apache/nifi/pull/4707


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on pull request #4707: NIFI-8058 Fixed a UI error to correctly delete dynamic properties whi…

2020-12-03 Thread GitBox


mattyb149 commented on pull request #4707:
URL: https://github.com/apache/nifi/pull/4707#issuecomment-738297648


   +1 LGTM, tested on a live NiFi instance, verified the expected behavior. 
Thanks for the fix! Merging to main



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-8070) Add a coalesce function to RecordPath

2020-12-03 Thread Mark Payne (Jira)
Mark Payne created NIFI-8070:


 Summary: Add a coalesce function to RecordPath
 Key: NIFI-8070
 URL: https://issues.apache.org/jira/browse/NIFI-8070
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Mark Payne
Assignee: Mark Payne


There are times when it is necessary to extract one of a few different fields 
from a Record, whichever is not null. We should add a coalesce function, 
similar to the analog in SQL, that will return the first non-null value in a 
sequence of arguments. For example, given the JSON:
{code:java}
{
  "id": "1234",
  "name": null
}{code}
The path `coalesce(/id, /name)` should return the `id` field. But given the 
JSON:
{code:java}
{
  "id": null,
  "name": "John Doe"
}{code}
The same path should return the `name` field.
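
A hedged usage sketch against the Java RecordPath API (assuming the 
nifi-record-path module's RecordPath, RecordPathResult, and FieldValue types):
{code:java}
// Compile the path once, then evaluate it against a deserialized Record.
RecordPath path = RecordPath.compile("coalesce(/id, /name)");
RecordPathResult result = path.evaluate(record);
Optional<FieldValue> firstNonNull = result.getSelectedFields().findFirst();
firstNonNull.ifPresent(fv -> System.out.println(fv.getValue()));  // "1234" for the first JSON above
{code}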



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-8069) Keep processors invalid while referenced Controller Services are Enabling

2020-12-03 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann reassigned NIFI-8069:
--

Assignee: David Handermann

> Keep processors invalid while referenced Controller Services are Enabling
> -
>
> Key: NIFI-8069
> URL: https://issues.apache.org/jira/browse/NIFI-8069
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: David Handermann
>Priority: Major
>
> Historically, we have allowed processors to be marked as valid while any 
> referenced Controller Services are enabling. This was to avoid a race 
> condition where the processor was being validated while its controller 
> services were still enabling at startup, which would cause the processor to 
> be stopped even after the CS enabled. However this means a processor can be 
> started and will immediately fail at runtime (not at validation time) if the 
> CS has not finished enabling.
> Since then there have been improvements to the startup sequence. Now, if you 
> start an invalid processor, the processor still knows that it's meant to be 
> running and as soon as it becomes valid, it will start. This Jira proposes to 
> mark a processor invalid if the CS is enabling.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8069) Keep processors invalid while referenced Controller Services are Enabling

2020-12-03 Thread Matt Burgess (Jira)
Matt Burgess created NIFI-8069:
--

 Summary: Keep processors invalid while referenced Controller 
Services are Enabling
 Key: NIFI-8069
 URL: https://issues.apache.org/jira/browse/NIFI-8069
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Matt Burgess


Historically, we have allowed processors to be marked as valid while any 
referenced Controller Services are enabling. This was to avoid a race condition 
where the processor was being validated while its controller services were 
still enabling at startup, which would cause the processor to be stopped even 
after the CS enabled. However this means a processor can be started and will 
immediately fail at runtime (not at validation time) if the CS has not finished 
enabling.

Since then there have been improvements to the startup sequence. Now, if you 
start an invalid processor, the processor still knows that it's meant to be 
running and as soon as it becomes valid, it will start. This Jira proposes to 
mark a processor invalid if the CS is enabling.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8058) Changing a property after deleting a dynamic property causes the dynamic property to return to the UI

2020-12-03 Thread M Tien (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

M Tien updated NIFI-8058:
-
Status: Patch Available  (was: In Progress)

> Changing a property after deleting a dynamic property causes the dynamic 
> property to return to the UI
> -
>
> Key: NIFI-8058
> URL: https://issues.apache.org/jira/browse/NIFI-8058
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Matt Burgess
>Assignee: M Tien
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a dynamic property is deleted from a component configuration dialog, if 
> any other property is modified, it causes the deleted dynamic property to 
> reappear in the UI.
> To reproduce: open a processor config dialog (GenerateFlowFile, e.g.), add a 
> couple dynamic properties and hit Apply. Then open the dialog again, delete 
> one of the dynamic properties, then change another property. This causes the 
> deleted property to show up again. The workaround is to delete the dynamic 
> property, click Apply, then reopen the dialog to change the other properties.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mtien-apache opened a new pull request #4707: NIFI-8058 Fixed a UI error to correctly delete dynamic properties whi…

2020-12-03 Thread GitBox


mtien-apache opened a new pull request #4707:
URL: https://github.com/apache/nifi/pull/4707


   …le configuring processors.
   
   Changed to check the length of all unfiltered properties instead of only 
filtered properties.
   Added an additional check for whether the descriptor is a dynamic property.
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Fixed a bug introduced by the recent dependent-properties work (NIFI-1121). 
After deleting a dynamic property while configuring processor properties, the 
deleted dynamic property would reappear in the UI. Added an additional check 
for dynamic properties that are marked for deletion. Also fixed an error that 
prevented adding dynamic properties, by checking all unfiltered properties 
instead of only the filtered ones._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [x] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8064) Convert TestSecureClientZooKeeperFactory to integration test

2020-12-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243474#comment-17243474
 ] 

ASF subversion and git services commented on NIFI-8064:
---

Commit 312fa8e85e46e6d90973ff9a549e086084251afe in nifi's branch 
refs/heads/main from Bryan Bende
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=312fa8e ]

NIFI-8064 Convert TestSecureClientZooKeeperFactory to integration test


> Convert TestSecureClientZooKeeperFactory to integration test
> 
>
> Key: NIFI-8064
> URL: https://issues.apache.org/jira/browse/NIFI-8064
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This test starts an embedded ZK, so it should be considered more of an 
> integration test. It sometimes fails through GH Actions due to timeouts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] markap14 merged pull request #4703: NIFI-8064 Convert TestSecureClientZooKeeperFactory to integration test

2020-12-03 Thread GitBox


markap14 merged pull request #4703:
URL: https://github.com/apache/nifi/pull/4703


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on pull request #4703: NIFI-8064 Convert TestSecureClientZooKeeperFactory to integration test

2020-12-03 Thread GitBox


markap14 commented on pull request #4703:
URL: https://github.com/apache/nifi/pull/4703#issuecomment-738258085


   Thanks @bbende  +1 merged to main



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8060) Remove dependency on volatile provenance repo from stateless NAR

2020-12-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243469#comment-17243469
 ] 

ASF subversion and git services commented on NIFI-8060:
---

Commit 2b1359a8080ac8b05058beffd272cf3f41ba9e1e in nifi's branch 
refs/heads/main from Bryan Bende
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=2b1359a ]

NIFI-8060 Addressed review feedback


> Remove dependency on volatile provenance repo from stateless NAR
> 
>
> Key: NIFI-8060
> URL: https://issues.apache.org/jira/browse/NIFI-8060
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Instead of sharing the volatile prov repo between nifi and stateless, we 
> should add a minimal volatile impl to stateless.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8060) Remove dependency on volatile provenance repo from stateless NAR

2020-12-03 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-8060:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove dependency on volatile provenance repo from stateless NAR
> 
>
> Key: NIFI-8060
> URL: https://issues.apache.org/jira/browse/NIFI-8060
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Instead of sharing the volatile prov repo between nifi and stateless, we 
> should add a minimal volatile impl to stateless.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8060) Remove dependency on volatile provenance repo from stateless NAR

2020-12-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243468#comment-17243468
 ] 

ASF subversion and git services commented on NIFI-8060:
---

Commit 8ac8a2bd1fbf75b8458554988eb1b3b1851b25d9 in nifi's branch 
refs/heads/main from Bryan Bende
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=8ac8a2b ]

NIFI-8060 Added minimal VolatileProvenanceRepository to nifi-stateless-engine 
and remove dependency on nifi-volatile-provenance-repo module


> Remove dependency on volatile provenance repo from stateless NAR
> 
>
> Key: NIFI-8060
> URL: https://issues.apache.org/jira/browse/NIFI-8060
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Instead of sharing the volatile prov repo between nifi and stateless, we 
> should add a minimal volatile impl to stateless.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] markap14 merged pull request #4700: NIFI-8060 Added minimal VolatileProvenanceRepository to nifi-stateles…

2020-12-03 Thread GitBox


markap14 merged pull request #4700:
URL: https://github.com/apache/nifi/pull/4700


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on pull request #4700: NIFI-8060 Added minimal VolatileProvenanceRepository to nifi-stateles…

2020-12-03 Thread GitBox


markap14 commented on pull request #4700:
URL: https://github.com/apache/nifi/pull/4700#issuecomment-738252857


   @bbende thanks for updating that. This is definitely a better approach than 
the route that I took, I believe. +1 merged to main.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8068) JsonReader fails when reading a Record if the schema declares data type as a union of String or Array

2020-12-03 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-8068:
-
Status: Patch Available  (was: Open)

> JsonReader fails when reading a Record if the schema declares data type as a 
> union of String or Array
> -
>
> Key: NIFI-8068
> URL: https://issues.apache.org/jira/browse/NIFI-8068
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> If I attempt to read the following JSON with an inferred schema:
> {code:java}
> {
>  "fields": [
>  {"type": "string"},
>  {"type": [{"type": "string"}]}
>  ]
> }{code}
> The Record Reader fails with the following stack trace:
> {code:java}
> 2020-12-03 12:07:34,310 ERROR [Timer-Driven Process Thread-6] 
> o.a.n.processors.standard.ConvertRecord 
> ConvertRecord[id=2991bff1-0176-1000-7771-3866d63fc7d3] Failed to process 
> StandardFlowFileRecord[uuid=86483188-4d5e-40a9-bf2a-f570e6656f0f,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1607015254266-1, container=default, 
> section=1], offset=0, 
> length=83],offset=0,name=86483188-4d5e-40a9-bf2a-f570e6656f0f,size=83]; will 
> route to failure: org.apache.nifi.processor.exception.ProcessException: Could 
> not parse incoming data
> org.apache.nifi.processor.exception.ProcessException: Could not parse 
> incoming data
> at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:171)
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2986)
> at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor.onTrigger(AbstractRecordProcessor.java:122)
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)
> at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
> at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.nifi.serialization.MalformedRecordException: 
> Successfully parsed a JSON object from input but failed to convert into a 
> Record object with the given schema
> at 
> org.apache.nifi.json.AbstractJsonRowRecordReader.nextRecord(AbstractJsonRowRecordReader.java:124)
> at 
> org.apache.nifi.serialization.RecordReader.nextRecord(RecordReader.java:50)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:254)
> at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.access$100(StandardControllerServiceInvocationHandler.java:38)
> at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler$ProxiedReturnObjectInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:240)
> at com.sun.proxy.$Proxy138.nextRecord(Unknown Source)
> at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:131)
> ... 14 common frames omitted
> Caused by: java.lang.ClassCastException: 
> org.apache.nifi.serialization.record.MapRecord cannot be cast to 
> java.lang.Byte
> at 
> org.apache.nifi.serialization.record.util.DataTypeUtils.toString(DataTypeUtils.java:925)
> at 
> org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:203)
> at 
> 
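
For context, a minimal sketch of the union (CHOICE) type-selection issue described above; ChoiceDataType, RecordFieldType, and DataTypeUtils.chooseDataType are existing nifi-record classes and methods, but the fallback logic shown is illustrative only, not the committed fix:

{code:java}
// Illustrative sketch of choosing the best type from a CHOICE (union) --
// not the actual fix in PR #4706.
import org.apache.nifi.serialization.record.DataType;
import org.apache.nifi.serialization.record.RecordFieldType;
import org.apache.nifi.serialization.record.type.ChoiceDataType;
import org.apache.nifi.serialization.record.util.DataTypeUtils;

public class UnionTypeSelectionSketch {

    // Hypothetical helper: prefer a compatible ARRAY type for array-like
    // values before falling back to the generic choice, so an array of
    // Records is not coerced (and then cast) as if it were a String.
    public static DataType selectType(final Object value, final ChoiceDataType choice) {
        if (value instanceof Object[]) {
            for (final DataType candidate : choice.getPossibleSubTypes()) {
                if (candidate.getFieldType() == RecordFieldType.ARRAY) {
                    return candidate;
                }
            }
        }
        return DataTypeUtils.chooseDataType(value, choice);
    }
}
{code}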

[GitHub] [nifi] markap14 opened a new pull request #4706: NIFI-8068: Ensure that when we determine the best of multiple possibl…

2020-12-03 Thread GitBox


markap14 opened a new pull request #4706:
URL: https://github.com/apache/nifi/pull/4706


   …e types in a UNION that we handle Arrays of Records properly. Also 
refactored code to be a bit cleaner by extracting blocks of it into 
appropriately named methods
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8068) JsonReader fails when reading a Record if the schema declares data type as a union of String or Array

2020-12-03 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-8068:
-
Summary: JsonReader fails when reading a Record if the schema declares data 
type as a union of String or Array  (was: JsonReader fails when reading 
a Record when the schema declares data type as a union of String or 
Array)

> JsonReader fails when reading a Record if the schema declares data type as a 
> union of String or Array
> -
>
> Key: NIFI-8068
> URL: https://issues.apache.org/jira/browse/NIFI-8068
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> If I attempt to read the following JSON with an inferred schema:
> {code:java}
> {
>  "fields": [
>  {"type": "string"},
>  {"type": [{"type": "string"}]}
>  ]
> }{code}
> The Record Reader fails with the following stack trace:
> {code:java}
> 2020-12-03 12:07:34,310 ERROR [Timer-Driven Process Thread-6] 
> o.a.n.processors.standard.ConvertRecord 
> ConvertRecord[id=2991bff1-0176-1000-7771-3866d63fc7d3] Failed to process 
> StandardFlowFileRecord[uuid=86483188-4d5e-40a9-bf2a-f570e6656f0f,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1607015254266-1, container=default, 
> section=1], offset=0, 
> length=83],offset=0,name=86483188-4d5e-40a9-bf2a-f570e6656f0f,size=83]; will 
> route to failure: org.apache.nifi.processor.exception.ProcessException: Could 
> not parse incoming data
> org.apache.nifi.processor.exception.ProcessException: Could not parse 
> incoming data
> at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:171)
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2986)
> at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor.onTrigger(AbstractRecordProcessor.java:122)
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)
> at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
> at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.nifi.serialization.MalformedRecordException: 
> Successfully parsed a JSON object from input but failed to convert into a 
> Record object with the given schema
> at 
> org.apache.nifi.json.AbstractJsonRowRecordReader.nextRecord(AbstractJsonRowRecordReader.java:124)
> at 
> org.apache.nifi.serialization.RecordReader.nextRecord(RecordReader.java:50)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:254)
> at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.access$100(StandardControllerServiceInvocationHandler.java:38)
> at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler$ProxiedReturnObjectInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:240)
> at com.sun.proxy.$Proxy138.nextRecord(Unknown Source)
> at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:131)
> ... 14 common frames omitted
> Caused by: java.lang.ClassCastException: 
> org.apache.nifi.serialization.record.MapRecord cannot be cast to 
> java.lang.Byte
> at 
> 

[jira] [Created] (NIFI-8068) JsonReader fails when reading a Record when the schema declares data type as a union of String or Array

2020-12-03 Thread Mark Payne (Jira)
Mark Payne created NIFI-8068:


 Summary: JsonReader fails when reading a Record when the schema 
declares data type as a union of String or Array
 Key: NIFI-8068
 URL: https://issues.apache.org/jira/browse/NIFI-8068
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Mark Payne
Assignee: Mark Payne


If I attempt to read the following JSON with an inferred schema:
{code:java}
{
 "fields": [
 {"type": "string"},
 {"type": [{"type": "string"}]}
 ]
}{code}
The Record Reader fails with the following stack trace:
{code:java}
2020-12-03 12:07:34,310 ERROR [Timer-Driven Process Thread-6] 
o.a.n.processors.standard.ConvertRecord 
ConvertRecord[id=2991bff1-0176-1000-7771-3866d63fc7d3] Failed to process 
StandardFlowFileRecord[uuid=86483188-4d5e-40a9-bf2a-f570e6656f0f,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1607015254266-1, container=default, 
section=1], offset=0, 
length=83],offset=0,name=86483188-4d5e-40a9-bf2a-f570e6656f0f,size=83]; will 
route to failure: org.apache.nifi.processor.exception.ProcessException: Could 
not parse incoming data
org.apache.nifi.processor.exception.ProcessException: Could not parse incoming 
data
at 
org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:171)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2986)
at 
org.apache.nifi.processors.standard.AbstractRecordProcessor.onTrigger(AbstractRecordProcessor.java:122)
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.nifi.serialization.MalformedRecordException: Successfully 
parsed a JSON object from input but failed to convert into a Record object with 
the given schema
at 
org.apache.nifi.json.AbstractJsonRowRecordReader.nextRecord(AbstractJsonRowRecordReader.java:124)
at 
org.apache.nifi.serialization.RecordReader.nextRecord(RecordReader.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:254)
at 
org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.access$100(StandardControllerServiceInvocationHandler.java:38)
at 
org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler$ProxiedReturnObjectInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:240)
at com.sun.proxy.$Proxy138.nextRecord(Unknown Source)
at 
org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:131)
... 14 common frames omitted
Caused by: java.lang.ClassCastException: 
org.apache.nifi.serialization.record.MapRecord cannot be cast to java.lang.Byte
at 
org.apache.nifi.serialization.record.util.DataTypeUtils.toString(DataTypeUtils.java:925)
at 
org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:203)
at 
org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:148)
at 
org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:224)
at 
org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:148)
at 
org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:144)
at 

[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #947: MINIFICPP-1401 Read certificates from the Windows system store

2020-12-03 Thread GitBox


fgerlits commented on a change in pull request #947:
URL: https://github.com/apache/nifi-minifi-cpp/pull/947#discussion_r535467558



##
File path: libminifi/include/utils/tls/CertificateUtils.h
##
@@ -0,0 +1,61 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+#ifdef OPENSSL_SUPPORT
+
+#include <memory>
+
+#ifdef WIN32
+#include <windows.h>
+#include <wincrypt.h>
+#endif  // WIN32
+
+#include <openssl/x509.h>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace tls {
+
+struct EVP_PKEY_deleter {
+  void operator()(EVP_PKEY* pkey) const { EVP_PKEY_free(pkey); }
+};
+using EVP_PKEY_unique_ptr = std::unique_ptr<EVP_PKEY, EVP_PKEY_deleter>;
+
+struct X509_deleter {
+  void operator()(X509* cert) const { X509_free(cert); }
+};
+using X509_unique_ptr = std::unique_ptr<X509, X509_deleter>;
+
+#ifdef WIN32
+// Returns nullptr on errors
+X509_unique_ptr convertWindowsCertificate(const PCCERT_CONTEXT certificate);
+
+// Returns nullptr if the certificate has no associated private key, or the 
private key could not be extracted
+EVP_PKEY_unique_ptr extractPrivateKey(const PCCERT_CONTEXT certificate);

Review comment:
   Thanks, fixed in 
https://github.com/apache/nifi-minifi-cpp/pull/947/commits/e63e9e6c64be860026b6756fd4ffce85ee3d7d8f.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #947: MINIFICPP-1401 Read certificates from the Windows system store

2020-12-03 Thread GitBox


fgerlits commented on a change in pull request #947:
URL: https://github.com/apache/nifi-minifi-cpp/pull/947#discussion_r535466913



##
File path: libminifi/src/controllers/SSLContextService.cpp
##
@@ -269,10 +505,8 @@ void SSLContextService::onEnable() {
 }
 passphrase_file.close();
   }
-  // load CA certificates
-  if (!getProperty(caCert.getName(), ca_certificate_)) {
-logger_->log_error("Can not load CA certificate.");

Review comment:
   It isn't an error any longer, because now there is a valid use case when 
the server certificate name is blank and 
`nifi.security.use.system.cert.store=true`.  In that case, we will try to find 
the server certificate in the system store.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8057) Remove truststore check from SslContextFactory.createSslContext()

2020-12-03 Thread Peter Turcsanyi (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243400#comment-17243400
 ] 

Peter Turcsanyi commented on NIFI-8057:
---

[~exceptionfactory] It is absolutely fine with me, and thanks for the proposed 
fix for the GRPC processors. I created a ticket for it: NIFI-8067.
I believe this jira can be closed now, and the generic solution for the other 
processors will be implemented in a separate one.

> Remove truststore check from SslContextFactory.createSslContext()
> -
>
> Key: NIFI-8057
> URL: https://issues.apache.org/jira/browse/NIFI-8057
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.12.1
>Reporter: Peter Turcsanyi
>Priority: Major
>
> NIFI-7407 introduced a check in {{SslContextFactory.createSslContext()}}: if 
> KS is configured, then TS must be configured too 
> ([https://github.com/apache/nifi/blob/857eeca3c7d4b275fd698430594e7fae4864feff/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/SslContextFactory.java#L79])
> This constraint is too strict for server-style processors (like ListenGRPC) 
> where only a KS is needed for 1-way SSL (and the presence of TS turns on 
> 2-way SSL).
> The check should be removed or relaxed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8067) Fix 1-way SSL in GRPC processors

2020-12-03 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi updated NIFI-8067:
--
Component/s: Extensions

> Fix 1-way SSL in GRPC processors
> 
>
> Key: NIFI-8067
> URL: https://issues.apache.org/jira/browse/NIFI-8067
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.12.1
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
>
> 1-way SSL is broken due to a change in NIFI-7407: 
> SslContextFactory.createSslContext() checks that the truststore must be given 
> when the keystore is given, but the presence of the truststore would turn on 
> 2-way SSL. For this reason 1-way SSL cannot be configured currently.
> The previous behavior can be restored by refactoring to remove the call to 
> createSslContext() and removing the unnecessary references to 
> SSLContext.getProvider().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8067) Fix 1-way SSL in GRPC processors

2020-12-03 Thread Peter Turcsanyi (Jira)
Peter Turcsanyi created NIFI-8067:
-

 Summary: Fix 1-way SSL in GRPC processors
 Key: NIFI-8067
 URL: https://issues.apache.org/jira/browse/NIFI-8067
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.12.1
Reporter: Peter Turcsanyi
Assignee: Peter Turcsanyi


1-way SSL is broken due to a change in NIFI-7407: 
SslContextFactory.createSslContext() checks that the truststore must be given 
when the keystore is given, but the presence of the truststore would turn on 
2-way SSL. For this reason 1-way SSL cannot be configured currently.

The previous behavior can be restored by refactoring to remove the call to 
createSslContext() and removing the unnecessary references to 
SSLContext.getProvider().
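
A minimal sketch of the 1-way vs. 2-way distinction using Netty's SslContextBuilder (which the GRPC server builds on); the key/trust manager factories are assumed to be already initialized, and this is not the actual ListenGRPC code:

{code:java}
// Illustrative sketch of 1-way vs 2-way TLS with Netty's SslContextBuilder --
// not the actual ListenGRPC implementation.
import io.netty.handler.ssl.ClientAuth;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLException;
import javax.net.ssl.TrustManagerFactory;

public class GrpcTlsSketch {

    // 1-way TLS: only the server's key material (keystore) is required.
    public static SslContext oneWay(final KeyManagerFactory kmf) throws SSLException {
        return SslContextBuilder.forServer(kmf).build();
    }

    // 2-way TLS: adding trust material and requiring client certificates.
    public static SslContext twoWay(final KeyManagerFactory kmf,
                                    final TrustManagerFactory tmf) throws SSLException {
        return SslContextBuilder.forServer(kmf)
                .trustManager(tmf)
                .clientAuth(ClientAuth.REQUIRE)
                .build();
    }
}
{code}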



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8066) Bump GRPC dependency versions

2020-12-03 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi updated NIFI-8066:
--
Component/s: Extensions

> Bump GRPC dependency versions
> -
>
> Key: NIFI-8066
> URL: https://issues.apache.org/jira/browse/NIFI-8066
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.12.1
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
>
> Update dependencies in nifi-grpc module.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8066) Bump GRPC dependency versions

2020-12-03 Thread Peter Turcsanyi (Jira)
Peter Turcsanyi created NIFI-8066:
-

 Summary: Bump GRPC dependency versions
 Key: NIFI-8066
 URL: https://issues.apache.org/jira/browse/NIFI-8066
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 1.12.1
Reporter: Peter Turcsanyi
Assignee: Peter Turcsanyi


Update dependencies in nifi-grpc module.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8057) Remove truststore check from SslContextFactory.createSslContext()

2020-12-03 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243349#comment-17243349
 ] 

David Handermann commented on NIFI-8057:


[~turcsanyip] What do you think about making the proposed changes to ListenGRPC 
and addressing other processors separately as described?

> Remove truststore check from SslContextFactory.createSslContext()
> -
>
> Key: NIFI-8057
> URL: https://issues.apache.org/jira/browse/NIFI-8057
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.12.1
>Reporter: Peter Turcsanyi
>Priority: Major
>
> NIFI-7407 introduced a check in {{SslContextFactory.createSslContext()}}: if 
> KS is configured, then TS must be configured too 
> ([https://github.com/apache/nifi/blob/857eeca3c7d4b275fd698430594e7fae4864feff/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/SslContextFactory.java#L79])
> This constraint is too strict for server-style processors (like ListenGRPC) 
> where only a KS is needed for 1-way SSL (and the presence of TS turns on 
> 2-way SSL).
> The check should be removed or relaxed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (NIFI-7906) Add graph processor with flexibility to query graph database conditioned on flowfile content and attributes

2020-12-03 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reopened NIFI-7906:


Reopening to add https://github.com/apache/nifi/pull/4704

> Add graph processor with flexibility to query graph database conditioned on 
> flowfile content and attributes
> ---
>
> Key: NIFI-7906
> URL: https://issues.apache.org/jira/browse/NIFI-7906
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Levi Lentz
>Assignee: Levi Lentz
>Priority: Minor
>  Labels: graph
> Fix For: 1.13.0
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> The current graph bundle does not allow you to query the graph 
> database (as defined in the GraphClientService) with attributes or content 
> available in the flow file.
>  
> This functionality would allow users to perform dynamic queries/mutations of 
> the underlying graph data. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6999) Encrypt Config Toolkit fails on very large flow.xml.gz files

2020-12-03 Thread Nathan Gough (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243324#comment-17243324
 ] 

Nathan Gough commented on NIFI-6999:


Working on a change to use streams for the flow.xml.gz file.
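
A minimal sketch of what stream-based handling could look like, assuming the encrypted XML is produced as a stream rather than one large String; the class and method names are illustrative, not the toolkit's actual code:

{code:java}
// Minimal sketch of stream-based writing of flow.xml.gz -- illustrative only,
// not the actual toolkit change.
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPOutputStream;

public class FlowXmlStreamingSketch {

    // Copies the (already encrypted) XML from a stream into a gzip file using
    // a fixed-size buffer, so heap use stays constant regardless of flow size.
    public static void writeFlowXmlGz(final InputStream encryptedXml, final Path target)
            throws IOException {
        try (InputStream in = new BufferedInputStream(encryptedXml);
             GZIPOutputStream out = new GZIPOutputStream(Files.newOutputStream(target))) {
            final byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}
{code}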

> Encrypt Config Toolkit fails on very large flow.xml.gz files
> 
>
> Key: NIFI-6999
> URL: https://issues.apache.org/jira/browse/NIFI-6999
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.2.0, 1.10.0
>Reporter: Andy LoPresto
>Assignee: Nathan Gough
>Priority: Critical
>  Labels: documentation, encryption, heap, security, streaming, 
> toolkit
>
> A user reported failure when using the encrypt config toolkit to process 
> (encrypt) a large {{flow.xml.gz}}. The compressed file was 49 MB, but was 687 
> MB uncompressed. It contained 545 encrypted values, and approximately 90 
> templates. This caused the toolkit to fail during {{loadFlowXml()}} unless 
> the toolkit invocation set the heap to 8 GB via {{-Xms2g -Xmx8g}}. Even with 
> the expanded heap, the serialization of the newly-encrypted flow XML to the 
> file system fails with the following exception:
> {code}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> at java.lang.StringCoding.encode(StringCoding.java:350)
> at java.lang.String.getBytes(String.java:941)
> at org.apache.commons.io.IOUtils.write(IOUtils.java:1857)
> at org.apache.commons.io.IOUtils$write$0.call(Unknown Source)
> at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:141)
> at 
> org.apache.nifi.properties.ConfigEncryptionTool$_writeFlowXmlToFile_closure5$_closure20.doCall(ConfigEncryptionTool.groovy:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
> at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
> at 
> org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294)
> at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019)
> at groovy.lang.Closure.call(Closure.java:426)
> at groovy.lang.Closure.call(Closure.java:442)
> at 
> org.codehaus.groovy.runtime.IOGroovyMethods.withCloseable(IOGroovyMethods.java:1622)
> at 
> org.codehaus.groovy.runtime.NioGroovyMethods.withCloseable(NioGroovyMethods.java:1754)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.codehaus.groovy.runtime.metaclass.ReflectionMetaMethod.invoke(ReflectionMetaMethod.java:54)
> at 
> org.codehaus.groovy.runtime.metaclass.NewInstanceMetaMethod.invoke(NewInstanceMetaMethod.java:56)
> at 
> org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:274)
> at 
> org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56)
> at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
> at 
> org.apache.nifi.properties.ConfigEncryptionTool$_writeFlowXmlToFile_closure5.doCall(ConfigEncryptionTool.groovy:691)
> {code}
> The immediate fix was to remove the duplicated template definitions in the 
> flow definition, returning the file to a reasonable size. However, if run as 
> an inline replacement, this can cause the {{flow.xml.gz}} to be overwritten 
> with an empty file, potentially leading to data loss. The following steps 
> should be taken:
> # Guard against loading/operating on/serializing large files (log statements, 
> simple conditional checks)
> # Handle large files internally (change from direct {{String}} access to 
> {{BufferedInputStream}}, etc.)
> # Document the internal memory usage of the toolkit in the toolkit guide
> # Document best practices and steps to resolve issue in the toolkit guide



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-6999) Encrypt Config Toolkit fails on very large flow.xml.gz files

2020-12-03 Thread Nathan Gough (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Gough reassigned NIFI-6999:
--

Assignee: Nathan Gough  (was: Andy LoPresto)

> Encrypt Config Toolkit fails on very large flow.xml.gz files
> 
>
> Key: NIFI-6999
> URL: https://issues.apache.org/jira/browse/NIFI-6999
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.2.0, 1.10.0
>Reporter: Andy LoPresto
>Assignee: Nathan Gough
>Priority: Critical
>  Labels: documentation, encryption, heap, security, streaming, 
> toolkit
>
> A user reported failure when using the encrypt config toolkit to process 
> (encrypt) a large {{flow.xml.gz}}. The compressed file was 49 MB, but was 687 
> MB uncompressed. It contained 545 encrypted values, and approximately 90 
> templates. This caused the toolkit to fail during {{loadFlowXml()}} unless 
> the toolkit invocation set the heap to 8 GB via {{-Xms2g -Xmx8g}}. Even with 
> the expanded heap, the serialization of the newly-encrypted flow XML to the 
> file system fails with the following exception:
> {code}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> at java.lang.StringCoding.encode(StringCoding.java:350)
> at java.lang.String.getBytes(String.java:941)
> at org.apache.commons.io.IOUtils.write(IOUtils.java:1857)
> at org.apache.commons.io.IOUtils$write$0.call(Unknown Source)
> at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:141)
> at 
> org.apache.nifi.properties.ConfigEncryptionTool$_writeFlowXmlToFile_closure5$_closure20.doCall(ConfigEncryptionTool.groovy:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
> at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
> at 
> org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294)
> at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019)
> at groovy.lang.Closure.call(Closure.java:426)
> at groovy.lang.Closure.call(Closure.java:442)
> at 
> org.codehaus.groovy.runtime.IOGroovyMethods.withCloseable(IOGroovyMethods.java:1622)
> at 
> org.codehaus.groovy.runtime.NioGroovyMethods.withCloseable(NioGroovyMethods.java:1754)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.codehaus.groovy.runtime.metaclass.ReflectionMetaMethod.invoke(ReflectionMetaMethod.java:54)
> at 
> org.codehaus.groovy.runtime.metaclass.NewInstanceMetaMethod.invoke(NewInstanceMetaMethod.java:56)
> at 
> org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:274)
> at 
> org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56)
> at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
> at 
> org.apache.nifi.properties.ConfigEncryptionTool$_writeFlowXmlToFile_closure5.doCall(ConfigEncryptionTool.groovy:691)
> {code}
> The immediate fix was to remove the duplicated template definitions in the 
> flow definition, returning the file to a reasonable size. However, if run as 
> an inline replacement, this can cause the {{flow.xml.gz}} to be overwritten 
> with an empty file, potentially leading to data loss. The following steps 
> should be taken:
> # Guard against loading/operating on/serializing large files (log statements, 
> simple conditional checks)
> # Handle large files internally (change from direct {{String}} access to 
> {{BufferedInputStream}}, etc.)
> # Document the internal memory usage of the toolkit in the toolkit guide
> # Document best practices and steps to resolve issue in the toolkit guide



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8057) Remove truststore check from SslContextFactory.createSslContext()

2020-12-03 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243318#comment-17243318
 ] 

Joe Witt commented on NIFI-8057:


I think that is totally up to you if you're doing the work.  

> Remove truststore check from SslContextFactory.createSslContext()
> -
>
> Key: NIFI-8057
> URL: https://issues.apache.org/jira/browse/NIFI-8057
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.12.1
>Reporter: Peter Turcsanyi
>Priority: Major
>
> NIFI-7407 introduced a check in {{SslContextFactory.createSslContext()}}: if 
> KS is configured, then TS must be configured too 
> ([https://github.com/apache/nifi/blob/857eeca3c7d4b275fd698430594e7fae4864feff/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/SslContextFactory.java#L79])
> This constraint is too strict for server-style processors (like ListenGRPC) 
> where only a KS is needed for 1-way SSL (and the presence of TS turns on 
> 2-way SSL).
> The check should be removed or relaxed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8057) Remove truststore check from SslContextFactory.createSslContext()

2020-12-03 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243317#comment-17243317
 ] 

David Handermann commented on NIFI-8057:


That makes sense.  For the particular issue with the ListenGRPC Processor, the 
previous behavior can be restored by refactoring to remove the call to 
createSslContext() and removing the unnecessary references to 
SSLContext.getProvider(), without changing the behavior of 
SslContextFactory.createSslContext().

[~joewitt] Do you recommend addressing issues with other Processors under this 
issue, or creating a new issue for each Processor impacted?

> Remove truststore check from SslContextFactory.createSslContext()
> -
>
> Key: NIFI-8057
> URL: https://issues.apache.org/jira/browse/NIFI-8057
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.12.1
>Reporter: Peter Turcsanyi
>Priority: Major
>
> NIFI-7407 introduced a check in {{SslContextFactory.createSslContext()}}: if 
> KS is configured, then TS must be configured too 
> ([https://github.com/apache/nifi/blob/857eeca3c7d4b275fd698430594e7fae4864feff/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/SslContextFactory.java#L79])
> This constraint is too strict for server-style processors (like ListenGRPC) 
> where only a KS is needed for 1-way SSL (and the presence of TS turns on 
> 2-way SSL).
> The check should be removed or relaxed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8057) Remove truststore check from SslContextFactory.createSslContext()

2020-12-03 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243311#comment-17243311
 ] 

Joe Witt commented on NIFI-8057:


Fair point, we already created disruption.  But let's still try to minimize it 
as best we can.  Thanks

> Remove truststore check from SslContextFactory.createSslContext()
> -
>
> Key: NIFI-8057
> URL: https://issues.apache.org/jira/browse/NIFI-8057
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.12.1
>Reporter: Peter Turcsanyi
>Priority: Major
>
> NIFI-7407 introduced a check in {{SslContextFactory.createSslContext()}}: if 
> KS is configured, then TS must be configured too 
> ([https://github.com/apache/nifi/blob/857eeca3c7d4b275fd698430594e7fae4864feff/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/SslContextFactory.java#L79])
> This constraint is too strict for server-style processors (like ListenGRPC) 
> where only a KS is needed for 1-way SSL (and the presence of TS turns on 
> 2-way SSL).
> The check should be removed or relaxed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8057) Remove truststore check from SslContextFactory.createSslContext()

2020-12-03 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243308#comment-17243308
 ] 

David Handermann commented on NIFI-8057:


Reviewing the release history, it appears that this change was released in 
version 1.12.0, so anyone upgrading from previous versions would already be 
impacted.

Reviewing ListenGRPC more closely, it appears that the createSslContext() call 
is not necessary, since the GRPC server depends on the Netty SslContextBuilder, 
which does not use the javax.net.ssl.SSLContext.  For this particular issue, 
ListenGRPC could be refactored to support the behavior from 1.11 and previous 
versions, which would still involve the implied one-way or two-way TLS handling 
based on whether trust store properties are configured.

Other processors would need to be evaluated separately, but it seems best to 
preserve the checks for empty trust store properties introduced in 1.12.0.

As far as maintaining backward compatibility in other processors, one option 
would be to review where createSslContext() is being called, determine whether 
that behavior exists now, and introduce an additional method that would 
explicitly load the JVM default trust store.  The component could log a warning 
indicating what is happening.  Introducing explicit loading of the default 
trust store at a higher level introduces more code, but it would preserve the 
sanity checking in the NiFi SslContextFactory.
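
For reference, explicitly loading the JVM default trust store is possible with standard JDK API; a minimal sketch follows, where the helper name is hypothetical but the TrustManagerFactory calls are standard:

{code:java}
// Sketch of explicitly loading the JVM default trust store -- the helper name
// is hypothetical; the TrustManagerFactory calls are standard JDK API.
import java.security.GeneralSecurityException;
import java.security.KeyStore;
import javax.net.ssl.TrustManagerFactory;

public class DefaultTrustStoreSketch {

    // Passing a null KeyStore tells the factory to use the JVM default
    // trust material (e.g. cacerts).
    public static TrustManagerFactory loadDefaultTrustManagers()
            throws GeneralSecurityException {
        final TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null);
        return tmf;
    }
}
{code}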

> Remove truststore check from SslContextFactory.createSslContext()
> -
>
> Key: NIFI-8057
> URL: https://issues.apache.org/jira/browse/NIFI-8057
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.12.1
>Reporter: Peter Turcsanyi
>Priority: Major
>
> NIFI-7407 introduced a check in {{SslContextFactory.createSslContext()}}: if 
> KS is configured, then TS must be configured too 
> ([https://github.com/apache/nifi/blob/857eeca3c7d4b275fd698430594e7fae4864feff/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/SslContextFactory.java#L79])
> This constraint is too strict for server-style processors (like ListenGRPC) 
> where only a KS is needed for 1-way SSL (and the presence of TS turns on 
> 2-way SSL).
> The check should be removed or relaxed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8057) Remove truststore check from SslContextFactory.createSslContext()

2020-12-03 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-8057:
---
Affects Version/s: 1.12.0

> Remove truststore check from SslContextFactory.createSslContext()
> -
>
> Key: NIFI-8057
> URL: https://issues.apache.org/jira/browse/NIFI-8057
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.12.1
>Reporter: Peter Turcsanyi
>Priority: Major
>
> NIFI-7407 introduced a check in {{SslContextFactory.createSslContext()}}: if 
> KS is configured, then TS must be configured too 
> ([https://github.com/apache/nifi/blob/857eeca3c7d4b275fd698430594e7fae4864feff/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/SslContextFactory.java#L79])
> This constraint is too strict for server-style processors (like ListenGRPC) 
> where only a KS is needed for 1-way SSL (and the presence of TS turns on 
> 2-way SSL).
> The check should be removed or relaxed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8057) Remove truststore check from SslContextFactory.createSslContext()

2020-12-03 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243274#comment-17243274
 ] 

Joe Witt commented on NIFI-8057:


Any thoughts on how to do that in a backward compatible way?  Such that old 
flows would keep working as they did?

> Remove truststore check from SslContextFactory.createSslContext()
> -
>
> Key: NIFI-8057
> URL: https://issues.apache.org/jira/browse/NIFI-8057
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.1
>Reporter: Peter Turcsanyi
>Priority: Major
>
> NIFI-7407 introduced a check in {{SslContextFactory.createSslContext()}}: if 
> KS is configured, then TS must be configured too 
> ([https://github.com/apache/nifi/blob/857eeca3c7d4b275fd698430594e7fae4864feff/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/SslContextFactory.java#L79])
> This constraint is too strict for server-style processors (like ListenGRPC) 
> where only a KS is needed for 1-way SSL (and the presence of TS turns on 
> 2-way SSL).
> The check should be removed or relaxed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #947: MINIFICPP-1401 Read certificates from the Windows system store

2020-12-03 Thread GitBox


adamdebreceni commented on a change in pull request #947:
URL: https://github.com/apache/nifi-minifi-cpp/pull/947#discussion_r535328333



##
File path: libminifi/src/controllers/SSLContextService.cpp
##
@@ -269,10 +505,8 @@ void SSLContextService::onEnable() {
 }
 passphrase_file.close();
   }
-  // load CA certificates
-  if (!getProperty(caCert.getName(), ca_certificate_)) {
-logger_->log_error("Can not load CA certificate.");

Review comment:
   is the log no longer necessary?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #947: MINIFICPP-1401 Read certificates from the Windows system store

2020-12-03 Thread GitBox


adamdebreceni commented on a change in pull request #947:
URL: https://github.com/apache/nifi-minifi-cpp/pull/947#discussion_r535295187



##
File path: libminifi/include/utils/tls/CertificateUtils.h
##
@@ -0,0 +1,61 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+#ifdef OPENSSL_SUPPORT
+
+#include 
+
+#ifdef WIN32
+#include 
+#include 
+#endif  // WIN32
+
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace tls {
+
+struct EVP_PKEY_deleter {
+  void operator()(EVP_PKEY* pkey) const { EVP_PKEY_free(pkey); }
+};
+using EVP_PKEY_unique_ptr = std::unique_ptr<EVP_PKEY, EVP_PKEY_deleter>;
+
+struct X509_deleter {
+  void operator()(X509* cert) const { X509_free(cert); }
+};
+using X509_unique_ptr = std::unique_ptr<X509, X509_deleter>;
+
+#ifdef WIN32
+// Returns nullptr on errors
+X509_unique_ptr convertWindowsCertificate(const PCCERT_CONTEXT certificate);
+
+// Returns nullptr if the certificate has no associated private key, or the
+// private key could not be extracted
+EVP_PKEY_unique_ptr extractPrivateKey(const PCCERT_CONTEXT certificate);

Review comment:
   it seems like top-level `const` has no effect here; a parameter passed 
by value being `const` is an implementation detail IMO and should appear only 
in the definition
   https://abseil.io/tips/109
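
   As a side note for readers, a minimal illustration of that tip 
(hypothetical function, not code from this PR): the two declarations below 
name the same function, because top-level `const` on a by-value parameter is 
not part of the signature, so it is best kept to the definition, where it 
documents that the local copy is never reassigned.

   #include <string>

   // Header (declaration): no top-level const; callers cannot observe it.
   std::string formatValue(int value);

   // Source file (definition): const here is purely an implementation detail,
   // documenting that `value` is never reassigned in the function body.
   std::string formatValue(const int value) {
     return std::to_string(value);
   }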





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8057) Remove truststore check from SslContextFactory.createSslContext()

2020-12-03 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243228#comment-17243228
 ] 

David Handermann commented on NIFI-8057:


Removing the null check for trust store properties seems like it would obscure 
the behavior of SSLContext references.  Although some processors, such as 
ListenHTTP, use the lack of trust store properties to infer one-way TLS 
communication, it would be much better to make this behavior explicit through 
the addition of Client Authentication properties on relevant processors.

The fundamental issue is that when trust store properties are not provided, 
null is passed to SSLContext.init() as the argument for Trust Managers.  Under 
the hood, the JVM attempts to load the system default trust store, which 
includes standard public certificate authorities.  These public certificate 
authorities are not referenced for one-way TLS, since the client is not 
presenting a certificate, but they would be referenced in two-way TLS.  In that 
scenario, if someone did not pass trust store properties as part of the TLS 
configuration, but attempted to enable two-way TLS, client certificate 
validation would fail unless the client certificate was signed by a public 
certificate authority.  There is at least one other open issue for supporting 
SSLContextService configuration using the JVM default trust store when 
communicating with public HTTPS services, but using empty values to imply the 
system default trust store does not seem like the best approach.

With that background, it seems better to address this issue by adding Client 
Authentication properties to ListenGRPC and any other applicable listening 
processors.
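
To make the explicit approach concrete, here is a hedged sketch in OpenSSL 
terms (the TLS library used by the MiNiFi C++ work elsewhere in this digest, 
not NiFi's Java code); the enum and function names are illustrative 
assumptions only:

#include <openssl/ssl.h>

// Illustrative sketch: an explicit client-authentication setting for a
// server, instead of inferring one-way vs. two-way TLS from the presence
// of trust store properties.
enum class ClientAuth { NONE, WANT, REQUIRED };

void configureClientAuth(SSL_CTX* ctx, ClientAuth auth) {
  switch (auth) {
    case ClientAuth::NONE:  // one-way TLS: never request a client certificate
      SSL_CTX_set_verify(ctx, SSL_VERIFY_NONE, nullptr);
      break;
    case ClientAuth::WANT:  // request a certificate, but accept clients without one
      SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, nullptr);
      break;
    case ClientAuth::REQUIRED:  // two-way TLS: fail the handshake without a valid client certificate
      SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT, nullptr);
      break;
  }
}

An explicit property like this keeps the trust question (which CAs to accept) 
separate from the policy question (whether a client certificate is required 
at all).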

> Remove truststore check from SslContextFactory.createSslContext()
> -
>
> Key: NIFI-8057
> URL: https://issues.apache.org/jira/browse/NIFI-8057
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.1
>Reporter: Peter Turcsanyi
>Priority: Major
>
> NIFI-7407 introduced a check in {{SslContextFactory.createSslContext()}}: if 
> KS is configured, then TS must be configured too 
> ([https://github.com/apache/nifi/blob/857eeca3c7d4b275fd698430594e7fae4864feff/nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/util/SslContextFactory.java#L79])
> This constraint is too strict for server-style processors (like ListenGRPC) 
> where only a KS is needed for 1-way SSL (and the presence of TS turns on 
> 2-way SSL).
> The check should be removed or relaxed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-7906) Add graph processor with flexibility to query graph database conditioned on flowfile content and attributes

2020-12-03 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-7906.

Resolution: Fixed

> Add graph processor with flexibility to query graph database conditioned on 
> flowfile content and attributes
> ---
>
> Key: NIFI-7906
> URL: https://issues.apache.org/jira/browse/NIFI-7906
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Levi Lentz
>Assignee: Levi Lentz
>Priority: Minor
>  Labels: graph
> Fix For: 1.13.0
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> The graph bundle currently does not allow you to query the graph 
> database (as defined in the GraphClientService) with attributes or content 
> available in the flow file.
>  
> This functionality would allow users to perform dynamic queries/mutations of 
> the underlying graph data. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-4985) Allow users to define a specific offset when starting ConsumeKafka

2020-12-03 Thread Dennis Jaheruddin (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243211#comment-17243211
 ] 

Dennis Jaheruddin commented on NIFI-4985:
-

Further thought:

Starting at a certain offset is conceptually hard to define: when exactly 
would the configured offset apply?
 # If the processor has never run: yes, of course
 # If the processor was stopped and restarted: probably not
 # If a node was added to an existing cluster: definitely not
 # If the processor was deleted and re-added, or simply copied: no idea what 
we should want

Therefore, rather than saying a processor should be able to start at a certain 
offset, it may make more sense to say:

A processor should be able to set its starting offset if no offset exists yet 
for the consumer group.

To go further, we would probably first want to implement some kind of basic 
consumer offset management (my previous comment assumed this was already in 
place). At the very least:

Enable NiFi to set the consumer offset (allowing re-reads without changing the 
consumer group or doing anything on the Kafka side), and go more granular from 
there. Of course, this would still come with the four conceptual challenges 
listed above.
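
For reference, setting an explicit starting offset is simple at the client 
library level; a hedged sketch using librdkafka (the client library behind the 
MiNiFi C++ ConsumeKafka work elsewhere in this digest), assuming an already 
configured consumer handle:

#include <librdkafka/rdkafka.h>

// Hedged sketch, not NiFi code: assign one topic/partition to an existing
// librdkafka consumer so that consumption starts at a caller-chosen offset.
void startAtOffset(rd_kafka_t* consumer, const char* topic,
                   int32_t partition, int64_t offset) {
  rd_kafka_topic_partition_list_t* assignment = rd_kafka_topic_partition_list_new(1);
  rd_kafka_topic_partition_t* entry =
      rd_kafka_topic_partition_list_add(assignment, topic, partition);
  entry->offset = offset;  // an absolute offset, or RD_KAFKA_OFFSET_BEGINNING/END
  rd_kafka_assign(consumer, assignment);  // consumption starts at `offset`
  rd_kafka_topic_partition_list_destroy(assignment);
}

The hard part, as the list above shows, is not the API call but deciding when 
a configured offset should override whatever the consumer group already has.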

> Allow users to define a specific offset when starting ConsumeKafka
> --
>
> Key: NIFI-4985
> URL: https://issues.apache.org/jira/browse/NIFI-4985
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Sandish Kumar HN
>Priority: Major
>
> It'd be useful to add support for dynamic properties in the ConsumeKafka set 
> of processors so that users can define the offset to use when starting the 
> processor. The properties could be something like:
> {noformat}
> kafka...offset{noformat}
> If, for a configured topic, such a property is not defined for a given 
> partition, the consumer would use the auto offset property.
> If a custom offset is defined for a topic/partition, it'd be used when 
> initializing the consumer by calling:
> {noformat}
> seek(TopicPartition, long){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7906) Add graph processor with flexibility to query graph database conditioned on flowfile content and attributes

2020-12-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243209#comment-17243209
 ] 

ASF subversion and git services commented on NIFI-7906:
---

Commit b90a6e893d4920a0e1038ad286b96520d74ab070 in nifi's branch 
refs/heads/main from Mike Thomsen
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=b90a6e8 ]

NIFI-7906 This closes #4701. Updated test case to fix a windows-centric bug.
NIFI-7906 Removed unused test code.

Signed-off-by: Joe Witt 


> Add graph processor with flexibility to query graph database conditioned on 
> flowfile content and attributes
> ---
>
> Key: NIFI-7906
> URL: https://issues.apache.org/jira/browse/NIFI-7906
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Levi Lentz
>Assignee: Levi Lentz
>Priority: Minor
>  Labels: graph
> Fix For: 1.13.0
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> The graph bundle currently does not allow you to query the graph 
> database (as defined in the GraphClientService) with attributes or content 
> available in the flow file.
>  
> This functionality would allow users to perform dynamic queries/mutations of 
> the underlying graph data. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7906) Add graph processor with flexibility to query graph database conditioned on flowfile content and attributes

2020-12-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243210#comment-17243210
 ] 

ASF subversion and git services commented on NIFI-7906:
---

Commit b90a6e893d4920a0e1038ad286b96520d74ab070 in nifi's branch 
refs/heads/main from Mike Thomsen
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=b90a6e8 ]

NIFI-7906 This closes #4701. Updated test case to fix a windows-centric bug.
NIFI-7906 Removed unused test code.

Signed-off-by: Joe Witt 


> Add graph processor with flexibility to query graph database conditioned on 
> flowfile content and attributes
> ---
>
> Key: NIFI-7906
> URL: https://issues.apache.org/jira/browse/NIFI-7906
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Levi Lentz
>Assignee: Levi Lentz
>Priority: Minor
>  Labels: graph
> Fix For: 1.13.0
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> The graph bundle currently does not allow you to query the graph 
> database (as defined in the GraphClientService) with attributes or content 
> available in the flow file.
>  
> This functionality would allow users to perform dynamic queries/mutations of 
> the underlying graph data. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #4701: NIFI-7906 fixed windows build

2020-12-03 Thread GitBox


asfgit closed pull request #4701:
URL: https://github.com/apache/nifi/pull/4701


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-8065) ListFTP processor tries to resolve the hostname even if proxy is configured

2020-12-03 Thread Denes Arvay (Jira)
Denes Arvay created NIFI-8065:
-

 Summary: ListFTP processor tries to resolve the hostname even if 
proxy is configured
 Key: NIFI-8065
 URL: https://issues.apache.org/jira/browse/NIFI-8065
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Denes Arvay


When using a ListFTP processor with a proxy configured, it can happen that the 
instance running NiFi can't resolve the destination hostname. This should be 
fine, as it's enough for the proxy to be able to resolve it.
But NiFi still tries to resolve the hostname itself (see [1]), which leads to 
an UnknownHostException.

[1] 
https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/FTPTransfer.java#L593-L601
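
The intended behavior can be sketched at the socket level; a hedged 
illustration (POSIX getaddrinfo, hypothetical function, not FTPTransfer's 
actual code) of resolving locally only when no proxy is involved:

#include <netdb.h>
#include <sys/socket.h>

#include <stdexcept>
#include <string>

// Hedged sketch: when a proxy is configured, pass the hostname through
// unresolved and let the proxy perform DNS; only resolve locally otherwise.
void prepareConnection(const std::string& host, const std::string& port, bool via_proxy) {
  if (via_proxy) {
    return;  // hand `host` to the proxy (e.g. in its CONNECT request) as-is
  }
  addrinfo hints{};
  hints.ai_family = AF_UNSPEC;      // IPv4 or IPv6
  hints.ai_socktype = SOCK_STREAM;  // TCP
  addrinfo* result = nullptr;
  if (getaddrinfo(host.c_str(), port.c_str(), &hints, &result) != 0) {
    // the analogue of the UnknownHostException described above
    throw std::runtime_error("cannot resolve " + host);
  }
  freeaddrinfo(result);
}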



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-7884) Separate "read-filesystem" restricted permission into local file system and HDFS file system permissions

2020-12-03 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann reassigned NIFI-7884:
--

Assignee: David Handermann

> Separate "read-filesystem" restricted permission into local file system and 
> HDFS file system permissions
> 
>
> Key: NIFI-7884
> URL: https://issues.apache.org/jira/browse/NIFI-7884
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Affects Versions: 1.12.1
>Reporter: Andy LoPresto
>Assignee: David Handermann
>Priority: Major
>  Labels: file-system, hdfs, restricted, security
>
> Currently the {{read-filesystem}} value for {{RequiredPermission}} is used 
> for both the processors which read directly from the local file system of the 
> machine hosting NiFi ({{GetFile}}, {{ListFile}}, etc.) and the processors 
> which read from external file systems like HDFS ({{GetHDFS}}, {{PutHDFS}}, 
> etc.). There are use cases where NiFi users should be able to interact with 
> the HDFS file system without having permissions to access the local file 
> system. 
> This will also require introducing a global setting in {{nifi.properties}} 
> that an admin can set to allow local file system access via the HDFS 
> processors (default {{true}} for backward compatibility), and additional 
> validation logic in the HDFS processors (ideally the abstract shared logic) 
> to ensure that if this setting is disabled, the HDFS processors are not 
> accessing the local file system via the {{file:///}} protocol in their 
> configuration. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-7885) Add global property for LFS access from HDFS processors

2020-12-03 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann reassigned NIFI-7885:
--

Assignee: David Handermann

> Add global property for LFS access from HDFS processors
> ---
>
> Key: NIFI-7885
> URL: https://issues.apache.org/jira/browse/NIFI-7885
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Configuration, Core Framework, Extensions
>Affects Versions: 1.12.1
>Reporter: Andy LoPresto
>Assignee: David Handermann
>Priority: Major
>  Labels: file-system, permission, properties, security, validation
>
> From https://issues.apache.org/jira/browse/NIFI-7884: 
> {quote}
> This will also require introducing a global setting in {{nifi.properties}} 
> that an admin can set to allow local file system access via the HDFS 
> processors (default {{true}} for backward compatibility), and additional 
> validation logic in the HDFS processors (ideally the abstract shared logic) 
> to ensure that if this setting is disabled, the HDFS processors are not 
> accessing the local file system via the {{file:///}} protocol in their 
> configuration. 
> {quote}
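
A hedged sketch of the validation idea (hypothetical names; the real check 
would live in the shared abstract HDFS processor logic):

#include <string>

// Sketch: when the global nifi.properties setting disallows local file
// system access, reject any HDFS processor resource using the file: scheme.
bool violatesLocalFileSystemPolicy(const std::string& resource, bool allow_local_fs) {
  const bool is_local = resource.rfind("file:", 0) == 0;  // starts with "file:"
  return is_local && !allow_local_fs;
}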



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #947: MINIFICPP-1401 Read certificates from the Windows system store

2020-12-03 Thread GitBox


lordgamez commented on a change in pull request #947:
URL: https://github.com/apache/nifi-minifi-cpp/pull/947#discussion_r535199561



##
File path: libminifi/src/utils/tls/WindowsCertStoreLocation.cpp
##
@@ -0,0 +1,85 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifdef WIN32
+
+#include "utils/tls/WindowsCertStoreLocation.h"
+
+#include 
+
+#include 
+#include 
+
+#pragma comment(lib, "crypt32.lib")
+#pragma comment(lib, "Ws2_32.lib")
+
+namespace {
+
+constexpr std::array, 8> SYSTEM_STORE_LOCATIONS{{

Review comment:
   I see, thanks for the explanation :)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #947: MINIFICPP-1401 Read certificates from the Windows system store

2020-12-03 Thread GitBox


fgerlits commented on a change in pull request #947:
URL: https://github.com/apache/nifi-minifi-cpp/pull/947#discussion_r535196611



##
File path: libminifi/src/utils/tls/WindowsCertStoreLocation.cpp
##
@@ -0,0 +1,85 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifdef WIN32
+
+#include "utils/tls/WindowsCertStoreLocation.h"
+
+#include 
+
+#include 
+#include 
+
+#pragma comment(lib, "crypt32.lib")
+#pragma comment(lib, "Ws2_32.lib")
+
+namespace {
+
+constexpr std::array, 8> SYSTEM_STORE_LOCATIONS{{

Review comment:
   There is a reason, but not a very good one.  :)
   
   Originally both were `std::pair`; I prefer that as it means less code, and 
it is more similar to `std::map`.  But the constructor of `std::pair` is not 
constexpr before C++14, and AppleClang is strict about that, so I had to change 
`std::pair` to a custom struct in code which is compiled on MacOS.  
`WindowsCertStoreLocation` is only compiled on Windows, which is on C++14 
already, so I could use `std::pair` here.
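
   A minimal illustration of the constraint (hypothetical names, not the PR's 
actual tables): in C++11 the `std::pair` constructor is not constexpr, while 
an aggregate struct, like the KeyValuePair in ExtendedKeyUsage, initializes 
fine in a constexpr table.

   // C++11-compatible: aggregate initialization of a plain struct works in a
   // constexpr table, where std::pair's (non-constexpr) constructor does not.
   struct KeyValuePair {
     const char* key;
     int value;
   };

   constexpr KeyValuePair TABLE[] = {
       {"first", 1},
       {"second", 2},
   };

   static_assert(TABLE[1].value == 2, "usable in constant expressions");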





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #947: MINIFICPP-1401 Read certificates from the Windows system store

2020-12-03 Thread GitBox


lordgamez commented on a change in pull request #947:
URL: https://github.com/apache/nifi-minifi-cpp/pull/947#discussion_r535137971



##
File path: libminifi/src/utils/tls/WindowsCertStoreLocation.cpp
##
@@ -0,0 +1,85 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifdef WIN32
+
+#include "utils/tls/WindowsCertStoreLocation.h"
+
+#include 
+
+#include 
+#include 
+
+#pragma comment(lib, "crypt32.lib")
+#pragma comment(lib, "Ws2_32.lib")
+
+namespace {
+
+constexpr std::array, 8> SYSTEM_STORE_LOCATIONS{{

Review comment:
   Is there any reason for using std::pair here but a custom struct 
KeyValuePair in ExtendedKeyUsage? It just tickles my OCD :)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-03 Thread GitBox


hunyadi-dev commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534998855



##
File path: libminifi/include/utils/ProcessorConfigUtils.h
##
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+
+#include "utils/StringUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+std::string getRequiredPropertyOrThrow(const core::ProcessContext* context, 
const std::string& property_name) {

Review comment:
   Thanks!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-03 Thread GitBox


lordgamez commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534990092



##
File path: extensions/librdkafka/tests/ConsumeKafkaTests.cpp
##
@@ -0,0 +1,595 @@
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define CATCH_CONFIG_MAIN
+
+#include 
+#include 
+#include 
+#include 
+
+// #include "TestBase.h"
+#include "../../../libminifi/test/TestBase.h"
+
+#include "../ConsumeKafka.h"
+#include "../rdkafka_utils.h"
+#include "../../standard-processors/processors/ExtractText.h"
+#include "utils/file/FileUtils.h"
+#include "utils/OptionalUtils.h"
+#include "utils/RegexUtils.h"
+#include "utils/StringUtils.h"
+#include "utils/TestUtils.h"
+
+#include "utils/IntegrationTestUtils.h"
+
+namespace {
+using org::apache::nifi::minifi::utils::optional;
+
+class KafkaTestProducer {
+ public:
+  enum class PublishEvent {
+PUBLISH,
+TRANSACTION_START,
+TRANSACTION_COMMIT,
+CANCEL
+  };
+  KafkaTestProducer(const std::string& kafka_brokers, const std::string& 
topic, const bool transactional) :
+  logger_(logging::LoggerFactory<KafkaTestProducer>::getLogger()) {
+using utils::setKafkaConfigurationField;
+
+std::unique_ptr<rd_kafka_conf_t, utils::rd_kafka_conf_deleter> conf = { 
rd_kafka_conf_new(), utils::rd_kafka_conf_deleter() };
+
+setKafkaConfigurationField(conf.get(), "bootstrap.servers", kafka_brokers);
+// setKafkaConfigurationField(conf.get(), "client.id", 
PRODUCER_CLIENT_NAME);
+setKafkaConfigurationField(conf.get(), "compression.codec", "snappy");
+setKafkaConfigurationField(conf.get(), "batch.num.messages", "1");
+
+if (transactional) {
+  setKafkaConfigurationField(conf.get(), "transactional.id", 
"ConsumeKafkaTest_transaction_id");
+}
+
+static std::array errstr{};
+producer_ = { rd_kafka_new(RD_KAFKA_PRODUCER, conf.release(), 
errstr.data(), errstr.size()), utils::rd_kafka_producer_deleter() };
+if (producer_ == nullptr) {
+  auto error_msg = utils::StringUtils::join_pack("Failed to create Kafka 
producer %s", errstr.data());
+  throw std::runtime_error(error_msg);
+}
+
+// The last argument is a config here, but it is already owned by the 
consumer. I assume that this would mean an override on the original config if 
used
+topic_ = { rd_kafka_topic_new(producer_.get(), topic.c_str(), nullptr), 
utils::rd_kafka_topic_deleter() };
+
+if (transactional) {
+  rd_kafka_init_transactions(producer_.get(), 
TRANSACTIONS_TIMEOUT_MS.count());
+}
+  }
+
+  // Uses all the headers for every published message
+  void publish_messages_to_topic(
+  const std::vector<std::string>& messages_on_topic, const std::string& 
message_key, std::vector<PublishEvent> events,
+  const std::vector<std::pair<std::string, std::string>>& message_headers, 
const optional<std::string>& message_header_encoding) {
+auto next_message = messages_on_topic.cbegin();
+for (const PublishEvent event : events) {
+  switch (event) {
+case PublishEvent::PUBLISH:
+  REQUIRE(messages_on_topic.cend() != next_message);
+  publish_message(*next_message, message_key, message_headers, 
message_header_encoding);
+  std::advance(next_message, 1);
+  break;
+case PublishEvent::TRANSACTION_START:
+  logger_->log_debug("Starting new transaction...");
+  rd_kafka_begin_transaction(producer_.get());
+  break;
+case PublishEvent::TRANSACTION_COMMIT:
+  logger_->log_debug("Committing transaction...");
+  rd_kafka_commit_transaction(producer_.get(), 
TRANSACTIONS_TIMEOUT_MS.count());
+  break;
+case PublishEvent::CANCEL:
+  logger_->log_debug("Cancelling transaction...");
+  rd_kafka_abort_transaction(producer_.get(), 
TRANSACTIONS_TIMEOUT_MS.count());
+  }
+}
+  }
+
+ private:
+  void publish_message(
+  const std::string& message, const std::string& message_key, const 
std::vector<std::pair<std::string, std::string>>& message_headers, const 
optional<std::string>& message_header_encoding) {
+logger_->log_debug("Producing: %s", message.c_str());
+std::unique_ptr<rd_kafka_headers_t, utils::rd_kafka_headers_deleter> 
headers(rd_kafka_headers_new(message_headers.size()), 
utils::rd_kafka_headers_deleter());
+if (!headers) {
+  throw std::runtime_error("Generating message headers failed.");
+}
+ 

[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-03 Thread GitBox


hunyadi-dev commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534973423



##
File path: libminifi/test/TestBase.cpp
##
@@ -247,45 +221,65 @@ void TestPlan::reset(bool reschedule) {
   }
 }
 
-bool TestPlan::runNextProcessor(std::function<void(const 
std::shared_ptr<core::ProcessContext>, const 
std::shared_ptr<core::ProcessSession>)> verify) {
-  if (!finalized) {
-finalize();
+std::vector<std::shared_ptr<core::Processor>>::iterator 
TestPlan::getProcessorItByUuid(const std::string& uuid) {
+  const auto processor_node_matches_processor = [&uuid] (const 
std::shared_ptr<core::Processor>& processor) {
+return processor->getUUIDStr() == uuid;
+  };
+  auto processor_found_at = std::find_if(processor_queue_.begin(), 
processor_queue_.end(), processor_node_matches_processor);
+  if (processor_found_at == processor_queue_.end()) {
+throw std::runtime_error("Processor not found in test plan.");
   }
-  logger_->log_info("Running next processor %d, processor_queue_.size %d, 
processor_contexts_.size %d", location, processor_queue_.size(), 
processor_contexts_.size());
-  std::lock_guard guard(mutex);
-  location++;
-  std::shared_ptr<core::Processor> processor = processor_queue_.at(location);
-  std::shared_ptr<core::ProcessContext> context = 
processor_contexts_.at(location);
-  std::shared_ptr<core::ProcessSessionFactory> factory = 
std::make_shared<core::ProcessSessionFactory>(context);
-  factories_.push_back(factory);
+  return processor_found_at;
+}
+
+std::shared_ptr<core::ProcessContext> 
TestPlan::getProcessContextForProcessor(const std::shared_ptr<core::Processor>& 
processor) {
+  const auto contextMatchesProcessor = [&processor] (const 
std::shared_ptr<core::ProcessContext>& context) {
+return context->getProcessorNode()->getUUIDStr() == 
processor->getUUIDStr();
+  };
+  const auto context_found_at = std::find_if(processor_contexts_.begin(), 
processor_contexts_.end(), contextMatchesProcessor);
+  if (context_found_at == processor_contexts_.end()) {
+throw std::runtime_error("Context not found in test plan.");
+  }
+  return *context_found_at;
+}
+
+void TestPlan::schedule_processors() {
+  for (std::size_t target_location = 0; target_location < 
processor_queue_.size(); ++target_location) {
+std::shared_ptr<core::Processor> processor = 
processor_queue_.at(target_location);
+std::shared_ptr<core::ProcessContext> context = 
processor_contexts_.at(target_location);
+schedule_processor(processor, context);
+  }
+}
+
+void TestPlan::schedule_processor(const std::shared_ptr<core::Processor>& 
processor) {
+  schedule_processor(processor, getProcessContextForProcessor(processor));
+}
+
+void TestPlan::schedule_processor(const std::shared_ptr<core::Processor>& 
processor, const std::shared_ptr<core::ProcessContext>& context) {

Review comment:
   Will change this to camelCase.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-03 Thread GitBox


hunyadi-dev commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534960543



##
File path: extensions/librdkafka/tests/ConsumeKafkaTests.cpp
##
@@ -0,0 +1,595 @@
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define CATCH_CONFIG_MAIN
+
+#include 
+#include 
+#include 
+#include 
+
+// #include "TestBase.h"
+#include "../../../libminifi/test/TestBase.h"
+
+#include "../ConsumeKafka.h"
+#include "../rdkafka_utils.h"
+#include "../../standard-processors/processors/ExtractText.h"
+#include "utils/file/FileUtils.h"
+#include "utils/OptionalUtils.h"
+#include "utils/RegexUtils.h"
+#include "utils/StringUtils.h"
+#include "utils/TestUtils.h"
+
+#include "utils/IntegrationTestUtils.h"
+
+namespace {
+using org::apache::nifi::minifi::utils::optional;
+
+class KafkaTestProducer {
+ public:
+  enum class PublishEvent {
+PUBLISH,
+TRANSACTION_START,
+TRANSACTION_COMMIT,
+CANCEL
+  };
+  KafkaTestProducer(const std::string& kafka_brokers, const std::string& 
topic, const bool transactional) :
+  logger_(logging::LoggerFactory<KafkaTestProducer>::getLogger()) {
+using utils::setKafkaConfigurationField;
+
+std::unique_ptr<rd_kafka_conf_t, utils::rd_kafka_conf_deleter> conf = { 
rd_kafka_conf_new(), utils::rd_kafka_conf_deleter() };
+
+setKafkaConfigurationField(conf.get(), "bootstrap.servers", kafka_brokers);
+// setKafkaConfigurationField(conf.get(), "client.id", 
PRODUCER_CLIENT_NAME);
+setKafkaConfigurationField(conf.get(), "compression.codec", "snappy");
+setKafkaConfigurationField(conf.get(), "batch.num.messages", "1");
+
+if (transactional) {
+  setKafkaConfigurationField(conf.get(), "transactional.id", 
"ConsumeKafkaTest_transaction_id");
+}
+
+static std::array errstr{};
+producer_ = { rd_kafka_new(RD_KAFKA_PRODUCER, conf.release(), 
errstr.data(), errstr.size()), utils::rd_kafka_producer_deleter() };
+if (producer_ == nullptr) {
+  auto error_msg = utils::StringUtils::join_pack("Failed to create Kafka 
producer %s", errstr.data());
+  throw std::runtime_error(error_msg);
+}
+
+// The last argument is a config here, but it is already owned by the 
consumer. I assume that this would mean an override on the original config if 
used
+topic_ = { rd_kafka_topic_new(producer_.get(), topic.c_str(), nullptr), 
utils::rd_kafka_topic_deleter() };
+
+if (transactional) {
+  rd_kafka_init_transactions(producer_.get(), 
TRANSACTIONS_TIMEOUT_MS.count());
+}
+  }
+
+  // Uses all the headers for every published message
+  void publish_messages_to_topic(
+  const std::vector<std::string>& messages_on_topic, const std::string& 
message_key, std::vector<PublishEvent> events,
+  const std::vector<std::pair<std::string, std::string>>& message_headers, 
const optional<std::string>& message_header_encoding) {
+auto next_message = messages_on_topic.cbegin();
+for (const PublishEvent event : events) {
+  switch (event) {
+case PublishEvent::PUBLISH:
+  REQUIRE(messages_on_topic.cend() != next_message);
+  publish_message(*next_message, message_key, message_headers, 
message_header_encoding);
+  std::advance(next_message, 1);
+  break;
+case PublishEvent::TRANSACTION_START:
+  logger_->log_debug("Starting new transaction...");
+  rd_kafka_begin_transaction(producer_.get());
+  break;
+case PublishEvent::TRANSACTION_COMMIT:
+  logger_->log_debug("Committing transaction...");
+  rd_kafka_commit_transaction(producer_.get(), 
TRANSACTIONS_TIMEOUT_MS.count());
+  break;
+case PublishEvent::CANCEL:
+  logger_->log_debug("Cancelling transaction...");
+  rd_kafka_abort_transaction(producer_.get(), 
TRANSACTIONS_TIMEOUT_MS.count());
+  }
+}
+  }
+
+ private:
+  void publish_message(
+  const std::string& message, const std::string& message_key, const 
std::vector<std::pair<std::string, std::string>>& message_headers, const 
optional<std::string>& message_header_encoding) {
+logger_->log_debug("Producing: %s", message.c_str());
+std::unique_ptr<rd_kafka_headers_t, utils::rd_kafka_headers_deleter> 
headers(rd_kafka_headers_new(message_headers.size()), 
utils::rd_kafka_headers_deleter());
+if (!headers) {
+  throw std::runtime_error("Generating message headers failed.");
+}

[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-03 Thread GitBox


hunyadi-dev commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534946721



##
File path: extensions/librdkafka/tests/ConsumeKafkaTests.cpp
##
@@ -0,0 +1,595 @@
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define CATCH_CONFIG_MAIN
+
+#include 
+#include 
+#include 
+#include 
+
+// #include "TestBase.h"
+#include "../../../libminifi/test/TestBase.h"
+
+#include "../ConsumeKafka.h"
+#include "../rdkafka_utils.h"
+#include "../../standard-processors/processors/ExtractText.h"
+#include "utils/file/FileUtils.h"
+#include "utils/OptionalUtils.h"
+#include "utils/RegexUtils.h"
+#include "utils/StringUtils.h"
+#include "utils/TestUtils.h"
+
+#include "utils/IntegrationTestUtils.h"
+
+namespace {
+using org::apache::nifi::minifi::utils::optional;
+
+class KafkaTestProducer {
+ public:
+  enum class PublishEvent {
+PUBLISH,
+TRANSACTION_START,
+TRANSACTION_COMMIT,
+CANCEL
+  };
+  KafkaTestProducer(const std::string& kafka_brokers, const std::string& 
topic, const bool transactional) :
+  logger_(logging::LoggerFactory<KafkaTestProducer>::getLogger()) {
+using utils::setKafkaConfigurationField;
+
+std::unique_ptr<rd_kafka_conf_t, utils::rd_kafka_conf_deleter> conf = { 
rd_kafka_conf_new(), utils::rd_kafka_conf_deleter() };
+
+setKafkaConfigurationField(conf.get(), "bootstrap.servers", kafka_brokers);
+// setKafkaConfigurationField(conf.get(), "client.id", 
PRODUCER_CLIENT_NAME);

Review comment:
   It was not obvious to me, so I left it commented out and forgotten about 
it. It is probably completely unneccessary as the testing went fine without it, 
will remove :)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-03 Thread GitBox


hunyadi-dev commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534941010



##
File path: extensions/librdkafka/tests/ConsumeKafkaTests.cpp
##
@@ -0,0 +1,595 @@
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define CATCH_CONFIG_MAIN
+
+#include 
+#include 
+#include 
+#include 
+
+// #include "TestBase.h"

Review comment:
   Ah, forgot about this. I would prefer this as the way of including 
TestBase instead of the long relative path below. Will correct this by adding 
the `libminifi/test` include directory to the cmake file. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-03 Thread GitBox


hunyadi-dev commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534937049



##
File path: extensions/librdkafka/tests/CMakeLists.txt
##
@@ -29,8 +29,11 @@ FOREACH(testfile ${KAFKA_TESTS})
 createTests("${testfilename}")
 MATH(EXPR KAFKA_TEST_COUNT "${KAFKA_TEST_COUNT}+1")
 # The line below handles integration test
-add_test(NAME "${testfilename}" COMMAND "${testfilename}" 
"${TEST_RESOURCES}/TestKafkaOnSchedule.yml"  "${TEST_RESOURCES}/")
+   target_include_directories(${testfilename} BEFORE PRIVATE 
"../../standard-processors/processors")

Review comment:
   Good catch!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-03 Thread GitBox


hunyadi-dev commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534936351



##
File path: extensions/librdkafka/rdkafka_utils.h
##
@@ -0,0 +1,104 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/OptionalUtils.h"
+#include "rdkafka.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+enum class KafkaEncoding {
+  UTF8,
+  HEX
+};
+
+struct rd_kafka_conf_deleter {
+  void operator()(rd_kafka_conf_t* ptr) const noexcept { 
rd_kafka_conf_destroy(ptr); }
+};
+
+struct rd_kafka_producer_deleter {
+  void operator()(rd_kafka_t* ptr) const noexcept {
+rd_kafka_resp_err_t flush_ret = rd_kafka_flush(ptr, 1 /* ms */);  // 
Matching the wait time of KafkaConnection.cpp
+// If concerned, we could log potential errors here:
+// if (RD_KAFKA_RESP_ERR__TIMED_OUT == flush_ret) {
+//   std::cerr << "Deleting producer failed: time-out while trying to 
flush" << std::endl;
+// }

Review comment:
   I think this is nice to have here, as the error enum returned is not 
obvious. Another point is that one might not immediately think this could be a 
point of failure. I would have added this as a debug log, but that would mean 
that every deleter would have to have access to a logger.
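
   One hedged way around that (illustrative only; the timeout value and the 
stderr sink are assumptions, not this PR's code) is to let the deleter report 
the failure itself through a process-wide sink, so call sites need no logger:

   #include <cstdio>

   #include "rdkafka.h"

   // Sketch: a producer deleter that logs a flush timeout to stderr before
   // destroying the handle, so no per-call-site logger wiring is needed.
   struct logging_rd_kafka_producer_deleter {
     void operator()(rd_kafka_t* ptr) const noexcept {
       const rd_kafka_resp_err_t flush_ret = rd_kafka_flush(ptr, 10000 /* ms, assumed */);
       if (flush_ret == RD_KAFKA_RESP_ERR__TIMED_OUT) {
         std::fprintf(stderr, "Deleting producer failed: timed out while flushing\n");
       }
       rd_kafka_destroy(ptr);
     }
   };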





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-03 Thread GitBox


hunyadi-dev commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534927466



##
File path: extensions/librdkafka/ConsumeKafka.cpp
##
@@ -0,0 +1,522 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ConsumeKafka.h"
+
+#include 
+#include 
+
+#include "core/PropertyValidation.h"
+#include "utils/ProcessorConfigUtils.h"
+#include "utils/gsl.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+constexpr const std::size_t ConsumeKafka::DEFAULT_MAX_POLL_RECORDS;
+constexpr char const* ConsumeKafka::DEFAULT_MAX_POLL_TIME;
+
+core::Property 
ConsumeKafka::KafkaBrokers(core::PropertyBuilder::createProperty("Kafka 
Brokers")
+  ->withDescription("A comma-separated list of known Kafka Brokers in the 
format <host>:<port>.")
+  ->withDefaultValue("localhost:9092", 
core::StandardValidators::get().NON_BLANK_VALIDATOR)
+  ->supportsExpressionLanguage(true)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::SecurityProtocol(core::PropertyBuilder::createProperty("Security 
Protocol")
+  ->withDescription("This property is currently not supported. Protocol used 
to communicate with brokers. Corresponds to Kafka's 'security.protocol' 
property.")
+  ->withAllowableValues<std::string>({SECURITY_PROTOCOL_PLAINTEXT/*, 
SECURITY_PROTOCOL_SSL, SECURITY_PROTOCOL_SASL_PLAINTEXT, 
SECURITY_PROTOCOL_SASL_SSL*/ })
+  ->withDefaultValue(SECURITY_PROTOCOL_PLAINTEXT)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::TopicNames(core::PropertyBuilder::createProperty("Topic Names")
+  ->withDescription("The name of the Kafka Topic(s) to pull from. More than 
one can be supplied if comma separated.")
+  ->supportsExpressionLanguage(true)
+  ->build());
+
+core::Property 
ConsumeKafka::TopicNameFormat(core::PropertyBuilder::createProperty("Topic Name 
Format")
+  ->withDescription("Specifies whether the Topic(s) provided are a comma 
separated list of names or a single regular expression.")
+  ->withAllowableValues<std::string>({TOPIC_FORMAT_NAMES, 
TOPIC_FORMAT_PATTERNS})
+  ->withDefaultValue(TOPIC_FORMAT_NAMES)
+  ->build());
+
+core::Property 
ConsumeKafka::HonorTransactions(core::PropertyBuilder::createProperty("Honor 
Transactions")
+  ->withDescription(
+  "Specifies whether or not NiFi should honor transactional guarantees 
when communicating with Kafka. If false, the Processor will use an \"isolation 
level\" of "
+  "read_uncomitted. This means that messages will be received as soon as 
they are written to Kafka but will be pulled, even if the producer cancels the 
transactions. "
+  "If this value is true, NiFi will not receive any messages for which the 
producer's transaction was canceled, but this can result in some latency since 
the consumer "
+  "must wait for the producer to finish its entire transaction instead of 
pulling as the messages become available.")
+  ->withDefaultValue<bool>(true)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::GroupID(core::PropertyBuilder::createProperty("Group ID")
+  ->withDescription("A Group ID is used to identify consumers that are within 
the same consumer group. Corresponds to Kafka's 'group.id' property.")
+  ->supportsExpressionLanguage(true)
+  ->build());
+
+core::Property 
ConsumeKafka::OffsetReset(core::PropertyBuilder::createProperty("Offset Reset")
+  ->withDescription("Allows you to manage the condition when there is no 
initial offset in Kafka or if the current offset does not exist any more on the 
server (e.g. because that "
+  "data has been deleted). Corresponds to Kafka's 'auto.offset.reset' 
property.")
+  ->withAllowableValues({OFFSET_RESET_EARLIEST, 
OFFSET_RESET_LATEST, OFFSET_RESET_NONE})
+  ->withDefaultValue(OFFSET_RESET_LATEST)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::KeyAttributeEncoding(core::PropertyBuilder::createProperty("Key 
Attribute Encoding")
+  ->withDescription("FlowFiles that are emitted have an attribute named 
'kafka.key'. This property dictates how the value of the attribute should be 
encoded.")
+  ->withAllowableValues({KEY_ATTR_ENCODING_UTF_8, 
KEY_ATTR_ENCODING_HEX})
+  
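
A reviewer-side aside: the "Topic Name Format" property above switches between a comma-separated list of names and a single regular expression. The following is a minimal sketch, not code from this PR, of how such a property value could be resolved before subscribing; it assumes librdkafka's convention that subscription entries beginning with '^' are treated as regex patterns, and the function name is illustrative only.

// Minimal illustrative sketch (not from the PR): resolving a "Topic Names"
// value according to a "Topic Name Format" switch. Assumes librdkafka's
// convention that subscription entries starting with '^' are regex patterns.
#include <sstream>
#include <string>
#include <vector>

std::vector<std::string> resolveTopics(const std::string& value, bool is_pattern) {
  std::vector<std::string> topics;
  if (is_pattern) {
    // A single regular expression: ensure the leading '^' marker is present.
    topics.push_back(value.rfind('^', 0) == 0 ? value : "^" + value);
    return topics;
  }
  // A comma-separated list of literal topic names.
  std::istringstream stream(value);
  std::string topic;
  while (std::getline(stream, topic, ',')) {
    if (!topic.empty()) {
      topics.push_back(topic);
    }
  }
  return topics;
}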

[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-03 Thread GitBox


hunyadi-dev commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534926679



##
File path: extensions/librdkafka/ConsumeKafka.cpp
##
@@ -0,0 +1,522 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ConsumeKafka.h"
+
+#include 
+#include 
+
+#include "core/PropertyValidation.h"
+#include "utils/ProcessorConfigUtils.h"
+#include "utils/gsl.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+constexpr const std::size_t ConsumeKafka::DEFAULT_MAX_POLL_RECORDS;
+constexpr char const* ConsumeKafka::DEFAULT_MAX_POLL_TIME;
+
+core::Property ConsumeKafka::KafkaBrokers(core::PropertyBuilder::createProperty("Kafka Brokers")
+  ->withDescription("A comma-separated list of known Kafka Brokers in the format <host>:<port>.")
+  ->withDefaultValue("localhost:9092", core::StandardValidators::get().NON_BLANK_VALIDATOR)
+  ->supportsExpressionLanguage(true)
+  ->isRequired(true)
+  ->build());
+
+core::Property ConsumeKafka::SecurityProtocol(core::PropertyBuilder::createProperty("Security Protocol")
+  ->withDescription("This property is currently not supported. Protocol used to communicate with brokers. Corresponds to Kafka's 'security.protocol' property.")
+  ->withAllowableValues<std::string>({SECURITY_PROTOCOL_PLAINTEXT/*, SECURITY_PROTOCOL_SSL, SECURITY_PROTOCOL_SASL_PLAINTEXT, SECURITY_PROTOCOL_SASL_SSL*/})
+  ->withDefaultValue(SECURITY_PROTOCOL_PLAINTEXT)
+  ->isRequired(true)
+  ->build());
+
+core::Property ConsumeKafka::TopicNames(core::PropertyBuilder::createProperty("Topic Names")
+  ->withDescription("The name of the Kafka Topic(s) to pull from. More than one can be supplied if comma separated.")
+  ->supportsExpressionLanguage(true)
+  ->build());
+
+core::Property ConsumeKafka::TopicNameFormat(core::PropertyBuilder::createProperty("Topic Name Format")
+  ->withDescription("Specifies whether the Topic(s) provided are a comma separated list of names or a single regular expression.")
+  ->withAllowableValues<std::string>({TOPIC_FORMAT_NAMES, TOPIC_FORMAT_PATTERNS})
+  ->withDefaultValue(TOPIC_FORMAT_NAMES)
+  ->build());
+
+core::Property ConsumeKafka::HonorTransactions(core::PropertyBuilder::createProperty("Honor Transactions")
+  ->withDescription(
+      "Specifies whether or not NiFi should honor transactional guarantees when communicating with Kafka. If false, the Processor will use an \"isolation level\" of "
+      "read_uncommitted. This means that messages will be received as soon as they are written to Kafka but will be pulled, even if the producer cancels the transactions. "
+      "If this value is true, NiFi will not receive any messages for which the producer's transaction was canceled, but this can result in some latency since the consumer "
+      "must wait for the producer to finish its entire transaction instead of pulling as the messages become available.")
+  ->withDefaultValue(true)
+  ->isRequired(true)
+  ->build());
+
+core::Property ConsumeKafka::GroupID(core::PropertyBuilder::createProperty("Group ID")
+  ->withDescription("A Group ID is used to identify consumers that are within the same consumer group. Corresponds to Kafka's 'group.id' property.")
+  ->supportsExpressionLanguage(true)
+  ->build());
+
+core::Property ConsumeKafka::OffsetReset(core::PropertyBuilder::createProperty("Offset Reset")
+  ->withDescription("Allows you to manage the condition when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that "
+      "data has been deleted). Corresponds to Kafka's 'auto.offset.reset' property.")
+  ->withAllowableValues<std::string>({OFFSET_RESET_EARLIEST, OFFSET_RESET_LATEST, OFFSET_RESET_NONE})
+  ->withDefaultValue(OFFSET_RESET_LATEST)
+  ->isRequired(true)
+  ->build());
+
+core::Property ConsumeKafka::KeyAttributeEncoding(core::PropertyBuilder::createProperty("Key Attribute Encoding")
+  ->withDescription("FlowFiles that are emitted have an attribute named 'kafka.key'. This property dictates how the value of the attribute should be encoded.")
+  ->withAllowableValues<std::string>({KEY_ATTR_ENCODING_UTF_8, KEY_ATTR_ENCODING_HEX})
+  
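
For readers without the full file (the archive cuts the diff off here), the properties above largely map onto standard librdkafka configuration keys: "Kafka Brokers" to bootstrap.servers, "Group ID" to group.id, "Offset Reset" to auto.offset.reset, and "Honor Transactions" to isolation.level (read_committed vs. read_uncommitted). Below is a minimal sketch of that mapping, assuming the stock librdkafka C API; the broker address, group id, and the helper name createConsumer() are placeholders, not code from the PR.

// Minimal illustrative sketch (not from the PR): mapping the properties above
// onto librdkafka consumer configuration. All concrete values are placeholders.
#include <stdexcept>
#include <librdkafka/rdkafka.h>

rd_kafka_t* createConsumer() {
  char errstr[512];
  rd_kafka_conf_t* conf = rd_kafka_conf_new();
  const char* settings[][2] = {
      {"bootstrap.servers", "localhost:9092"},   // "Kafka Brokers"
      {"group.id", "example-consumer-group"},    // "Group ID"
      {"auto.offset.reset", "latest"},           // "Offset Reset"
      {"isolation.level", "read_committed"},     // "Honor Transactions" = true
  };
  for (const auto& kv : settings) {
    if (rd_kafka_conf_set(conf, kv[0], kv[1], errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
      rd_kafka_conf_destroy(conf);
      throw std::runtime_error(errstr);
    }
  }
  // On success, rd_kafka_new() takes ownership of conf.
  rd_kafka_t* consumer = rd_kafka_new(RD_KAFKA_CONSUMER, conf, errstr, sizeof(errstr));
  if (consumer == nullptr) {
    throw std::runtime_error(errstr);
  }
  rd_kafka_poll_set_consumer(consumer);  // route messages to the consumer queue
  return consumer;
}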

[GitHub] [nifi] r65535 opened a new pull request #4705: Added FIFO options to PutSQS

2020-12-03 Thread GitBox


r65535 opened a new pull request #4705:
URL: https://github.com/apache/nifi/pull/4705


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [x] Have you verified that the full build is successful on JDK 8?
   - [x] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
`.name` (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org