[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up minifiexe and MINIFI_HOME logic

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up 
minifiexe and MINIFI_HOME logic
URL: https://github.com/apache/nifi-minifi-cpp/pull/709#discussion_r367275409
 
 

 ##
 File path: libminifi/src/utils/file/PathUtils.cpp
 ##
 @@ -48,6 +56,35 @@ bool PathUtils::getFileNameAndPath(const std::string &path, 
std::string &filePat
   return true;
 }
 
+std::string PathUtils::getFullPath(const std::string& path) {
+#ifdef WIN32
+  std::vector<char> buffer(MAX_PATH);
+  uint32_t len = 0U;
+  while (true) {
+    len = GetFullPathNameA(path.c_str(), buffer.size(), buffer.data(), nullptr /*lpFilePart*/);
 
 Review comment:
   nit - GetFullPathName could be used.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up minifiexe and MINIFI_HOME logic

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up 
minifiexe and MINIFI_HOME logic
URL: https://github.com/apache/nifi-minifi-cpp/pull/709#discussion_r367274461
 
 

 ##
 File path: libminifi/src/utils/file/PathUtils.cpp
 ##
 @@ -48,6 +56,35 @@ bool PathUtils::getFileNameAndPath(const std::string &path, 
std::string &filePat
   return true;
 }
 
+std::string PathUtils::getFullPath(const std::string& path) {
+#ifdef WIN32
 +  std::vector<char> buffer(MAX_PATH);
 
 Review comment:
   MAX_PATH is only 260; `char buffer[MAX_PATH+1];` could be used instead.





[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up minifiexe and MINIFI_HOME logic

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up 
minifiexe and MINIFI_HOME logic
URL: https://github.com/apache/nifi-minifi-cpp/pull/709#discussion_r367273019
 
 

 ##
 File path: libminifi/src/utils/Environment.cpp
 ##
 @@ -0,0 +1,192 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/Environment.h"
+
+#ifdef WIN32
+#include <Windows.h>
+#else
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#endif
+#include <functional>
+#include <mutex>
+#include <vector>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+bool Environment::runningAsService_(false);
+
+void Environment::accessEnvironment(const std::function<void(void)>& func) {
+  static std::recursive_mutex environmentMutex;
+  std::lock_guard<std::recursive_mutex> lock(environmentMutex);
+  func();
+}
+
+std::pair<bool, std::string> Environment::getEnvironmentVariable(const char* name) {
+  bool exists = false;
+  std::string value;
+
+  Environment::accessEnvironment([&exists, &value, name](){
+#ifdef WIN32
+    std::vector<char> buffer(32767U);  // https://docs.microsoft.com/en-gb/windows/win32/api/processenv/nf-processenv-getenvironmentvariablea
+    // GetEnvironmentVariableA does not set last error to 0 on success, so an error from a previous API call would influence the GetLastError() later,
+    // so we set the last error to 0 before calling
+    SetLastError(ERROR_SUCCESS);
+    uint32_t ret = GetEnvironmentVariableA(name, buffer.data(), buffer.size());
+    if (ret > 0U) {
+      exists = true;
+      value = std::string(buffer.data(), ret);
+    } else if (GetLastError() == ERROR_SUCCESS) {
+      // Exists, but empty
+      exists = true;
+    }
+#else
+    char* ret = getenv(name);
+    if (ret != nullptr) {
+      exists = true;
+      value = ret;
+    }
+#endif
+  });
+
+  return std::make_pair(exists, std::move(value));
+}
+
+bool Environment::setEnvironmentVariable(const char* name, const char* value, bool overwrite /*= true*/) {
+  bool success = false;
+
+  Environment::accessEnvironment([&success, name, value, overwrite](){
+#ifdef WIN32
+    if (!overwrite && Environment::getEnvironmentVariable(name).first) {
+      success = true;
+    } else {
+      success = SetEnvironmentVariableA(name, value);
+    }
+#else
+    int ret = setenv(name, value, static_cast<int>(overwrite));
+    success = ret == 0;
+#endif
+  });
+
+  return success;
+}
+
+bool Environment::unsetEnvironmentVariable(const char* name) {
+  bool success = false;
+
+  Environment::accessEnvironment([&success, name](){
+#ifdef WIN32
+    success = SetEnvironmentVariableA(name, nullptr);
+#else
+    int ret = unsetenv(name);
+    success = ret == 0;
+#endif
+  });
+
+  return success;
+}
+
+std::string Environment::getCurrentWorkingDirectory() {
+  std::string cwd;
+
+  Environment::accessEnvironment([&cwd](){
+#ifdef WIN32
+    uint32_t len = 0U;
 
 Review comment:
   This works, but simpler would be:
   char dir[MAX_PATH+1];
   GetCurrentDirectory(sizeof(dir), dir);




[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up minifiexe and MINIFI_HOME logic

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up 
minifiexe and MINIFI_HOME logic
URL: https://github.com/apache/nifi-minifi-cpp/pull/709#discussion_r367268406
 
 

 ##
 File path: libminifi/src/utils/Environment.cpp
 ##
 @@ -0,0 +1,192 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/Environment.h"
+
+#ifdef WIN32
+#include <Windows.h>
+#else
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#endif
+#include <functional>
+#include <mutex>
+#include <vector>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+bool Environment::runningAsService_(false);
+
+void Environment::accessEnvironment(const std::function<void(void)>& func) {
+  static std::recursive_mutex environmentMutex;
+  std::lock_guard<std::recursive_mutex> lock(environmentMutex);
+  func();
+}
+
+std::pair<bool, std::string> Environment::getEnvironmentVariable(const char* name) {
+  bool exists = false;
+  std::string value;
+
+  Environment::accessEnvironment([&exists, &value, name](){
+#ifdef WIN32
+    std::vector<char> buffer(32767U);  // https://docs.microsoft.com/en-gb/windows/win32/api/processenv/nf-processenv-getenvironmentvariablea
+    // GetEnvironmentVariableA does not set last error to 0 on success, so an error from a previous API call would influence the GetLastError() later,
+    // so we set the last error to 0 before calling
+    SetLastError(ERROR_SUCCESS);
+    uint32_t ret = GetEnvironmentVariableA(name, buffer.data(), buffer.size());
+    if (ret > 0U) {
+      exists = true;
+      value = std::string(buffer.data(), ret);
+    } else if (GetLastError() == ERROR_SUCCESS) {
+      // Exists, but empty
+      exists = true;
+    }
+#else
+    char* ret = getenv(name);
+    if (ret != nullptr) {
+      exists = true;
+      value = ret;
+    }
+#endif
+  });
+
+  return std::make_pair(exists, std::move(value));
+}
+
+bool Environment::setEnvironmentVariable(const char* name, const char* value, bool overwrite /*= true*/) {
+  bool success = false;
+
+  Environment::accessEnvironment([&success, name, value, overwrite](){
+#ifdef WIN32
+    if (!overwrite && Environment::getEnvironmentVariable(name).first) {
+      success = true;
+    } else {
+      success = SetEnvironmentVariableA(name, value);
+    }
+#else
+    int ret = setenv(name, value, static_cast<int>(overwrite));
+    success = ret == 0;
+#endif
+  });
+
+  return success;
+}
+
+bool Environment::unsetEnvironmentVariable(const char* name) {
+  bool success = false;
+
+  Environment::accessEnvironment([&success, name](){
+#ifdef WIN32
+    success = SetEnvironmentVariableA(name, nullptr);
+#else
+    int ret = unsetenv(name);
+    success = ret == 0;
+#endif
+  });
+
+  return success;
+}
+
+std::string Environment::getCurrentWorkingDirectory() {
+  std::string cwd;
+
+  Environment::accessEnvironment([&cwd](){
+#ifdef WIN32
+    uint32_t len = 0U;
+    std::vector<char> buffer;
+    // https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-getcurrentdirectory
+    // "If the buffer that is pointed to by lpBuffer is not large enough,
+    // the return value specifies the required size of the buffer,
+    // in characters, including the null-terminating character."
+    while (true) {
+      len = GetCurrentDirectoryA(buffer.size(), buffer.data());
+      if (len > buffer.size()) {
+        buffer.resize(len);
+        continue;
+      } else {
+        break;
+      }
+    }
+    if (len > 0U) {
+      cwd = std::string(buffer.data(), len);
+    }
+#else
+    std::vector<char> buffer(1024U);
+    char* path = nullptr;
+    while (true) {
+      path = getcwd(buffer.data(), buffer.size());
+      if (path == nullptr) {
+        if (errno == ERANGE) {
+          buffer.resize(buffer.size() * 2);
+          continue;
+        } else {
+          break;
+        }
+      } else {
+        break;
+      }
+    }
+    if (path != nullptr) {
+      cwd = path;
+    }
+#endif
+  });
+
+  return cwd;
+}
+
+bool Environment::setCurrentWorkingDirectory(const char* directory) {
+  bool success = false;
+
+  Environment::accessEnvironment([&success, directory](){
+#ifdef WIN32
+    success = SetCurrentDirectoryA(directory);
 
 Review comment:
   nit - SetCurrentDirectory could be used.

--

[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up minifiexe and MINIFI_HOME logic

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up 
minifiexe and MINIFI_HOME logic
URL: https://github.com/apache/nifi-minifi-cpp/pull/709#discussion_r367268047
 
 

 ##
 File path: libminifi/src/utils/Environment.cpp
 ##
 @@ -0,0 +1,192 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/Environment.h"
+
+#ifdef WIN32
+#include <Windows.h>
+#else
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#endif
+#include <functional>
+#include <mutex>
+#include <vector>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+bool Environment::runningAsService_(false);
+
+void Environment::accessEnvironment(const std::function<void(void)>& func) {
+  static std::recursive_mutex environmentMutex;
+  std::lock_guard<std::recursive_mutex> lock(environmentMutex);
+  func();
+}
+
+std::pair<bool, std::string> Environment::getEnvironmentVariable(const char* name) {
+  bool exists = false;
+  std::string value;
+
+  Environment::accessEnvironment([&exists, &value, name](){
+#ifdef WIN32
+    std::vector<char> buffer(32767U);  // https://docs.microsoft.com/en-gb/windows/win32/api/processenv/nf-processenv-getenvironmentvariablea
+    // GetEnvironmentVariableA does not set last error to 0 on success, so an error from a previous API call would influence the GetLastError() later,
+    // so we set the last error to 0 before calling
+    SetLastError(ERROR_SUCCESS);
+    uint32_t ret = GetEnvironmentVariableA(name, buffer.data(), buffer.size());
+    if (ret > 0U) {
+      exists = true;
+      value = std::string(buffer.data(), ret);
+    } else if (GetLastError() == ERROR_SUCCESS) {
+      // Exists, but empty
+      exists = true;
+    }
+#else
+    char* ret = getenv(name);
+    if (ret != nullptr) {
+      exists = true;
+      value = ret;
+    }
+#endif
+  });
+
+  return std::make_pair(exists, std::move(value));
+}
+
+bool Environment::setEnvironmentVariable(const char* name, const char* value, bool overwrite /*= true*/) {
+  bool success = false;
+
+  Environment::accessEnvironment([&success, name, value, overwrite](){
+#ifdef WIN32
+    if (!overwrite && Environment::getEnvironmentVariable(name).first) {
+      success = true;
+    } else {
+      success = SetEnvironmentVariableA(name, value);
+    }
+#else
+    int ret = setenv(name, value, static_cast<int>(overwrite));
+    success = ret == 0;
+#endif
+  });
+
+  return success;
+}
+
+bool Environment::unsetEnvironmentVariable(const char* name) {
+  bool success = false;
+
+  Environment::accessEnvironment([&success, name](){
+#ifdef WIN32
+    success = SetEnvironmentVariableA(name, nullptr);
 
 Review comment:
   nit - SetEnvironmentVariable could be used.




[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up minifiexe and MINIFI_HOME logic

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up 
minifiexe and MINIFI_HOME logic
URL: https://github.com/apache/nifi-minifi-cpp/pull/709#discussion_r367268177
 
 

 ##
 File path: libminifi/src/utils/Environment.cpp
 ##
 @@ -0,0 +1,192 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/Environment.h"
+
+#ifdef WIN32
+#include <Windows.h>
+#else
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#endif
+#include <functional>
+#include <mutex>
+#include <vector>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+bool Environment::runningAsService_(false);
+
+void Environment::accessEnvironment(const std::function<void(void)>& func) {
+  static std::recursive_mutex environmentMutex;
+  std::lock_guard<std::recursive_mutex> lock(environmentMutex);
+  func();
+}
+
+std::pair<bool, std::string> Environment::getEnvironmentVariable(const char* name) {
+  bool exists = false;
+  std::string value;
+
+  Environment::accessEnvironment([&exists, &value, name](){
+#ifdef WIN32
+    std::vector<char> buffer(32767U);  // https://docs.microsoft.com/en-gb/windows/win32/api/processenv/nf-processenv-getenvironmentvariablea
+    // GetEnvironmentVariableA does not set last error to 0 on success, so an error from a previous API call would influence the GetLastError() later,
+    // so we set the last error to 0 before calling
+    SetLastError(ERROR_SUCCESS);
+    uint32_t ret = GetEnvironmentVariableA(name, buffer.data(), buffer.size());
+    if (ret > 0U) {
+      exists = true;
+      value = std::string(buffer.data(), ret);
+    } else if (GetLastError() == ERROR_SUCCESS) {
+      // Exists, but empty
+      exists = true;
+    }
+#else
+    char* ret = getenv(name);
+    if (ret != nullptr) {
+      exists = true;
+      value = ret;
+    }
+#endif
+  });
+
+  return std::make_pair(exists, std::move(value));
+}
+
+bool Environment::setEnvironmentVariable(const char* name, const char* value, bool overwrite /*= true*/) {
+  bool success = false;
+
+  Environment::accessEnvironment([&success, name, value, overwrite](){
+#ifdef WIN32
+    if (!overwrite && Environment::getEnvironmentVariable(name).first) {
+      success = true;
+    } else {
+      success = SetEnvironmentVariableA(name, value);
+    }
+#else
+    int ret = setenv(name, value, static_cast<int>(overwrite));
+    success = ret == 0;
+#endif
+  });
+
+  return success;
+}
+
+bool Environment::unsetEnvironmentVariable(const char* name) {
+  bool success = false;
+
+  Environment::accessEnvironment([&success, name](){
+#ifdef WIN32
+    success = SetEnvironmentVariableA(name, nullptr);
+#else
+    int ret = unsetenv(name);
+    success = ret == 0;
+#endif
+  });
+
+  return success;
+}
+
+std::string Environment::getCurrentWorkingDirectory() {
+  std::string cwd;
+
+  Environment::accessEnvironment([&cwd](){
+#ifdef WIN32
+    uint32_t len = 0U;
+    std::vector<char> buffer;
+    // https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-getcurrentdirectory
+    // "If the buffer that is pointed to by lpBuffer is not large enough,
+    // the return value specifies the required size of the buffer,
+    // in characters, including the null-terminating character."
+    while (true) {
+      len = GetCurrentDirectoryA(buffer.size(), buffer.data());
 
 Review comment:
   nit - GetCurrentDirectory could be used.




[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up minifiexe and MINIFI_HOME logic

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up 
minifiexe and MINIFI_HOME logic
URL: https://github.com/apache/nifi-minifi-cpp/pull/709#discussion_r367267981
 
 

 ##
 File path: libminifi/src/utils/Environment.cpp
 ##
 @@ -0,0 +1,192 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/Environment.h"
+
+#ifdef WIN32
+#include <Windows.h>
+#else
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#endif
+#include <functional>
+#include <mutex>
+#include <vector>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+bool Environment::runningAsService_(false);
+
+void Environment::accessEnvironment(const std::function<void(void)>& func) {
+  static std::recursive_mutex environmentMutex;
+  std::lock_guard<std::recursive_mutex> lock(environmentMutex);
+  func();
+}
+
+std::pair<bool, std::string> Environment::getEnvironmentVariable(const char* name) {
+  bool exists = false;
+  std::string value;
+
+  Environment::accessEnvironment([&exists, &value, name](){
+#ifdef WIN32
+    std::vector<char> buffer(32767U);  // https://docs.microsoft.com/en-gb/windows/win32/api/processenv/nf-processenv-getenvironmentvariablea
+    // GetEnvironmentVariableA does not set last error to 0 on success, so an error from a previous API call would influence the GetLastError() later,
+    // so we set the last error to 0 before calling
+    SetLastError(ERROR_SUCCESS);
+    uint32_t ret = GetEnvironmentVariableA(name, buffer.data(), buffer.size());
+    if (ret > 0U) {
+      exists = true;
+      value = std::string(buffer.data(), ret);
+    } else if (GetLastError() == ERROR_SUCCESS) {
+      // Exists, but empty
+      exists = true;
+    }
+#else
+    char* ret = getenv(name);
+    if (ret != nullptr) {
+      exists = true;
+      value = ret;
+    }
+#endif
+  });
+
+  return std::make_pair(exists, std::move(value));
+}
+
+bool Environment::setEnvironmentVariable(const char* name, const char* value, bool overwrite /*= true*/) {
+  bool success = false;
+
+  Environment::accessEnvironment([&success, name, value, overwrite](){
+#ifdef WIN32
+    if (!overwrite && Environment::getEnvironmentVariable(name).first) {
+      success = true;
+    } else {
+      success = SetEnvironmentVariableA(name, value);
 
 Review comment:
   nit - SetEnvironmentVariable could be used.




[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up minifiexe and MINIFI_HOME logic

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #709: MINIFICPP-1088 - clean up 
minifiexe and MINIFI_HOME logic
URL: https://github.com/apache/nifi-minifi-cpp/pull/709#discussion_r367267576
 
 

 ##
 File path: libminifi/src/utils/Environment.cpp
 ##
 @@ -0,0 +1,192 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/Environment.h"
+
+#ifdef WIN32
+#include <Windows.h>
+#else
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#endif
+#include <functional>
+#include <mutex>
+#include <vector>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+bool Environment::runningAsService_(false);
+
+void Environment::accessEnvironment(const std::function<void(void)>& func) {
+  static std::recursive_mutex environmentMutex;
+  std::lock_guard<std::recursive_mutex> lock(environmentMutex);
+  func();
+}
+
+std::pair<bool, std::string> Environment::getEnvironmentVariable(const char* name) {
+  bool exists = false;
+  std::string value;
+
+  Environment::accessEnvironment([&exists, &value, name](){
+#ifdef WIN32
+    std::vector<char> buffer(32767U);  // https://docs.microsoft.com/en-gb/windows/win32/api/processenv/nf-processenv-getenvironmentvariablea
+    // GetEnvironmentVariableA does not set last error to 0 on success, so an error from a previous API call would influence the GetLastError() later,
+    // so we set the last error to 0 before calling
+    SetLastError(ERROR_SUCCESS);
+    uint32_t ret = GetEnvironmentVariableA(name, buffer.data(), buffer.size());
 
 Review comment:
   nit - GetEnvironmentVariable could be used.




[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r367262504
 
 

 ##
 File path: extensions/sql/data/JSONSQLWriter.cpp
 ##
 @@ -0,0 +1,156 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "JSONSQLWriter.h"
+#include "rapidjson/writer.h"
+#include "rapidjson/stringbuffer.h"
+#include "rapidjson/prettywriter.h"
+#include "Exception.h"
+#include "Utils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace sql {
+
+JSONSQLWriter::JSONSQLWriter(const soci::rowset<soci::row> &rowset, std::ostream *out, MaxCollector* pMaxCollector)
+  : SQLWriter(rowset), json_payload_(rapidjson::kArrayType), output_stream_(out), pMaxCollector_(pMaxCollector) {
+}
+
+JSONSQLWriter::~JSONSQLWriter() {
+}
+
+bool JSONSQLWriter::addRow(const soci::row &row, size_t rowCount) {
+  rapidjson::Document::AllocatorType &alloc = json_payload_.GetAllocator();
+  rapidjson::Value rowobj(rapidjson::kObjectType);
+
+  // 'countColumnsInMaxCollector' is used to check that all columns in maxCollector are present in the row's columns.
+  // It is checked here since we don't know whether 'soci' can provide column info for a select statement without executing the query.
+  int countColumnsInMaxCollector = 0;
+
+  for (std::size_t i = 0; i != row.size(); ++i) {
+    const soci::column_properties &props = row.get_properties(i);
+
+    const auto& columnName = utils::toLower(props.get_name());
+
+    if (pMaxCollector_ && rowCount == 0 && pMaxCollector_->hasColumn(columnName)) {
+      countColumnsInMaxCollector++;
+    }
+
+
+    rapidjson::Value name;
+    name.SetString(props.get_name().c_str(), props.get_name().length(), alloc);
+
+    rapidjson::Value valueVal;
+
+    if (row.get_indicator(i) == soci::i_null) {
+      const std::string null = "NULL";
+      valueVal.SetString(null.c_str(), null.length(), alloc);
+    } else {
+      switch (const auto dataType = props.get_data_type()) {
+        case soci::data_type::dt_string: {
+          const auto value = std::string(row.get<std::string>(i));
+          if (pMaxCollector_) {
+            pMaxCollector_->updateMaxValue(columnName, '\'' + value + '\'');
+          }
+          valueVal.SetString(value.c_str(), value.length(), alloc);
+        }
+        break;
+        case soci::data_type::dt_double: {
+          const auto value = row.get<double>(i);
+          if (pMaxCollector_) {
+            pMaxCollector_->updateMaxValue(columnName, value);
+          }
+          valueVal.SetDouble(value);
+        }
+        break;
+        case soci::data_type::dt_integer: {
+          const auto value = row.get<int>(i);
+          if (pMaxCollector_) {
+            pMaxCollector_->updateMaxValue(columnName, value);
+          }
+          valueVal.SetInt(value);
+        }
+        break;
+        case soci::data_type::dt_long_long: {
+          const auto value = row.get<long long>(i);
+          if (pMaxCollector_) {
+            pMaxCollector_->updateMaxValue(columnName, value);
+          }
+          valueVal.SetInt64(value);
+        }
+        break;
+        case soci::data_type::dt_unsigned_long_long: {
+          const auto value = row.get<unsigned long long>(i);
+          if (pMaxCollector_) {
+            pMaxCollector_->updateMaxValue(columnName, value);
+          }
+          valueVal.SetUint64(value);
+        }
+        break;
+        case soci::data_type::dt_date: {
+          // It looks like a design bug in soci - for `dt_date` it returns std::tm, which doesn't carry milliseconds, but a DB 'datetime' has milliseconds.
+          // Don't know if it is possible to get a string representation for the 'dt_date' type.
+          // The problem for maxCollector: if the milliseconds value is not stored as a maximum, then a query with 'select ... datetimeColumn > maxValue'
+          // will always return at least one record, since the DB value has milliseconds beyond "maxValue".
+          // As a workaround, in the string representation for 'dt_date', add '999' for maxCollector (won't work for cases where time precision is important).
+          const std::tm when = row.get<std::tm>(i);
+
+          char strWhen[128];
+  if (!std:

[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r367262504
 
 

 ##
 File path: extensions/sql/data/JSONSQLWriter.cpp
 ##
 @@ -0,0 +1,156 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "JSONSQLWriter.h"
+#include "rapidjson/writer.h"
+#include "rapidjson/stringbuffer.h"
+#include "rapidjson/prettywriter.h"
+#include "Exception.h"
+#include "Utils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace sql {
+
+JSONSQLWriter::JSONSQLWriter(const soci::rowset<soci::row>& rowset, std::ostream* out, MaxCollector* pMaxCollector)
+  : SQLWriter(rowset), json_payload_(rapidjson::kArrayType), output_stream_(out), pMaxCollector_(pMaxCollector) {
+}
+
+JSONSQLWriter::~JSONSQLWriter() {
+}
+
+bool JSONSQLWriter::addRow(const soci::row &row, size_t rowCount) {
+  rapidjson::Document::AllocatorType &alloc = json_payload_.GetAllocator();
+  rapidjson::Value rowobj(rapidjson::kObjectType);
+
+  // 'countColumnsInMaxCollector' is used to check that all columns in the max collector are present in the row columns.
+  // The check lives here because it is unclear whether soci can provide column info for a select statement without executing the query.
+  int countColumnsInMaxCollector = 0;
+
+  for (std::size_t i = 0; i != row.size(); ++i) {
+const soci::column_properties & props = row.get_properties(i);
+
+const auto& columnName = utils::toLower(props.get_name());
+
+if (pMaxCollector_ && rowCount == 0 && 
pMaxCollector_->hasColumn(columnName)) {
+  countColumnsInMaxCollector++;
+}
+
+rapidjson::Value name;
+name.SetString(props.get_name().c_str(), props.get_name().length(), alloc);
+
+rapidjson::Value valueVal;
+
+if (row.get_indicator(i) == soci::i_null) {
+  const std::string null = "NULL";
+  valueVal.SetString(null.c_str(), null.length(), alloc);
+} else {
+  switch (const auto dataType = props.get_data_type()) {
+case soci::data_type::dt_string: {
+  const auto value = std::string(row.get<std::string>(i));
+  if (pMaxCollector_) {
+pMaxCollector_->updateMaxValue(columnName, '\'' + value + '\'');
+  }
+  valueVal.SetString(value.c_str(), value.length(), alloc);
+}
+break;
+case soci::data_type::dt_double: {
+  const auto value = row.get<double>(i);
+  if (pMaxCollector_) {
+pMaxCollector_->updateMaxValue(columnName, value);
+  }
+  valueVal.SetDouble(value);
+}
+break;
+case soci::data_type::dt_integer: {
+  const auto value = row.get<int>(i);
+  if (pMaxCollector_) {
+pMaxCollector_->updateMaxValue(columnName, value);
+  }
+  valueVal.SetInt(value);
+}
+break;
+case soci::data_type::dt_long_long: {
+  const auto value = row.get<long long>(i);
+  if (pMaxCollector_) {
+pMaxCollector_->updateMaxValue(columnName, value);
+  }
+  valueVal.SetInt64(value);
+}
+break;
+case soci::data_type::dt_unsigned_long_long: {
+  const auto value = row.get<unsigned long long>(i);
+  if (pMaxCollector_) {
+pMaxCollector_->updateMaxValue(columnName, value);
+  }
+  valueVal.SetUint64(value);
+}
+break;
+case soci::data_type::dt_date: {
+  // This looks like a design limitation in soci: for `dt_date` it returns std::tm,
+  // which has no milliseconds, while the DB 'datetime' type does.
+  // It is unclear whether a string representation can be obtained for the 'dt_date' type.
+  // The problem for the max collector: if the milliseconds are not stored in the maximum,
+  // then a query with 'select ... datetimeColumn > maxValue' will always return at least
+  // one record, because the DB value still carries "maxValue.milliseconds".
+  // As a workaround, append '999' milliseconds to the string representation of 'dt_date'
+  // for the max collector (this won't work for cases where time precision is important).
+  const std::tm when = row.get<std::tm>(i);
+
+  char strWhen[128];
+  if (!std:
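The workaround described in the comment above can be sketched with a small, dependency-free helper (illustrative only, not MiNiFi's actual code; the name `toMaxValueDateTime` is made up): format the `std::tm` and append the maximal millisecond component, so a later `datetimeColumn > maxValue` comparison cannot re-match a row that differs only in milliseconds.

```cpp
#include <cassert>
#include <ctime>
#include <string>

// Hypothetical helper mirroring the dt_date workaround: soci's std::tm has
// no milliseconds, so append ".999" to the value kept by the max collector.
std::string toMaxValueDateTime(const std::tm& when) {
  char buf[64];
  if (std::strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", &when) == 0) {
    return {};  // formatting failed; the caller keeps the previous maximum
  }
  return std::string(buf) + ".999";
}
```

As the original comment notes, this trades sub-second precision for not re-emitting already-fetched rows.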

[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r367261419
 
 

 ##
 File path: extensions/sql/services/DatabaseService.cpp
 ##
 @@ -0,0 +1,69 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "core/logging/LoggerConfiguration.h"
+#include "core/controller/ControllerService.h"
+#include 
+#include 
+#include 
+#include "core/Property.h"
+#include "DatabaseService.h"
+#include "DatabaseService.h"
+#include "io/validation.h"
+#include "properties/Configure.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace sql {
+namespace controllers {
+
+static core::Property RemoteServer;
+static core::Property Port;
+static core::Property MaxQueueSize;
+
+core::Property 
DatabaseService::ConnectionString(core::PropertyBuilder::createProperty("Connection
 String")->withDescription("Database Connection 
String")->isRequired(true)->build());
+
+void DatabaseService::initialize() {
+  if (initialized_)
 
 Review comment:
   Fixed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r367261245
 
 

 ##
 File path: extensions/sql/data/SQLRowSubscriber.h
 ##
 @@ -0,0 +1,46 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace sql {
+
+struct SQLRowSubscriber {
 
 Review comment:
   Fixed.




[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r367261303
 
 

 ##
 File path: extensions/sql/data/JSONSQLWriter.cpp
 ##
 @@ -0,0 +1,101 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "JSONSQLWriter.h"
+#include "rapidjson/writer.h"
+#include "rapidjson/stringbuffer.h"
+#include "rapidjson/prettywriter.h"
+#include "Exception.h"
+#include "Utils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace sql {
+
+JSONSQLWriter::JSONSQLWriter()
+  : jsonPayload_(rapidjson::kArrayType) {
+}
+
+JSONSQLWriter::~JSONSQLWriter() {}
+
+void JSONSQLWriter::beginProcessRow() {
+  jsonRow_ = rapidjson::kObjectType;
+}
+
+void JSONSQLWriter::endProcessRow() {
+  jsonPayload_.PushBack(jsonRow_, jsonPayload_.GetAllocator());
+}
+
+void JSONSQLWriter::processColumnName(const std::string& name) {}
+
+void JSONSQLWriter::processColumn(const std::string& name, const std::string& 
value) {
+  addToJSONRow(name, toJSONString(value));
+}
+
+void JSONSQLWriter::processColumn(const std::string& name, double value) {
+  addToJSONRow(name, rapidjson::Value().SetDouble(value));
+}
+
+void JSONSQLWriter::processColumn(const std::string& name, int value) {
+  addToJSONRow(name, rapidjson::Value().SetInt(value));
+}
+
+void JSONSQLWriter::processColumn(const std::string& name, long long value) {
+  addToJSONRow(name, rapidjson::Value().SetInt64(value));
+}
+
+void JSONSQLWriter::processColumn(const std::string& name, unsigned long long 
value) {
+  addToJSONRow(name, rapidjson::Value().SetUint64(value));
+}
+
+void JSONSQLWriter::processColumn(const std::string& name, const char* value) {
+  addToJSONRow(name, toJSONString(value));
+}
+
+void JSONSQLWriter::addToJSONRow(const std::string& columnName, 
rapidjson::Value& jsonValue) {
 
 Review comment:
   Fixed.
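The `processColumn` overload set quoted above is effectively a typed visitor: the rowset processor hands each column to the writer with its native C++ type, and the writer serializes it. A minimal, dependency-free sketch of that dispatch pattern (`RowPrinter` and its JSON-ish output are illustrative stand-ins, not MiNiFi's API):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Minimal stand-in for the SQLRowSubscriber-style visitor: one overload per
// column type, each appending a typed value to the current row.
class RowPrinter {
 public:
  void processColumn(const std::string& name, const std::string& value) {
    add(name, "\"" + value + "\"");
  }
  void processColumn(const std::string& name, double value) {
    std::ostringstream os;
    os << value;
    add(name, os.str());
  }
  void processColumn(const std::string& name, long long value) {
    add(name, std::to_string(value));
  }
  std::string row() const { return "{" + row_ + "}"; }

 private:
  void add(const std::string& name, const std::string& v) {
    if (!row_.empty()) row_ += ",";
    row_ += "\"" + name + "\":" + v;
  }
  std::string row_;
};
```

Overload resolution picks the serialization per column, so the rowset processor never needs a switch on the writer side.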




[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r367261363
 
 

 ##
 File path: extensions/sql/processors/QueryDatabaseTable.cpp
 ##
 @@ -0,0 +1,465 @@
+/**
+ * @file QueryDatabaseTable.cpp
+ * PutSQL class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "QueryDatabaseTable.h"
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "io/DataStream.h"
+#include "core/ProcessContext.h"
+#include "core/ProcessSession.h"
+#include "Exception.h"
+#include "utils/OsUtils.h"
+#include "data/DatabaseConnectors.h"
+#include "data/JSONSQLWriter.h"
+#include "data/SQLRowsetProcessor.h"
+#include "data/WriteCallback.h"
+#include "data/MaxCollector.h"
+#include "data/Utils.h"
+#include "utils/file/FileUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+const std::string QueryDatabaseTable::ProcessorName("QueryDatabaseTable");
+
+const core::Property QueryDatabaseTable::s_tableName(
+  core::PropertyBuilder::createProperty("Table 
Name")->isRequired(true)->withDescription("The name of the database table to be 
queried.")->supportsExpressionLanguage(true)->build());
+
+const core::Property QueryDatabaseTable::s_columnNames(
+  core::PropertyBuilder::createProperty("Columns to 
Return")->isRequired(false)->withDescription(
+"A comma-separated list of column names to be used in the query. If your 
database requires special treatment of the names (quoting, e.g.), each name 
should include such treatment. "
+"If no column names are supplied, all columns in the specified table will 
be returned. "
+"NOTE: It is important to use consistent column names for a given table 
for incremental fetch to work 
properly.")->supportsExpressionLanguage(true)->build());
+
+const core::Property QueryDatabaseTable::s_maxValueColumnNames(
+  core::PropertyBuilder::createProperty("Maximum-value 
Columns")->isRequired(false)->withDescription(
+"A comma-separated list of column names. The processor will keep track of 
the maximum value for each column that has been returned since the processor 
started running. "
+"Using multiple columns implies an order to the column list, and each 
column's values are expected to increase more slowly than the previous columns' 
values. "
+"Thus, using multiple columns implies a hierarchical structure of columns, 
which is usually used for partitioning tables. "
+"This processor can be used to retrieve only those rows that have been 
added/updated since the last retrieval. "
+"Note that some ODBC types such as bit/boolean are not conducive to 
maintaining maximum value, so columns of these types should not be listed in 
this property, and will result in error(s) during processing. "
+"If no columns are provided, all rows from the table will be considered, 
which could have a performance impact. "
+"NOTE: It is important to use consistent max-value column names for a 
given table for incremental fetch to work 
properly.")->supportsExpressionLanguage(true)->build());
+
+const core::Property QueryDatabaseTable::s_whereClause(
+  
core::PropertyBuilder::createProperty("db-fetch-where-clause")->isRequired(false)->withDescription(
+"A custom clause to be added in the WHERE condition when building SQL 
queries.")->supportsExpressionLanguage(true)->build());
+
+const core::Property QueryDatabaseTable::s_sqlQuery(
+  
core::PropertyBuilder::createProperty("db-fetch-sql-query")->isRequired(false)->withDescription(
+"A custom SQL query used to retrieve data. Instead of building a SQL query 
from other properties, this query will be wrapped as a sub-query. "
+"Query must have no ORDER BY 
statement.")->supportsExpressionLanguage(true)->build());
+
+const core::Property QueryDatabaseTable::s_maxRowsPerFlowFile(
+  
core::PropertyBuilder::createProperty("qdbt-max-rows")->isRequired(true)->withDefaultValue(0)->withDescription(
+"The maximum number of result rows that will be included in a single 
FlowFile. This wil
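The incremental-fetch behavior these properties describe can be illustrated with a short sketch (assumed logic, not the processor's actual implementation): each tracked max-value column contributes a `column > lastMax` predicate, and the optional custom WHERE clause is ANDed in.

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative WHERE-clause builder for QueryDatabaseTable-style incremental
// fetch: only rows whose max-value columns exceed the last observed maxima
// are selected; an optional user-supplied clause is combined with AND.
std::string buildWhereClause(const std::map<std::string, std::string>& maxValues,
                             const std::string& customWhere) {
  std::string clause;
  for (const auto& kv : maxValues) {
    if (!clause.empty()) clause += " and ";
    clause += kv.first + " > " + kv.second;
  }
  if (!customWhere.empty()) {
    if (!clause.empty()) clause += " and ";
    clause += "(" + customWhere + ")";
  }
  return clause.empty() ? std::string() : " where " + clause;
}
```

With no max-value columns and no custom clause, the query falls back to a full table scan, which is the performance caveat the property description warns about.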

[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r367261189
 
 

 ##
 File path: extensions/sql/data/JSONSQLWriter.cpp
 ##
 @@ -0,0 +1,101 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "JSONSQLWriter.h"
+#include "rapidjson/writer.h"
+#include "rapidjson/stringbuffer.h"
+#include "rapidjson/prettywriter.h"
+#include "Exception.h"
+#include "Utils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace sql {
+
+JSONSQLWriter::JSONSQLWriter()
+  : jsonPayload_(rapidjson::kArrayType) {
+}
+
+JSONSQLWriter::~JSONSQLWriter() {}
+
+void JSONSQLWriter::beginProcessRow() {
+  jsonRow_ = rapidjson::kObjectType;
+}
+
+void JSONSQLWriter::endProcessRow() {
+  jsonPayload_.PushBack(jsonRow_, jsonPayload_.GetAllocator());
+}
+
+void JSONSQLWriter::processColumnName(const std::string& name) {}
+
+void JSONSQLWriter::processColumn(const std::string& name, const std::string& 
value) {
+  addToJSONRow(name, toJSONString(value));
+}
+
+void JSONSQLWriter::processColumn(const std::string& name, double value) {
+  addToJSONRow(name, rapidjson::Value().SetDouble(value));
 
 Review comment:
   Fixed.




[GitHub] [nifi-minifi-cpp] am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
am-c-p-p commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r367261147
 
 

 ##
 File path: win_build_vs.bat
 ##
 @@ -65,7 +65,7 @@ cd %builddir%\
 
 
 
-cmake -G %generator% -DCMAKE_BUILD_TYPE_INIT=%cmake_build_type% 
-DCMAKE_BUILD_TYPE=%cmake_build_type% -DWIN32=WIN32 
-DENABLE_LIBRDKAFKA=%build_kafka% -DENABLE_JNI=%build_jni% -DOPENSSL_OFF=OFF 
-DENABLE_COAP=%build_coap% -DUSE_SHARED_LIBS=OFF -DDISABLE_CONTROLLER=ON  
-DBUILD_ROCKSDB=ON -DFORCE_WINDOWS=ON -DUSE_SYSTEM_UUID=OFF 
-DDISABLE_LIBARCHIVE=ON -DDISABLE_SCRIPTING=ON -DEXCLUDE_BOOST=ON 
-DENABLE_WEL=TRUE -DFAIL_ON_WARNINGS=OFF -DSKIP_TESTS=%skiptests% .. && msbuild 
/m nifi-minifi-cpp.sln /property:Configuration=%build_type% 
/property:Platform=%build_platform% && copy main\%build_type%\minifi.exe main\
+cmake -G %generator% -DENABLE_SQL=ON 
-DCMAKE_BUILD_TYPE_INIT=%cmake_build_type% 
-DCMAKE_BUILD_TYPE=%cmake_build_type% -DWIN32=WIN32 
-DENABLE_LIBRDKAFKA=%build_kafka% -DENABLE_JNI=%build_jni% -DOPENSSL_OFF=OFF 
-DENABLE_COAP=%build_coap% -DUSE_SHARED_LIBS=OFF -DDISABLE_CONTROLLER=ON  
-DBUILD_ROCKSDB=ON -DFORCE_WINDOWS=ON -DUSE_SYSTEM_UUID=OFF 
-DDISABLE_LIBARCHIVE=ON -DDISABLE_SCRIPTING=ON -DEXCLUDE_BOOST=ON 
-DENABLE_WEL=TRUE -DFAIL_ON_WARNINGS=OFF -DSKIP_TESTS=%skiptests% .. && msbuild 
/m nifi-minifi-cpp.sln /property:Configuration=%build_type% 
/property:Platform=%build_platform% && copy main\%build_type%\minifi.exe main\
 
 Review comment:
   Fixed.




[jira] [Updated] (NIFI-6974) Processor for Nakadi streaming

2020-01-15 Thread Martin (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin updated NIFI-6974:
-
Issue Type: New Feature  (was: Wish)

> Processor for Nakadi streaming
> --
>
> Key: NIFI-6974
> URL: https://issues.apache.org/jira/browse/NIFI-6974
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Martin
>Priority: Major
>  Labels: Kafka, Nakadi, Stream
>
> It would be great to have a processor for Nakadi.
> "Nakadi is a distributed event bus broker that implements a RESTful API 
> abstraction on top of Kafka-like queues, which can be used to send, receive, 
> and analyze streaming data in real time, in a reliable and highly available 
> manner."
> https://github.com/zalando/nakadi



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6976) Native integrations of Apache Nifi with Delta Lake

2020-01-15 Thread Martin (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin updated NIFI-6976:
-
Affects Version/s: (was: 1.11.0)
   1.12.0
  Description: 
"Delta Lake is an open-source storage layer that brings ACID transactions to 
Apache Spark and big data workloads" ([https://delta.io/])

 

NiFi already offers many features that make it unique. A Delta integration 
would complement this scope in a sensible way.

 

Source: [https://github.com/delta-io/delta]

 ## Update ##

Here is a demo of how this looks in StreamSets. Having the same feature set on 
NiFi would be awesome.

Video: [https://youtu.be/VLd_qOrKrTI]

 

Here you can find all available integrations right now.
[https://docs.delta.io/0.5.0/integrations.html]

 

 

  was:
"Delta Lake is an open-source storage layer that brings ACID transactions to 
Apache Spark and big data workloads" ([https://delta.io/])

 

NiFi already offers many features that make it unique. A Delta integration 
would complement this scope in a sensible way.

 

Source: [https://github.com/delta-io/delta]

 ## Update ##

Here is a demo of how this looks in StreamSets. Having the same feature set on 
NiFi would be awesome.

Video: [https://youtu.be/VLd_qOrKrTI]

 


> Native integrations of Apache Nifi with Delta Lake
> --
>
> Key: NIFI-6976
> URL: https://issues.apache.org/jira/browse/NIFI-6976
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.12.0
>Reporter: Martin
>Priority: Major
>  Labels: ACID, delta, integration, spark, storage-layer
>
> "Delta Lake is an open-source storage layer that brings ACID transactions to 
> Apache Spark and big data workloads" ([https://delta.io/])
>  
> NiFi already offers many features that make it unique. A Delta integration 
> would complement this scope in a sensible way.
>  
> Source: [https://github.com/delta-io/delta]
>  ## Update ##
> Here is a demo of how this looks in StreamSets. Having the same feature set on 
> NiFi would be awesome.
> Video: [https://youtu.be/VLd_qOrKrTI]
>  
> Here you can find all available integrations right now.
> [https://docs.delta.io/0.5.0/integrations.html]
>  
>  





[GitHub] [nifi] sjyang18 commented on issue #3954: Change the reporting behavior of Azure Reporting task to report report the time when metrics are generated

2020-01-15 Thread GitBox
sjyang18 commented on issue #3954: Change the reporting behavior of Azure 
Reporting task to report report the time when metrics are generated
URL: https://github.com/apache/nifi/pull/3954#issuecomment-574964449
 
 
  @markap14 When you get a chance, would you review this PR? Recently, there 
was a PR merged, and we want to add a fix on top of that merge. 




[jira] [Created] (NIFI-7032) Processor Details no longer appears when clicking 'View Processor Details'

2020-01-15 Thread Nissim Shiman (Jira)
Nissim Shiman created NIFI-7032:
---

 Summary: Processor Details no longer appears when clicking 'View 
Processor Details'
 Key: NIFI-7032
 URL: https://issues.apache.org/jira/browse/NIFI-7032
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Affects Versions: 1.10.0
Reporter: Nissim Shiman


To reproduce:

From the main GUI page, choose the button in the upper right-hand corner with the 3 lines
Summary -> Processors
Click one of the 'i' icons to the left of a processor name
Processor Details should pop up at this point.

This worked in 1.9.2





[jira] [Commented] (NIFI-6721) jms_expiration attribute problem

2020-01-15 Thread Tim Chermak (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17016351#comment-17016351
 ] 

Tim Chermak commented on NIFI-6721:
---

Hi, yes, this is from one of our other developers/users, but they said it is 
still an issue.

Thanks

-Tim

> jms_expiration attribute problem
> 
>
> Key: NIFI-6721
> URL: https://issues.apache.org/jira/browse/NIFI-6721
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.8.0
> Environment: Linux CENTOS 7
>Reporter: Tim Chermak
>Priority: Minor
>
> The documentation for PublishJMS indicates the JMSExpiration is set with the 
> attribute jms_expiration. However, this value is really the time-to-live 
> (ttl) in milliseconds. The JMSExpiration is calculated by the provider 
> library as "expiration = timestamp + ttl"
> So, this NiFi flowfile attribute should really be named jms_ttl. The current 
> setup works correctly when NiFi creates and publishes a message, but has 
> problems when you try to republish a JMS message.
> GetFile -> UpdateAttibute -> PublishJMS creates a valid JMSExpiration in the 
> message, however, when a JMS has the expiration set, ConsumeJMS -> PublishJMS 
> shows an error in the nifi.--app.log file: 
> "o.apache.nifi.jms.processors.PublishJMS PublishJMS[id=016b1005-xx...] 
> Incompatible value for attribute jms_expiration [1566428032803] is not a 
> number. Ignoring this attribute."
> Looks like ConsumeJMS sets the flowfile attribute to the expiration value 
> rather than the time-to-live value. Time-to-live should be jms_ttl = 
> expiration - current_time.
>  





[jira] [Resolved] (NIFI-7031) Update copyright year info to 2020 in NOTICEs

2020-01-15 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-7031.

Resolution: Fixed

made the updates and merged.

> Update copyright year info to 2020 in NOTICEs
> -
>
> Key: NIFI-7031
> URL: https://issues.apache.org/jira/browse/NIFI-7031
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Trivial
> Fix For: 1.11.0
>
>






[jira] [Updated] (NIFI-7022) Zookeeper 3.5 Starts Admin Server on Port 8080

2020-01-15 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-7022:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1 merged to master

> Zookeeper 3.5 Starts Admin Server on Port 8080
> --
>
> Key: NIFI-7022
> URL: https://issues.apache.org/jira/browse/NIFI-7022
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Assignee: Shawn Weeks
>Priority: Major
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The new version of Zookeeper automatically starts an admin server on port 
> 8080 causing NiFi Cluster not to start if you use the defaults.
> See https://zookeeper.apache.org/doc/r3.5.1-alpha/zookeeperAdmin.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
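For reference, the AdminServer can be turned off (or moved to another port) via standard ZooKeeper 3.5+ configuration; whether the merged fix uses exactly these properties is not shown in this thread:

```properties
# zoo.cfg: disable the Jetty-based AdminServer entirely...
admin.enableServer=false
# ...or keep it but move it off NiFi's default HTTP port 8080:
# admin.serverPort=8081
```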


[jira] [Commented] (NIFI-7022) Zookeeper 3.5 Starts Admin Server on Port 8080

2020-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17016331#comment-17016331
 ] 

ASF subversion and git services commented on NIFI-7022:
---

Commit aa2466480171a31cda970f500df53753765aceab in nifi's branch 
refs/heads/master from Shawn Weeks
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=aa24664 ]

NIFI-7022 - This closes #3989. Disable Zookeeper Admin Server for Embedded 
Zookeeper

Signed-off-by: Joe Witt 


> Zookeeper 3.5 Starts Admin Server on Port 8080
> --
>
> Key: NIFI-7022
> URL: https://issues.apache.org/jira/browse/NIFI-7022
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Assignee: Shawn Weeks
>Priority: Major
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The new version of Zookeeper automatically starts an admin server on port 
> 8080 causing NiFi Cluster not to start if you use the defaults.
> See https://zookeeper.apache.org/doc/r3.5.1-alpha/zookeeperAdmin.html





[jira] [Commented] (NIFI-7031) Update copyright year info to 2020 in NOTICEs

2020-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17016332#comment-17016332
 ] 

ASF subversion and git services commented on NIFI-7031:
---

Commit 23c8234586287f4ddb23bcfb3c155fe793048d63 in nifi's branch 
refs/heads/master from Joe Witt
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=23c8234 ]

NIFI-7031 updating copyright year on NOTICES


> Update copyright year info to 2020 in NOTICEs
> -
>
> Key: NIFI-7031
> URL: https://issues.apache.org/jira/browse/NIFI-7031
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Trivial
> Fix For: 1.11.0
>
>






[GitHub] [nifi] asfgit closed pull request #3989: NIFI-7022 - Disable Zookeeper Admin Server for Embedded Zookeeper

2020-01-15 Thread GitBox
asfgit closed pull request #3989: NIFI-7022 - Disable Zookeeper Admin Server 
for Embedded Zookeeper
URL: https://github.com/apache/nifi/pull/3989
 
 
   




[jira] [Created] (NIFI-7031) Update copyright year info to 2020 in NOTICEs

2020-01-15 Thread Joe Witt (Jira)
Joe Witt created NIFI-7031:
--

 Summary: Update copyright year info to 2020 in NOTICEs
 Key: NIFI-7031
 URL: https://issues.apache.org/jira/browse/NIFI-7031
 Project: Apache NiFi
  Issue Type: Task
Reporter: Joe Witt
Assignee: Joe Witt
 Fix For: 1.11.0








[jira] [Updated] (NIFI-6987) Remove "Claim Management" section from Admin Guide

2020-01-15 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-6987:
---
Fix Version/s: (was: 1.12.0)
   1.11.0

> Remove "Claim Management" section from Admin Guide
> --
>
> Key: NIFI-6987
> URL: https://issues.apache.org/jira/browse/NIFI-6987
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew M. Lim
>Assignee: Andrew M. Lim
>Priority: Minor
> Fix For: 1.11.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This section is outdated and should be removed.





[GitHub] [nifi] kaHaleMaKai commented on issue #3543: NIFI-6388 Add dynamic relationships to the ExecuteScript processor.

2020-01-15 Thread GitBox
kaHaleMaKai commented on issue #3543: NIFI-6388 Add dynamic relationships to 
the ExecuteScript processor.
URL: https://github.com/apache/nifi/pull/3543#issuecomment-574815244
 
 
   If required, I'll rebase the code against 1.12-SNAPSHOT, after 1.11 gets 
released.




[GitHub] [nifi] kaHaleMaKai commented on a change in pull request #3543: NIFI-6388 Add dynamic relationships to the ExecuteScript processor.

2020-01-15 Thread GitBox
kaHaleMaKai commented on a change in pull request #3543: NIFI-6388 Add dynamic 
relationships to the ExecuteScript processor.
URL: https://github.com/apache/nifi/pull/3543#discussion_r367060256
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/main/java/org/apache/nifi/processors/script/ExecuteScript.java
 ##
 @@ -91,13 +112,36 @@
 @SeeAlso({InvokeScriptedProcessor.class})
 public class ExecuteScript extends AbstractSessionFactoryProcessor implements 
Searchable {
 
-// Constants maintained for backwards compatibility
 public static final Relationship REL_SUCCESS = 
ScriptingComponentUtils.REL_SUCCESS;
 public static final Relationship REL_FAILURE = 
ScriptingComponentUtils.REL_FAILURE;
 
 private String scriptToRun = null;
 volatile ScriptingComponentHelper scriptingComponentHelper = new 
ScriptingComponentHelper();
 
+/** Whether to use dynamic relationships or not. */
+private volatile boolean useDynamicRelationships = 
Boolean.parseBoolean(USE_DYNAMIC_RELATIONSHIPS.getDefaultValue());
+
+/** Map to keep dynamic property keys and values.
+ * They need to be stored for the case that the value of {@code 
useDynamicProperties} changes.
+ */
+private final Map<String, String> dynamicProperties = new 
ConcurrentHashMap<>();
+
+private final Set<Relationship> relationships;
+private ComponentLog log;
+
+public ExecuteScript() {
+super();
+relationships = new ConcurrentSkipListSet<>();
+relationships.add(ExecuteScript.REL_SUCCESS);
+relationships.add(ExecuteScript.REL_FAILURE);
+}
+
+@Override
+protected void init(ProcessorInitializationContext context) {
+super.init(context);
+log = getLogger();
+log.warn("The default for USE_DYNAMIC_RELATIONSHIPS will change from 
\"false\" to \"true\" starting in NIFI 2.0");
 
 Review comment:
   done




[GitHub] [nifi] kaHaleMaKai commented on a change in pull request #3543: NIFI-6388 Add dynamic relationships to the ExecuteScript processor.

2020-01-15 Thread GitBox
kaHaleMaKai commented on a change in pull request #3543: NIFI-6388 Add dynamic 
relationships to the ExecuteScript processor.
URL: https://github.com/apache/nifi/pull/3543#discussion_r367059913
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/test/groovy/org/apache/nifi/processors/script/ExecuteScriptGroovyTest.groovy
 ##
 @@ -16,23 +16,20 @@
  */
 package org.apache.nifi.processors.script
 
+
 import org.apache.nifi.script.ScriptingComponentUtils
 import org.apache.nifi.util.MockFlowFile
 import org.apache.nifi.util.StopWatch
 import org.apache.nifi.util.TestRunners
-import org.junit.After
-import org.junit.Before
-import org.junit.BeforeClass
-import org.junit.Ignore
-import org.junit.Test
+import org.junit.*
 import org.junit.runner.RunWith
 import org.junit.runners.JUnit4
 import org.slf4j.Logger
 import org.slf4j.LoggerFactory
 
 import java.util.concurrent.TimeUnit
 
-import static org.junit.Assert.assertNotNull
+import static org.junit.Assert.*
 
 Review comment:
   done




[GitHub] [nifi] kaHaleMaKai commented on a change in pull request #3543: NIFI-6388 Add dynamic relationships to the ExecuteScript processor.

2020-01-15 Thread GitBox
kaHaleMaKai commented on a change in pull request #3543: NIFI-6388 Add dynamic 
relationships to the ExecuteScript processor.
URL: https://github.com/apache/nifi/pull/3543#discussion_r367059812
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/main/java/org/apache/nifi/script/ScriptingComponentUtils.java
 ##
 @@ -64,5 +68,15 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
+
+/** A property descriptor for specifying whether to use dynamic 
relationships or not */
+public static final PropertyDescriptor USE_DYNAMIC_RELATIONSHIPS = new 
PropertyDescriptor.Builder()
+.name("Use Dynamic Relationships")
+.description("Whether properties prefixed with \"REL_\" should 
create a dynamic relationship or not. The default will change to \"true\" in 
NIFI 2.0.")
 
 Review comment:
   done




[GitHub] [nifi] kaHaleMaKai commented on a change in pull request #3543: NIFI-6388 Add dynamic relationships to the ExecuteScript processor.

2020-01-15 Thread GitBox
kaHaleMaKai commented on a change in pull request #3543: NIFI-6388 Add dynamic 
relationships to the ExecuteScript processor.
URL: https://github.com/apache/nifi/pull/3543#discussion_r367060166
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/main/java/org/apache/nifi/processors/script/ExecuteScript.java
 ##
 @@ -148,6 +196,87 @@ protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String
 .build();
 }
 
+private PropertyDescriptor getDynamicRelationshipDescriptor(final String 
propertyDescriptorName) {
+// we allow for arbitrary relationship names, even empty strings
+final String relName = getRelationshipName(propertyDescriptorName);
+if (!isValidRelationshipName(relName)) {
+log.warn("dynamic property for relationship is invalid: '{}'. It 
must not be named REL_SUCCESS or REL_FAILURE (case-insensitive)",
+new Object[]{propertyDescriptorName}
+);
 
 Review comment:
   done
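
For readers skimming this thread: the naming rule described in the excerpt (strip the "REL_" prefix, reject the reserved success/failure names case-insensitively) can be sketched as below. The helper names getRelationshipName and isValidRelationshipName mirror the diff, but the bodies here are a hypothetical reconstruction, not the PR's actual code.

```java
// Hypothetical sketch of the "REL_" naming rule discussed above;
// not the actual implementation from PR #3543.
public class RelationshipNaming {
    static final String PREFIX = "REL_";

    /** Strip the "REL_" prefix to obtain the relationship name. */
    static String getRelationshipName(String propertyName) {
        return propertyName.startsWith(PREFIX)
                ? propertyName.substring(PREFIX.length())
                : propertyName;
    }

    /** Reject the reserved names backing REL_SUCCESS/REL_FAILURE, case-insensitively. */
    static boolean isValidRelationshipName(String relName) {
        return !relName.equalsIgnoreCase("SUCCESS") && !relName.equalsIgnoreCase("FAILURE");
    }

    public static void main(String[] args) {
        System.out.println(getRelationshipName("REL_retry"));   // prints "retry"
        System.out.println(isValidRelationshipName("retry"));   // prints "true"
        System.out.println(isValidRelationshipName("success")); // prints "false"
    }
}
```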




[GitHub] [nifi] kaHaleMaKai commented on a change in pull request #3543: NIFI-6388 Add dynamic relationships to the ExecuteScript processor.

2020-01-15 Thread GitBox
kaHaleMaKai commented on a change in pull request #3543: NIFI-6388 Add dynamic 
relationships to the ExecuteScript processor.
URL: https://github.com/apache/nifi/pull/3543#discussion_r367059984
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/main/java/org/apache/nifi/script/ScriptingComponentUtils.java
 ##
 @@ -64,5 +68,15 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
+
+/** A property descriptor for specifying whether to use dynamic 
relationships or not */
+public static final PropertyDescriptor USE_DYNAMIC_RELATIONSHIPS = new 
PropertyDescriptor.Builder()
+.name("Use Dynamic Relationships")
+.description("Whether properties prefixed with \"REL_\" should 
create a dynamic relationship or not. The default will change to \"true\" in 
NIFI 2.0.")
+.required(false)
 
 Review comment:
   done




[jira] [Updated] (NIFI-7022) Zookeeper 3.5 Starts Admin Server on Port 8080

2020-01-15 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-7022:
---
Fix Version/s: 1.11.0

> Zookeeper 3.5 Starts Admin Server on Port 8080
> --
>
> Key: NIFI-7022
> URL: https://issues.apache.org/jira/browse/NIFI-7022
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Assignee: Shawn Weeks
>Priority: Major
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The new version of Zookeeper automatically starts an admin server on port 
> 8080, causing a NiFi cluster not to start if you use the defaults.
> See https://zookeeper.apache.org/doc/r3.5.1-alpha/zookeeperAdmin.html





[jira] [Commented] (NIFI-7022) Zookeeper 3.5 Starts Admin Server on Port 8080

2020-01-15 Thread Shawn Weeks (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17016237#comment-17016237
 ] 

Shawn Weeks commented on NIFI-7022:
---

For anyone that needs a workaround on 1.10 just add this to your bootstrap.conf 
file.

{code:java}
java.arg.17=-Dzookeeper.admin.enableServer=false
{code}
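
Alternatively (an assumption based on the ZooKeeper 3.5 administrator documentation, not something taken from this ticket's patch), the same switch can be set in the embedded ZooKeeper's configuration file rather than via a JVM argument:

```properties
# zookeeper.properties (embedded ZooKeeper): disable the 3.5 AdminServer
# so it does not try to bind port 8080.
admin.enableServer=false
```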


> Zookeeper 3.5 Starts Admin Server on Port 8080
> --
>
> Key: NIFI-7022
> URL: https://issues.apache.org/jira/browse/NIFI-7022
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Assignee: Shawn Weeks
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The new version of Zookeeper automatically starts an admin server on port 
> 8080, causing a NiFi cluster not to start if you use the defaults.
> See https://zookeeper.apache.org/doc/r3.5.1-alpha/zookeeperAdmin.html





[jira] [Updated] (NIFI-7022) Zookeeper 3.5 Starts Admin Server on Port 8080

2020-01-15 Thread Shawn Weeks (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Weeks updated NIFI-7022:
--
Status: Patch Available  (was: Open)

> Zookeeper 3.5 Starts Admin Server on Port 8080
> --
>
> Key: NIFI-7022
> URL: https://issues.apache.org/jira/browse/NIFI-7022
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Assignee: Shawn Weeks
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The new version of Zookeeper automatically starts an admin server on port 
> 8080, causing a NiFi cluster not to start if you use the defaults.
> See https://zookeeper.apache.org/doc/r3.5.1-alpha/zookeeperAdmin.html





[jira] [Assigned] (NIFI-7022) Zookeeper 3.5 Starts Admin Server on Port 8080

2020-01-15 Thread Shawn Weeks (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Weeks reassigned NIFI-7022:
-

Assignee: Shawn Weeks

> Zookeeper 3.5 Starts Admin Server on Port 8080
> --
>
> Key: NIFI-7022
> URL: https://issues.apache.org/jira/browse/NIFI-7022
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Assignee: Shawn Weeks
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The new version of Zookeeper automatically starts an admin server on port 
> 8080, causing a NiFi cluster not to start if you use the defaults.
> See https://zookeeper.apache.org/doc/r3.5.1-alpha/zookeeperAdmin.html





[GitHub] [nifi] shawnweeks opened a new pull request #3989: NIFI-7022 - Disable Zookeeper Admin Server for Embedded Zookeeper

2020-01-15 Thread GitBox
shawnweeks opened a new pull request #3989: NIFI-7022 - Disable Zookeeper Admin 
Server for Embedded Zookeeper
URL: https://github.com/apache/nifi/pull/3989
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
   Zookeeper 3.5 now includes an Admin Server that starts on port 8080 by 
default. This conflicts with the NiFi default. Disable it by default for now and 
let the end user configure it if desired.
   
   _Enables X functionality; fixes bug NIFI-YYYY._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[jira] [Updated] (NIFI-6988) Add password properties to NiFi components that support kerberos

2020-01-15 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-6988:
--
Remaining Estimate: 240h  (was: 336h)
 Original Estimate: 240h  (was: 336h)

> Add password properties to NiFi components that support kerberos
> 
>
> Key: NIFI-6988
> URL: https://issues.apache.org/jira/browse/NIFI-6988
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>   Original Estimate: 240h
>  Remaining Estimate: 240h
>
> In addition to the principal/keytab and KerberosCredentialsService options 
> for accessing kerberized services from NiFi components, a password field 
> should be added to the affected component configurations.
> Components should validate that only one set of options should be configured:
>  * principal and keytab
>  * principal and password
>  * KerberosCredentialsService
> The components that will be affected by this change are listed as sub-tasks 
> of this JIRA.
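
The validation rule above (exactly one of the three Kerberos option sets may be configured) can be sketched in plain Java. KerberosConfigCheck and isValid are invented names for illustration; this is not NiFi's actual customValidate logic.

```java
// Hypothetical sketch of the mutual-exclusion rule described in this issue;
// invented names, not NiFi's actual validation code.
public class KerberosConfigCheck {
    /** True when exactly one of the three Kerberos option sets is supplied. */
    static boolean isValid(String principal, String keytab, String password,
                           boolean hasCredentialsService) {
        boolean principalAndKeytab =
                principal != null && keytab != null && password == null && !hasCredentialsService;
        boolean principalAndPassword =
                principal != null && password != null && keytab == null && !hasCredentialsService;
        boolean credentialsServiceOnly =
                hasCredentialsService && principal == null && keytab == null && password == null;
        return principalAndKeytab || principalAndPassword || credentialsServiceOnly;
    }

    public static void main(String[] args) {
        System.out.println(isValid("nifi@EXAMPLE.COM", "/etc/nifi.keytab", null, false)); // true
        System.out.println(isValid("nifi@EXAMPLE.COM", "/etc/nifi.keytab", "pw", false)); // false: two sets
        System.out.println(isValid(null, null, null, true));                              // true
    }
}
```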





[jira] [Updated] (NIFI-7018) Add kerberos password property to NiFi HDFS components

2020-01-15 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-7018:
--
Description: 
In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized HDFS endpoints from NiFi HDFS components, a password field 
should be added.

Components should validate that only one set of options should be configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService

  was:
In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized services from NiFi HDFS components, a password field 
should be added.

Components should validate that only one set of options should be configured:

principal and keytab
principal and password
KerberosCredentialsService


> Add kerberos password property to NiFi HDFS components
> --
>
> Key: NIFI-7018
> URL: https://issues.apache.org/jira/browse/NIFI-7018
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Extensions
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> In addition to the principal/keytab and KerberosCredentialsService options 
> for accessing kerberized HDFS endpoints from NiFi HDFS components, a password 
> field should be added.
> Components should validate that only one set of options should be configured:
>  * principal and keytab
>  * principal and password
>  * KerberosCredentialsService





[jira] [Updated] (NIFI-7019) Add kerberos principal and password properties to NiFi DBCPConnectionPool

2020-01-15 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-7019:
--
Summary: Add kerberos principal and password properties to NiFi 
DBCPConnectionPool  (was: Add Kerberos principal and password properties to 
NiFi DBCPConnectionPool)

> Add kerberos principal and password properties to NiFi DBCPConnectionPool
> -
>
> Key: NIFI-7019
> URL: https://issues.apache.org/jira/browse/NIFI-7019
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> In addition to the KerberosCredentialsService option for accessing kerberized 
> databases from DBCPConnectionPool, principal and password fields should be 
> added.
> Components should validate that only one set of options should be configured:
>  * principal and password
>  * KerberosCredentialsService





[jira] [Updated] (NIFI-6988) Add password properties to NiFi components that support kerberos

2020-01-15 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-6988:
--
Description: 
In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized services from NiFi components, a password field should be 
added to the affected component configurations.

Components should validate that only one set of options should be configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService

The components that will be affected by this change are listed as sub-tasks of 
this JIRA.

  was:
In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized services from NiFi Hadoop components, a password field 
should be added.

Components should validate that only one set of options should be configured:
* principal and keytab
* principal and password
* KerberosCredentialsService

The components that will be affected by this change:
* AbstractHadoopProcessor
* AbstractKuduProcessor
* DBCPConnectionPool
* HBase_1_1_2_ClientService
* HBase_2_ClientService
* Hive3ConnectionPool
* Hive_1_1ConnectionPool
* HiveConnectionPool
* HortonworksSchemaRegistry
* KafkaProcessorUtils
* KafkaRecordSink_1_0
* KafkaRecordSink_2_0
* Kerberos (in the nifi-atlas-reporting-task module)
* KuduLookupService
* PutHive3Streaming
* PutHiveStreaming
* ReportLineageToAtlas
* SolrProcessor
* SolrUtils (in the nifi-solr-processors module)


> Add password properties to NiFi components that support kerberos
> 
>
> Key: NIFI-6988
> URL: https://issues.apache.org/jira/browse/NIFI-6988
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> In addition to the principal/keytab and KerberosCredentialsService options 
> for accessing kerberized services from NiFi components, a password field 
> should be added to the affected component configurations.
> Components should validate that only one set of options should be configured:
>  * principal and keytab
>  * principal and password
>  * KerberosCredentialsService
> The components that will be affected by this change are listed as sub-tasks 
> of this JIRA.





[jira] [Updated] (NIFI-7027) Add kerberos password property to NiFi Kafka components

2020-01-15 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-7027:
--
Summary: Add kerberos password property to NiFi Kafka components  (was: Add 
kerberos password property to Kafka components)

> Add kerberos password property to NiFi Kafka components
> ---
>
> Key: NIFI-7027
> URL: https://issues.apache.org/jira/browse/NIFI-7027
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> In addition to the principal/keytab and KerberosCredentialsService options 
> for accessing kerberized Kafka endpoints from NiFi's Kafka components, a 
> password field should be added.
> Components should validate that only one set of options should be configured:
>  * principal and keytab
>  * principal and password
>  * KerberosCredentialsService
> The components that will be affected by this change:
>  * KafkaProcessorUtils
>  * KafkaRecordSink_1_0
>  * KafkaRecordSink_2_0





[jira] [Updated] (NIFI-7029) Add kerberos password property to NiFi Kudu components

2020-01-15 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-7029:
--
Description: 
In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized Kudu endpoints from NiFi Kudu components, a password field 
should be added.

Components should validate that only one set of options should be configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService

The components that will be affected by this change:
 * AbstractKuduProcessor
 * PutKudu
 * KuduLookupService

  was:
In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized Kudu endpoints from NiFi Kudu components, a password field 
should be added.

Components should validate that only one set of options should be configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService

The components that will be affected by this change:
 * PutKudu
 * KuduLookupService


> Add kerberos password property to NiFi Kudu components
> --
>
> Key: NIFI-7029
> URL: https://issues.apache.org/jira/browse/NIFI-7029
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Extensions
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> In addition to the principal/keytab and KerberosCredentialsService options 
> for accessing kerberized Kudu endpoints from NiFi Kudu components, a password 
> field should be added.
> Components should validate that only one set of options should be configured:
>  * principal and keytab
>  * principal and password
>  * KerberosCredentialsService
> The components that will be affected by this change:
>  * AbstractKuduProcessor
>  * PutKudu
>  * KuduLookupService





[jira] [Updated] (NIFI-7026) Add kerberos password property to NiFi HortonworksSchemaRegistry

2020-01-15 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-7026:
--
Summary: Add kerberos password property to NiFi HortonworksSchemaRegistry  
(was: Add kerberos password property to HortonworksSchemaRegistry)

> Add kerberos password property to NiFi HortonworksSchemaRegistry
> 
>
> Key: NIFI-7026
> URL: https://issues.apache.org/jira/browse/NIFI-7026
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Extensions
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> In addition to the principal/keytab and KerberosCredentialsService options 
> for accessing kerberized Schema Registry from the HortonworksSchemaRegistry 
> controller service, a password field should be added.
> Components should validate that only one set of options should be configured:
>  * principal and keytab
>  * principal and password
>  * KerberosCredentialsService





[jira] [Updated] (NIFI-7018) Add kerberos password property to NiFi HDFS components

2020-01-15 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-7018:
--
Summary: Add kerberos password property to NiFi HDFS components  (was: Add 
password property to HDFS components)

> Add kerberos password property to NiFi HDFS components
> --
>
> Key: NIFI-7018
> URL: https://issues.apache.org/jira/browse/NIFI-7018
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Extensions
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> In addition to the principal/keytab and KerberosCredentialsService options 
> for accessing kerberized services from NiFi HDFS components, a password field 
> should be added.
> Components should validate that only one set of options should be configured:
> principal and keytab
> principal and password
> KerberosCredentialsService





[jira] [Updated] (NIFI-7019) Add Kerberos principal and password properties to NiFi DBCPConnectionPool

2020-01-15 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-7019:
--
Summary: Add Kerberos principal and password properties to NiFi 
DBCPConnectionPool  (was: Add principal and password properties to NiFi 
DBCPConnectionPool)

> Add Kerberos principal and password properties to NiFi DBCPConnectionPool
> -
>
> Key: NIFI-7019
> URL: https://issues.apache.org/jira/browse/NIFI-7019
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> In addition to the KerberosCredentialsService option for accessing kerberized 
> databases from DBCPConnectionPool, principal and password fields should be 
> added.
> Components should validate that only one set of options should be configured:
>  * principal and password
>  * KerberosCredentialsService





[jira] [Created] (NIFI-7030) Add kerberos password property to NiFi Solr components

2020-01-15 Thread Jeff Storck (Jira)
Jeff Storck created NIFI-7030:
-

 Summary: Add kerberos password property to NiFi Solr components
 Key: NIFI-7030
 URL: https://issues.apache.org/jira/browse/NIFI-7030
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Extensions
Reporter: Jeff Storck
Assignee: Jeff Storck


In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized Solr endpoints from NiFi Solr components, a password field 
should be added.

Components should validate that only one set of options should be configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService

The components that will be affected by this change:
 * SolrProcessor
 * SolrUtils (in the nifi-solr-processors module)





[jira] [Updated] (NIFI-6976) Native integrations of Apache Nifi with Delta Lake

2020-01-15 Thread Martin (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin updated NIFI-6976:
-
Description: 
"Delta Lake is an open-source storage layer that brings ACID transactions to 
Apache Spark and big data workloads" ([https://delta.io/])

 

NiFi already offers many features that make it unique. A Delta integration 
would complement this scope in a sensible way.

 

Source: [https://github.com/delta-io/delta]

 ## Update ##

Here is a demo of how this looks in StreamSets. Having the same feature set in 
NiFi would be awesome.

Video: [https://youtu.be/VLd_qOrKrTI]

 

  was:
"Delta Lake is an open-source storage layer that brings ACID transactions to 
Apache Spark and big data workloads" ([https://delta.io/])

 

NiFi already offers many features that make it unique. A Delta integration 
would complement this scope in a sensible way.

 

Source: [https://github.com/delta-io/delta]

 

## Update ##

Here is a demo of how this looks in StreamSets. Having the same feature set in 
NiFi would be awesome.

Video: [https://youtu.be/VLd_qOrKrTI]

 


> Native integrations of Apache Nifi with Delta Lake
> --
>
> Key: NIFI-6976
> URL: https://issues.apache.org/jira/browse/NIFI-6976
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.11.0
>Reporter: Martin
>Priority: Major
>  Labels: ACID, delta, integration, spark, storage-layer
>
> "Delta Lake is an open-source storage layer that brings ACID transactions to 
> Apache Spark and big data workloads" ([https://delta.io/])
>  
> NiFi already offers many features that make it unique. A Delta integration 
> would complement this scope in a sensible way.
>  
> Source: [https://github.com/delta-io/delta]
>  ## Update ##
> Here is a demo of how this looks in StreamSets. Having the same feature set in 
> NiFi would be awesome.
> Video: [https://youtu.be/VLd_qOrKrTI]
>  





[jira] [Updated] (NIFI-6976) Native integrations of Apache Nifi with Delta Lake

2020-01-15 Thread Martin (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin updated NIFI-6976:
-
Description: 
"Delta Lake is an open-source storage layer that brings ACID transactions to 
Apache Spark and big data workloads" ([https://delta.io/])

 

NiFi already offers many features that make it unique. A Delta integration 
would complement this scope in a sensible way.

 

Source: [https://github.com/delta-io/delta]

 

 

## Update ##

Here is a demo of how this looks in StreamSets. Having the same feature set in 
NiFi would be awesome.

Video: https://youtu.be/VLd_qOrKrTI

 

  was:
"Delta Lake is an open-source storage layer that brings ACID transactions to 
Apache Spark and big data workloads" ([https://delta.io/])

 

NiFi already offers many features that make it unique. A Delta integration 
would complement this scope in a sensible way.

 

Source: [https://github.com/delta-io/delta]

 


> Native integrations of Apache Nifi with Delta Lake
> --
>
> Key: NIFI-6976
> URL: https://issues.apache.org/jira/browse/NIFI-6976
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.11.0
>Reporter: Martin
>Priority: Major
>  Labels: ACID, delta, integration, spark, storage-layer
>
> "Delta Lake is an open-source storage layer that brings ACID transactions to 
> Apache Spark and big data workloads" ([https://delta.io/])
>  
> NiFi already offers many features that make it unique. A Delta integration 
> would complement this scope in a sensible way.
>  
> Source: [https://github.com/delta-io/delta]
>  
>  
> ## Update ##
> Here is a demo of how this looks in StreamSets. Having the same feature set in 
> NiFi would be awesome.
> Video: https://youtu.be/VLd_qOrKrTI
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)



[jira] [Updated] (NIFI-7029) Add kerberos password property to NiFi Kudu components

2020-01-15 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-7029:
--
Description: 
In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized Kudu endpoints from NiFi Kudu components, a password field 
should be added.

Components should validate that only one set of options is configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService

The components that will be affected by this change:
 * PutKudu
 * KuduLookupService

  was:
In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized Kudu endpoints from NiFi Kudu components, a password field 
should be added.

Components should validate that only one set of options is configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService


> Add kerberos password property to NiFi Kudu components
> --
>
> Key: NIFI-7029
> URL: https://issues.apache.org/jira/browse/NIFI-7029
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Extensions
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> In addition to the principal/keytab and KerberosCredentialsService options 
> for accessing kerberized Kudu endpoints from NiFi Kudu components, a password 
> field should be added.
> Components should validate that only one set of options is configured:
>  * principal and keytab
>  * principal and password
>  * KerberosCredentialsService
> The components that will be affected by this change:
>  * PutKudu
>  * KuduLookupService



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7029) Add kerberos password property to NiFi Kudu components

2020-01-15 Thread Jeff Storck (Jira)
Jeff Storck created NIFI-7029:
-

 Summary: Add kerberos password property to NiFi Kudu components
 Key: NIFI-7029
 URL: https://issues.apache.org/jira/browse/NIFI-7029
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Extensions
Reporter: Jeff Storck
Assignee: Jeff Storck


In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized Kudu endpoints from NiFi Kudu components, a password field 
should be added.

Components should validate that only one set of options is configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7028) Add kerberos password property to NiFi Atlas components

2020-01-15 Thread Jeff Storck (Jira)
Jeff Storck created NIFI-7028:
-

 Summary: Add kerberos password property to NiFi Atlas components
 Key: NIFI-7028
 URL: https://issues.apache.org/jira/browse/NIFI-7028
 Project: Apache NiFi
  Issue Type: Sub-task
Reporter: Jeff Storck
Assignee: Jeff Storck


In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized Atlas endpoints from NiFi's Atlas components, a password 
field should be added.

Components should validate that only one set of options is configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService

The components that will be affected by this change:
 * Kerberos (in the nifi-atlas-reporting-task module)
 * ReportLineageToAtlas



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7022) Zookeeper 3.5 Starts Admin Server on Port 8080

2020-01-15 Thread Shawn Weeks (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17016195#comment-17016195
 ] 

Shawn Weeks commented on NIFI-7022:
---

Just to be clear, in my actual instance I had real servers listed.

> Zookeeper 3.5 Starts Admin Server on Port 8080
> --
>
> Key: NIFI-7022
> URL: https://issues.apache.org/jira/browse/NIFI-7022
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Priority: Major
>
> The new version of Zookeeper automatically starts an admin server on port 
> 8080 causing NiFi Cluster not to start if you use the defaults.
> See https://zookeeper.apache.org/doc/r3.5.1-alpha/zookeeperAdmin.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7027) Add kerberos password property to Kafka components

2020-01-15 Thread Jeff Storck (Jira)
Jeff Storck created NIFI-7027:
-

 Summary: Add kerberos password property to Kafka components
 Key: NIFI-7027
 URL: https://issues.apache.org/jira/browse/NIFI-7027
 Project: Apache NiFi
  Issue Type: Sub-task
Reporter: Jeff Storck
Assignee: Jeff Storck


In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized Kafka endpoints from NiFi's Kafka components, a password 
field should be added.

Components should validate that only one set of options is configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService

The components that will be affected by this change:
 * KafkaProcessorUtils
 * KafkaRecordSink_1_0
 * KafkaRecordSink_2_0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7026) Add kerberos password property to HortonworksSchemaRegistry

2020-01-15 Thread Jeff Storck (Jira)
Jeff Storck created NIFI-7026:
-

 Summary: Add kerberos password property to 
HortonworksSchemaRegistry
 Key: NIFI-7026
 URL: https://issues.apache.org/jira/browse/NIFI-7026
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Extensions
Reporter: Jeff Storck
Assignee: Jeff Storck


In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized Schema Registry from the HortonworksSchemaRegistry 
controller service, a password field should be added.

Components should validate that only one set of options is configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7022) Zookeeper 3.5 Starts Admin Server on Port 8080

2020-01-15 Thread Shawn Weeks (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17016192#comment-17016192
 ] 

Shawn Weeks commented on NIFI-7022:
---

Tracked it down further. It only happens if you have multiple servers in your 
zookeeper.properties. For example, add a second one like this to the end. You 
don't even have to have multiple NiFi instances.
{code:java}
server.2=nifi-node2-hostname:2888:3888;2181
{code}
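Note: the AdminServer is configurable; per the ZooKeeper admin guide it can be disabled entirely, or moved off port 8080, by adding to the same zookeeper.properties (the port value below is just an example):

{code}
admin.enableServer=false
# or, to keep it but avoid the clash with NiFi's default web port:
admin.serverPort=9090
{code}

The equivalent JVM system properties are zookeeper.admin.enableServer and zookeeper.admin.serverPort.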


> Zookeeper 3.5 Starts Admin Server on Port 8080
> --
>
> Key: NIFI-7022
> URL: https://issues.apache.org/jira/browse/NIFI-7022
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Priority: Major
>
> The new version of Zookeeper automatically starts an admin server on port 
> 8080 causing NiFi Cluster not to start if you use the defaults.
> See https://zookeeper.apache.org/doc/r3.5.1-alpha/zookeeperAdmin.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7019) Add principal and password properties to NiFi DBPCConnectionPool

2020-01-15 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-7019:
--
Description: 
In addition to the KerberosCredentialsService option for accessing kerberized 
databases from DBCPConnectionPool, principal and password fields should be 
added.

Components should validate that only one set of options is configured:
 * principal and password
 * KerberosCredentialsService

  was:
In addition to the KerberosCredentialsService option for accessing kerberized 
services from DBCPConnectionPool, principal and password fields should be added.

Components should validate that only one set of options is configured:

principal and password
KerberosCredentialsService


> Add principal and password properties to NiFi DBPCConnectionPool
> 
>
> Key: NIFI-7019
> URL: https://issues.apache.org/jira/browse/NIFI-7019
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> In addition to the KerberosCredentialsService option for accessing kerberized 
> databases from DBCPConnectionPool, principal and password fields should be 
> added.
> Components should validate that only one set of options is configured:
>  * principal and password
>  * KerberosCredentialsService



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7022) Zookeeper 3.5 Starts Admin Server on Port 8080

2020-01-15 Thread Bryan Bende (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17016191#comment-17016191
 ] 

Bryan Bende commented on NIFI-7022:
---

I launched clusters on master and 1.10.0 and didn’t run into this issue. The ZK 
docs do say the admin server is enabled by default, but I'm wondering whether it 
only applies when launching a full external ZK, and not to the way we launch the 
embedded one.

> Zookeeper 3.5 Starts Admin Server on Port 8080
> --
>
> Key: NIFI-7022
> URL: https://issues.apache.org/jira/browse/NIFI-7022
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Priority: Major
>
> The new version of Zookeeper automatically starts an admin server on port 
> 8080 causing NiFi Cluster not to start if you use the defaults.
> See https://zookeeper.apache.org/doc/r3.5.1-alpha/zookeeperAdmin.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7025) Add kerberos password property to NiFi Hive components

2020-01-15 Thread Jeff Storck (Jira)
Jeff Storck created NIFI-7025:
-

 Summary: Add kerberos password property to NiFi Hive components
 Key: NIFI-7025
 URL: https://issues.apache.org/jira/browse/NIFI-7025
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Extensions
Reporter: Jeff Storck
Assignee: Jeff Storck


In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized services from NiFi Hive components, a password field 
should be added.

Components should validate that only one set of options is configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService

The components that will be affected by this change:
 * Hive3ConnectionPool
 * Hive_1_1ConnectionPool
 * HiveConnectionPool
 * PutHive3Streaming
 * PutHiveStreaming



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7024) Add kerberos password property to NiFi HBase components

2020-01-15 Thread Jeff Storck (Jira)
Jeff Storck created NIFI-7024:
-

 Summary: Add kerberos password property to NiFi HBase components
 Key: NIFI-7024
 URL: https://issues.apache.org/jira/browse/NIFI-7024
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Extensions
Reporter: Jeff Storck
Assignee: Jeff Storck


In addition to the principal/keytab and KerberosCredentialsService options for 
accessing kerberized services from NiFi HBase components, a password field 
should be added.

Components should validate that only one set of options is configured:
 * principal and keytab
 * principal and password
 * KerberosCredentialsService

The components that will be affected by this change:
 * HBase_1_1_2_ClientService
 * HBase_2_ClientService



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7023) Improve template handling

2020-01-15 Thread Andy LoPresto (Jira)
Andy LoPresto created NIFI-7023:
---

 Summary: Improve template handling
 Key: NIFI-7023
 URL: https://issues.apache.org/jira/browse/NIFI-7023
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.10.0
Reporter: Andy LoPresto
Assignee: Andy LoPresto
 Fix For: 1.11.0


Change template handling process. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
bakaid commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r366982045
 
 

 ##
 File path: extensions/sql/data/JSONSQLWriter.cpp
 ##
 @@ -0,0 +1,101 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "JSONSQLWriter.h"
+#include "rapidjson/writer.h"
+#include "rapidjson/stringbuffer.h"
+#include "rapidjson/prettywriter.h"
+#include "Exception.h"
+#include "Utils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace sql {
+
+JSONSQLWriter::JSONSQLWriter()
+  : jsonPayload_(rapidjson::kArrayType) {
+}
+
+JSONSQLWriter::~JSONSQLWriter() {}
+
+void JSONSQLWriter::beginProcessRow() {
+  jsonRow_ = rapidjson::kObjectType;
+}
+
+void JSONSQLWriter::endProcessRow() {
+  jsonPayload_.PushBack(jsonRow_, jsonPayload_.GetAllocator());
+}
+
+void JSONSQLWriter::processColumnName(const std::string& name) {}
+
+void JSONSQLWriter::processColumn(const std::string& name, const std::string& 
value) {
+  addToJSONRow(name, toJSONString(value));
+}
+
+void JSONSQLWriter::processColumn(const std::string& name, double value) {
+  addToJSONRow(name, rapidjson::Value().SetDouble(value));
+}
+
+void JSONSQLWriter::processColumn(const std::string& name, int value) {
+  addToJSONRow(name, rapidjson::Value().SetInt(value));
+}
+
+void JSONSQLWriter::processColumn(const std::string& name, long long value) {
+  addToJSONRow(name, rapidjson::Value().SetInt64(value));
+}
+
+void JSONSQLWriter::processColumn(const std::string& name, unsigned long long 
value) {
+  addToJSONRow(name, rapidjson::Value().SetUint64(value));
+}
+
+void JSONSQLWriter::processColumn(const std::string& name, const char* value) {
+  addToJSONRow(name, toJSONString(value));
+}
+
+void JSONSQLWriter::addToJSONRow(const std::string& columnName, 
rapidjson::Value& jsonValue) {
 
 Review comment:
   And for the solution in the previous comment to work change this to
   ```
   void JSONSQLWriter::addToJSONRow(const std::string& columnName, rapidjson::Value&& jsonValue) {
     jsonRow_.AddMember(toJSONString(columnName), std::move(jsonValue), jsonPayload_.GetAllocator());
   }
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
bakaid commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r366988843
 
 

 ##
 File path: extensions/sql/data/SQLRowSubscriber.h
 ##
 @@ -0,0 +1,46 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace sql {
+
+struct SQLRowSubscriber {
 
 Review comment:
   As far as I can see we don't use this for generic collections, and don't 
destruct objects through this interface, but in some later refactor we might, 
so to be on the safe side I think it would be better to add a virtual 
destructor.
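   A minimal sketch of the hazard the reviewer is guarding against; the type names here are illustrative, not the MiNiFi classes. Deleting a derived object through a base-class pointer is undefined behavior unless the base destructor is virtual:
   ```cpp
   #include <memory>

   // Illustrative only: Base stands in for SQLRowSubscriber. With a virtual
   // destructor, deleting a Derived through a Base pointer runs ~Derived().
   static bool derived_destroyed = false;

   struct Base {
     virtual ~Base() = default;  // the destructor the review asks for
   };

   struct Derived : Base {
     ~Derived() override { derived_destroyed = true; }  // e.g. releases row resources
   };

   // Destroys a Derived through a Base pointer; returns whether ~Derived() ran.
   bool destroy_via_base() {
     derived_destroyed = false;
     { std::unique_ptr<Base> p = std::make_unique<Derived>(); }
     return derived_destroyed;
   }

   int main() { return destroy_via_base() ? 0 : 1; }
   ```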


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (NIFI-7022) Zookeeper 3.5 Starts Admin Server on Port 8080

2020-01-15 Thread Shawn Weeks (Jira)
Shawn Weeks created NIFI-7022:
-

 Summary: Zookeeper 3.5 Starts Admin Server on Port 8080
 Key: NIFI-7022
 URL: https://issues.apache.org/jira/browse/NIFI-7022
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Shawn Weeks


The new version of Zookeeper automatically starts an admin server on port 8080 
causing NiFi Cluster not to start if you use the defaults.

See https://zookeeper.apache.org/doc/r3.5.1-alpha/zookeeperAdmin.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
bakaid commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r366888519
 
 

 ##
 File path: win_build_vs.bat
 ##
 @@ -65,7 +65,7 @@ cd %builddir%\
 
 
 
-cmake -G %generator% -DCMAKE_BUILD_TYPE_INIT=%cmake_build_type% 
-DCMAKE_BUILD_TYPE=%cmake_build_type% -DWIN32=WIN32 
-DENABLE_LIBRDKAFKA=%build_kafka% -DENABLE_JNI=%build_jni% -DOPENSSL_OFF=OFF 
-DENABLE_COAP=%build_coap% -DUSE_SHARED_LIBS=OFF -DDISABLE_CONTROLLER=ON  
-DBUILD_ROCKSDB=ON -DFORCE_WINDOWS=ON -DUSE_SYSTEM_UUID=OFF 
-DDISABLE_LIBARCHIVE=ON -DDISABLE_SCRIPTING=ON -DEXCLUDE_BOOST=ON 
-DENABLE_WEL=TRUE -DFAIL_ON_WARNINGS=OFF -DSKIP_TESTS=%skiptests% .. && msbuild 
/m nifi-minifi-cpp.sln /property:Configuration=%build_type% 
/property:Platform=%build_platform% && copy main\%build_type%\minifi.exe main\
+cmake -G %generator% -DENABLE_SQL=ON 
-DCMAKE_BUILD_TYPE_INIT=%cmake_build_type% 
-DCMAKE_BUILD_TYPE=%cmake_build_type% -DWIN32=WIN32 
-DENABLE_LIBRDKAFKA=%build_kafka% -DENABLE_JNI=%build_jni% -DOPENSSL_OFF=OFF 
-DENABLE_COAP=%build_coap% -DUSE_SHARED_LIBS=OFF -DDISABLE_CONTROLLER=ON  
-DBUILD_ROCKSDB=ON -DFORCE_WINDOWS=ON -DUSE_SYSTEM_UUID=OFF 
-DDISABLE_LIBARCHIVE=ON -DDISABLE_SCRIPTING=ON -DEXCLUDE_BOOST=ON 
-DENABLE_WEL=TRUE -DFAIL_ON_WARNINGS=OFF -DSKIP_TESTS=%skiptests% .. && msbuild 
/m nifi-minifi-cpp.sln /property:Configuration=%build_type% 
/property:Platform=%build_platform% && copy main\%build_type%\minifi.exe main\
 
 Review comment:
   Please make this configurable via a command line option like other 
extensions.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
bakaid commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r366892148
 
 

 ##
 File path: extensions/sql/services/DatabaseService.cpp
 ##
 @@ -0,0 +1,69 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "core/logging/LoggerConfiguration.h"
+#include "core/controller/ControllerService.h"
+#include 
+#include 
+#include 
+#include "core/Property.h"
+#include "DatabaseService.h"
+#include "DatabaseService.h"
+#include "io/validation.h"
+#include "properties/Configure.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace sql {
+namespace controllers {
+
+static core::Property RemoteServer;
+static core::Property Port;
+static core::Property MaxQueueSize;
+
+core::Property 
DatabaseService::ConnectionString(core::PropertyBuilder::createProperty("Connection
 String")->withDescription("Database Connection 
String")->isRequired(true)->build());
+
+void DatabaseService::initialize() {
+  if (initialized_)
 
 Review comment:
   This is now a simple bool; we can only access it safely under an 
initialization_mutex_ lock, otherwise it is not thread-safe.
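   A sketch of the guarded initialization being requested; the class and the counter are illustrative, but the member names initialized_ and initialization_mutex_ mirror the ones discussed above:
   ```cpp
   #include <mutex>

   // Illustrative sketch, not the real DatabaseService: both the read and the
   // write of initialized_ happen under initialization_mutex_.
   class DatabaseServiceSketch {
    public:
     void initialize() {
       std::lock_guard<std::mutex> lock(initialization_mutex_);
       if (initialized_) return;  // safe: checked under the lock
       ++init_count_;             // stands in for the real one-time setup
       initialized_ = true;
     }
     int initCount() const { return init_count_; }  // test helper only

    private:
     bool initialized_ = false;
     int init_count_ = 0;
     std::mutex initialization_mutex_;
   };

   int main() {
     DatabaseServiceSketch s;
     s.initialize();
     s.initialize();  // second call is a no-op
     return s.initCount() == 1 ? 0 : 1;
   }
   ```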


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
bakaid commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r366995351
 
 

 ##
 File path: extensions/sql/processors/QueryDatabaseTable.cpp
 ##
 @@ -0,0 +1,465 @@
+/**
+ * @file QueryDatabaseTable.cpp
+ * PutSQL class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "QueryDatabaseTable.h"
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "io/DataStream.h"
+#include "core/ProcessContext.h"
+#include "core/ProcessSession.h"
+#include "Exception.h"
+#include "utils/OsUtils.h"
+#include "data/DatabaseConnectors.h"
+#include "data/JSONSQLWriter.h"
+#include "data/SQLRowsetProcessor.h"
+#include "data/WriteCallback.h"
+#include "data/MaxCollector.h"
+#include "data/Utils.h"
+#include "utils/file/FileUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+const std::string QueryDatabaseTable::ProcessorName("QueryDatabaseTable");
+
+const core::Property QueryDatabaseTable::s_tableName(
+  core::PropertyBuilder::createProperty("Table 
Name")->isRequired(true)->withDescription("The name of the database table to be 
queried.")->supportsExpressionLanguage(true)->build());
+
+const core::Property QueryDatabaseTable::s_columnNames(
+  core::PropertyBuilder::createProperty("Columns to 
Return")->isRequired(false)->withDescription(
+"A comma-separated list of column names to be used in the query. If your 
database requires special treatment of the names (quoting, e.g.), each name 
should include such treatment. "
+"If no column names are supplied, all columns in the specified table will 
be returned. "
+"NOTE: It is important to use consistent column names for a given table 
for incremental fetch to work 
properly.")->supportsExpressionLanguage(true)->build());
+
+const core::Property QueryDatabaseTable::s_maxValueColumnNames(
+  core::PropertyBuilder::createProperty("Maximum-value 
Columns")->isRequired(false)->withDescription(
+"A comma-separated list of column names. The processor will keep track of 
the maximum value for each column that has been returned since the processor 
started running. "
+"Using multiple columns implies an order to the column list, and each 
column's values are expected to increase more slowly than the previous columns' 
values. "
+"Thus, using multiple columns implies a hierarchical structure of columns, 
which is usually used for partitioning tables. "
+"This processor can be used to retrieve only those rows that have been 
added/updated since the last retrieval. "
+"Note that some ODBC types such as bit/boolean are not conducive to 
maintaining maximum value, so columns of these types should not be listed in 
this property, and will result in error(s) during processing. "
+"If no columns are provided, all rows from the table will be considered, 
which could have a performance impact. "
+"NOTE: It is important to use consistent max-value column names for a 
given table for incremental fetch to work 
properly.")->supportsExpressionLanguage(true)->build());
+
+const core::Property QueryDatabaseTable::s_whereClause(
+  
core::PropertyBuilder::createProperty("db-fetch-where-clause")->isRequired(false)->withDescription(
+"A custom clause to be added in the WHERE condition when building SQL 
queries.")->supportsExpressionLanguage(true)->build());
+
+const core::Property QueryDatabaseTable::s_sqlQuery(
+  
core::PropertyBuilder::createProperty("db-fetch-sql-query")->isRequired(false)->withDescription(
+"A custom SQL query used to retrieve data. Instead of building a SQL query 
from other properties, this query will be wrapped as a sub-query. "
+"Query must have no ORDER BY 
statement.")->supportsExpressionLanguage(true)->build());
+
+const core::Property QueryDatabaseTable::s_maxRowsPerFlowFile(
+  
core::PropertyBuilder::createProperty("qdbt-max-rows")->isRequired(true)->withDefaultValue(0)->withDescription(
+"The maximum number of result rows that will be included in a single 
FlowFile. This will 

[GitHub] [nifi-minifi-cpp] bakaid commented on a change in pull request #656: MINIFI-1013 Used soci library.

2020-01-15 Thread GitBox
bakaid commented on a change in pull request #656: MINIFI-1013 Used soci 
library.
URL: https://github.com/apache/nifi-minifi-cpp/pull/656#discussion_r366981746
 
 

 ##
 File path: extensions/sql/data/JSONSQLWriter.cpp
 ##
 @@ -0,0 +1,101 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "JSONSQLWriter.h"
+#include "rapidjson/writer.h"
+#include "rapidjson/stringbuffer.h"
+#include "rapidjson/prettywriter.h"
+#include "Exception.h"
+#include "Utils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace sql {
+
+JSONSQLWriter::JSONSQLWriter()
+  : jsonPayload_(rapidjson::kArrayType) {
+}
+
+JSONSQLWriter::~JSONSQLWriter() {}
+
+void JSONSQLWriter::beginProcessRow() {
+  jsonRow_ = rapidjson::kObjectType;
+}
+
+void JSONSQLWriter::endProcessRow() {
+  jsonPayload_.PushBack(jsonRow_, jsonPayload_.GetAllocator());
+}
+
+void JSONSQLWriter::processColumnName(const std::string& name) {}
+
+void JSONSQLWriter::processColumn(const std::string& name, const std::string& value) {
+  addToJSONRow(name, toJSONString(value));
+}
+
+void JSONSQLWriter::processColumn(const std::string& name, double value) {
+  addToJSONRow(name, rapidjson::Value().SetDouble(value));
 
 Review comment:
   These do not compile, because you are trying to bind a temporary to a 
non-const lvalue reference. This is not allowed by the standard, but MSVC 
accepts it as a non-conforming language extension (no other major compiler does).
   Since these are temporaries and `AddMember` ultimately moves out their 
contents, I suggest changing them to:
   ```
   addToJSONRow(name, std::move(rapidjson::Value().SetDouble(value)));
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-7021) Release Management for Apache NiFi 1.11.0

2020-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17016147#comment-17016147
 ] 

ASF subversion and git services commented on NIFI-7021:
---

Commit c5c0dc1a0ad0df306dbd8ee4914e46699109642d in nifi's branch 
refs/heads/NIFI-7021-RC1 from Joe Witt
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=c5c0dc1 ]

NIFI-7021-RC1 prepare release nifi-1.11.0-RC1


> Release Management for Apache NiFi 1.11.0
> -
>
> Key: NIFI-7021
> URL: https://issues.apache.org/jira/browse/NIFI-7021
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Tools and Build
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Trivial
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7021) Release Management for Apache NiFi 1.11.0

2020-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17016148#comment-17016148
 ] 

ASF subversion and git services commented on NIFI-7021:
---

Commit f628f64cd8f60d0ba44bb3b49fdc7855d352bf32 in nifi's branch 
refs/heads/NIFI-7021-RC1 from Joe Witt
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=f628f64 ]

NIFI-7021-RC1 prepare for next development iteration


> Release Management for Apache NiFi 1.11.0
> -
>
> Key: NIFI-7021
> URL: https://issues.apache.org/jira/browse/NIFI-7021
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Tools and Build
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Trivial
> Fix For: 1.11.0
>
>






[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #711: MINIFICPP-1120 - clarify C2.md

2020-01-15 Thread GitBox
arpadboda closed pull request #711: MINIFICPP-1120 - clarify C2.md
URL: https://github.com/apache/nifi-minifi-cpp/pull/711
 
 
   




[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #711: MINIFICPP-1120 - clarify C2.md

2020-01-15 Thread GitBox
arpadboda commented on a change in pull request #711: MINIFICPP-1120 - clarify 
C2.md
URL: https://github.com/apache/nifi-minifi-cpp/pull/711#discussion_r366981364
 
 

 ##
 File path: C2.md
 ##
 @@ -75,14 +75,14 @@ an alternate key, but you are encouraged to switch your 
configuration options as

nifi.c2.rest.url.ack=http://localhost:10080/minifi-c2-api/c2-protocol/acknowledge
 
 Review comment:
   Thanks, I like it!




[jira] [Assigned] (NIFI-6403) ElasticSearch field selection broken in Elastic 7.0+

2020-01-15 Thread Joseph Gresock (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock reassigned NIFI-6403:


Assignee: Joseph Gresock

> ElasticSearch field selection broken in Elastic 7.0+
> 
>
> Key: NIFI-6403
> URL: https://issues.apache.org/jira/browse/NIFI-6403
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.9.2
>Reporter: Wietze B
>Assignee: Joseph Gresock
>Priority: Major
>   Original Estimate: 0.25h
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Elastic has 
> [deprecated|https://www.elastic.co/guide/en/elasticsearch/reference/6.6/breaking-changes-6.6.html#_deprecate_literal__source_exclude_literal_and_literal__source_include_literal_url_parameters]
>  the {{source_include}} search parameter in favour of {{source_includes}} in 
> version 7.0 and higher. 
> This means that processors using the field selection will get an HTTP 400 
> error upon execution. 
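
A client could pick the parameter name based on the server's major version, along these lines (an illustrative sketch; `sourceIncludesParam` is a hypothetical helper, not NiFi's actual fix):

```java
public class SourceParam {
    // Elasticsearch 6.6 deprecated the singular _source_include query parameter
    // in favour of _source_includes; 7.0+ rejects the old name with HTTP 400.
    static String sourceIncludesParam(int esMajorVersion) {
        return esMajorVersion >= 7 ? "_source_includes" : "_source_include";
    }

    public static void main(String[] args) {
        // e.g. /index/_search?_source_include=field1,field2 on ES 6
        System.out.println(sourceIncludesParam(6)); // prints _source_include
        // e.g. /index/_search?_source_includes=field1,field2 on ES 7
        System.out.println(sourceIncludesParam(7)); // prints _source_includes
    }
}
```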





[jira] [Assigned] (NIFI-6458) ElasticSearchClientServiceImpl does not support Elasticsearch 7 due to changes in search responses

2020-01-15 Thread Joseph Gresock (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock reassigned NIFI-6458:


Assignee: Joseph Gresock

> ElasticSearchClientServiceImpl does not support Elasticsearch 7 due to changes 
> in search responses
> -
>
> Key: NIFI-6458
> URL: https://issues.apache.org/jira/browse/NIFI-6458
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Affects Versions: 1.9.2
> Environment: NiFi 1.9.2, Elasticsearch 7.1.1
>Reporter: Yury Sergeev
>Assignee: Joseph Gresock
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Due to the changes in Elasticsearch 7, it now returns a map, not an Integer (see 
> 'total.value' in the Elasticsearch 7 response), but the code in 
> ElasticSearchClientServiceImpl.java expects to process an integer.
> line 264:
> int count = (Integer)hitsParent.get("total");
> Elastic 7 response example:
> {
> "_shards": ...
> "timed_out": false,
> "took": 100,
> "hits": {
> "max_score": 1.0,
> "total" : {
> "value": 2048,
> "relation": "eq"  
> },
> "hits": ...
> }
> }
> Elastic 6 response example:
> {
>   "took" : 63,
>   "timed_out" : false,
>   "_shards" : {
> "total" : 5,
> "successful" : 5,
> "skipped" : 0,
> "failed" : 0
>   },
>   "hits" : {
> "total" : 1000,
> "max_score" : null,
> "hits" : [ {
> ...
> Therefore, the ElasticSearchClientServiceImpl method search(String query, 
> String index, String type) throws the following exception:
> java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to 
> java.lang.Integer: java.lang.ClassCastException: java.util.LinkedHashMap 
> cannot be cast to java.lang.Integer
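
One version-tolerant way to read `hits.total` would be to branch on the runtime type, along these lines (an illustrative sketch; `extractTotal` is a hypothetical helper, not the actual NiFi change):

```java
import java.util.Map;

public class TotalHits {
    // ES 6 returns hits.total as a number; ES 7 returns an object
    // like {"value": 2048, "relation": "eq"}. Handle both shapes.
    static int extractTotal(Map<String, ?> hitsParent) {
        Object total = hitsParent.get("total");
        if (total instanceof Number) {            // ES 6.x and earlier
            return ((Number) total).intValue();
        }
        if (total instanceof Map) {               // ES 7.x and later
            Object value = ((Map<?, ?>) total).get("value");
            return ((Number) value).intValue();
        }
        throw new IllegalStateException("Unexpected hits.total: " + total);
    }

    public static void main(String[] args) {
        // ES 6-style response fragment
        System.out.println(extractTotal(Map.of("total", 1000)));  // prints 1000
        // ES 7-style response fragment
        System.out.println(extractTotal(
                Map.of("total", Map.of("value", 2048, "relation", "eq"))));  // prints 2048
    }
}
```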





[jira] [Assigned] (NIFI-6404) PutElasticsearchHttp: Remove _type as being compulsory

2020-01-15 Thread Joseph Gresock (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock reassigned NIFI-6404:


Assignee: Joseph Gresock

> PutElasticsearchHttp: Remove _type as being compulsory
> --
>
> Key: NIFI-6404
> URL: https://issues.apache.org/jira/browse/NIFI-6404
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.9.2
> Environment: Elasticsearch 7.x
>Reporter: David Vassallo
>Assignee: Joseph Gresock
>Priority: Major
>
> In ES 7.x and above, the document "type" is no longer compulsory and is in fact 
> deprecated. When using the 1.9.2 version of PutElasticsearchHttp with ES 
> v7.2, it still works; however, you'll see the following warning in the HTTP response:
>  
> {{HTTP/1.1 200 OK}}
>  *{{Warning: 299 Elasticsearch-7.2.0-508c38a "[types removal] Specifying 
> types in bulk requests is deprecated."}}*
>  {{content-type: application/json; charset=UTF-8}}
>  
> The fix is relatively straightforward:
>  * In *PutElasticsearchHttp.java*, remove the requirement of a compulsory 
> "Type" property:
> {code:java}
> public static final PropertyDescriptor TYPE = new PropertyDescriptor.Builder()
>  .name("put-es-type")
>  .displayName("Type")
> .description("The type of this document (used by Elasticsearch < 7.0 for indexing and searching). Leave empty for ES >= 7.0") // <-
>  .required(false) // <- CHANGE
>  .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
>  .addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)
>  .build();
> {code}
>  
>  * In *AbstractElasticsearchHttpProcessor.java*, check for the presence of 
> "docType". If not present, assume Elasticsearch 7.x or above and omit it from 
> the bulk API URL:
>  
> {code:java}
> protected void buildBulkCommand(StringBuilder sb, String index, String docType, String indexOp, String id, String jsonString) {
> if (indexOp.equalsIgnoreCase("index")) {
> sb.append("{\"index\": { \"_index\": \"");
> sb.append(StringEscapeUtils.escapeJson(index));
> if (!(StringUtils.isEmpty(docType) | docType == null)){ // <- CHANGE START
> sb.append("\", \"_type\": \"");
> sb.append(StringEscapeUtils.escapeJson(docType));
> sb.append("\"");
> }// <- CHANGE END
> if (!StringUtils.isEmpty(id)) { 
> sb.append(", \"_id\": \"");
> sb.append(StringEscapeUtils.escapeJson(id));
> sb.append("\"");
> } 
> sb.append("}}\n");
> sb.append(jsonString);
> sb.append("\n");
> } else if (indexOp.equalsIgnoreCase("upsert") || indexOp.equalsIgnoreCase("update")) {
> sb.append("{\"update\": { \"_index\": \"");
> sb.append(StringEscapeUtils.escapeJson(index));
> sb.append("\", \"_type\": \"");
> sb.append(StringEscapeUtils.escapeJson(docType));
> sb.append("\", \"_id\": \"");
> sb.append(StringEscapeUtils.escapeJson(id));
> sb.append("\" }\n");
> sb.append("{\"doc\": ");
> sb.append(jsonString);
> sb.append(", \"doc_as_upsert\": ");
> sb.append(indexOp.equalsIgnoreCase("upsert"));
> sb.append(" }\n");
> } else if (indexOp.equalsIgnoreCase("delete")) {
> sb.append("{\"delete\": { \"_index\": \"");
> sb.append(StringEscapeUtils.escapeJson(index));
> sb.append("\", \"_type\": \"");
> sb.append(StringEscapeUtils.escapeJson(docType));
> sb.append("\", \"_id\": \"");
> sb.append(StringEscapeUtils.escapeJson(id));
> sb.append("\" }\n");
> }
> }
> {code}
>  
>  * The *TestPutElasticsearchHttp.java* test file needs to be updated to 
> reflect that a request without a type is now valid (it is currently marked as 
> invalid)





[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #711: MINIFICPP-1120 - clarify C2.md

2020-01-15 Thread GitBox
szaszm commented on a change in pull request #711: MINIFICPP-1120 - clarify 
C2.md
URL: https://github.com/apache/nifi-minifi-cpp/pull/711#discussion_r366979229
 
 

 ##
 File path: C2.md
 ##
 @@ -75,14 +75,14 @@ an alternate key, but you are encouraged to switch your 
configuration options as

nifi.c2.rest.url.ack=http://localhost:10080/minifi-c2-api/c2-protocol/acknowledge
 
 Review comment:
   How about the version in 08f8188 ?




[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #711: MINIFICPP-1120 - clarify C2.md

2020-01-15 Thread GitBox
arpadboda commented on a change in pull request #711: MINIFICPP-1120 - clarify 
C2.md
URL: https://github.com/apache/nifi-minifi-cpp/pull/711#discussion_r366922124
 
 

 ##
 File path: C2.md
 ##
 @@ -75,14 +75,14 @@ an alternate key, but you are encouraged to switch your 
configuration options as

nifi.c2.rest.url.ack=http://localhost:10080/minifi-c2-api/c2-protocol/acknowledge
 
 Review comment:
   I would suggest somehow showing here that these are only example addresses 
and that you should insert your own, to make sure no one copy-pastes them as-is. 




[GitHub] [nifi] tpalfy commented on a change in pull request #3979: NIFI-7009: Atlas reporting task retrieves only the active flow compon…

2020-01-15 Thread GitBox
tpalfy commented on a change in pull request #3979: NIFI-7009: Atlas reporting 
task retrieves only the active flow compon…
URL: https://github.com/apache/nifi/pull/3979#discussion_r366877708
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-atlas-bundle/nifi-atlas-reporting-task/src/main/java/org/apache/nifi/atlas/NiFiAtlasClient.java
 ##
 @@ -230,6 +231,35 @@ public NiFiFlow fetchNiFiFlow(String rootProcessGroupId, 
String clusterName) thr
 return nifiFlow;
 }
 
+/**
+ * Retrieves the flow components of type {@code componentType} from the Atlas server.
+ * Deleted components will be filtered out before calling Atlas.
+ * Atlas object ids will be initialized with all the attributes (guid, type, unique attributes) in order to be able
+ * to match ids retrieved from Atlas (having a guid) and ids created by the reporting task (not having a guid yet).
+ *
+ * @param componentType Atlas type of the flow component (nifi_flow_path, nifi_queue, nifi_input_port, nifi_output_port)
+ * @param referredEntities referred entities of the flow entity (returned when the flow is fetched) containing the basic data (id, status) of the flow components
+ * @return flow component entities mapped to their object ids
+ */
+private Map<AtlasObjectId, AtlasEntity> fetchFlowComponents(String componentType, Map<String, AtlasEntity> referredEntities) {
+    return referredEntities.values().stream()
+        .filter(referredEntity -> referredEntity.getTypeName().equals(componentType))
+        .filter(referredEntity -> referredEntity.getStatus() == AtlasEntity.Status.ACTIVE)
+        .map(referredEntity -> {
+            final Map<String, Object> uniqueAttributes = Collections.singletonMap(ATTR_QUALIFIED_NAME, referredEntity.getAttribute(ATTR_QUALIFIED_NAME));
+            final AtlasObjectId id = new AtlasObjectId(referredEntity.getGuid(), componentType, uniqueAttributes);
+            try {
+                final AtlasEntity.AtlasEntityWithExtInfo fetchedEntityExt = searchEntityDef(id);
+                return new Tuple<>(id, fetchedEntityExt.getEntity());
+            } catch (AtlasServiceException e) {
+                logger.warn("Failed to search entity by id {}, due to {}", id, e);
+                return null;
 
 Review comment:
   We only WARN if a component cannot be retrieved from Atlas, then move on. We 
don't end up with an incomplete flow, because we will try to recreate the (only 
seemingly) missing components?




[GitHub] [nifi] tpalfy commented on a change in pull request #3979: NIFI-7009: Atlas reporting task retrieves only the active flow compon…

2020-01-15 Thread GitBox
tpalfy commented on a change in pull request #3979: NIFI-7009: Atlas reporting 
task retrieves only the active flow compon…
URL: https://github.com/apache/nifi/pull/3979#discussion_r366875871
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-atlas-bundle/nifi-atlas-reporting-task/src/main/java/org/apache/nifi/atlas/NiFiAtlasClient.java
 ##
 @@ -230,6 +231,35 @@ public NiFiFlow fetchNiFiFlow(String rootProcessGroupId, 
String clusterName) thr
 return nifiFlow;
 }
 
+/**
+ * Retrieves the flow components of type {@code componentType} from the Atlas server.
+ * Deleted components will be filtered out before calling Atlas.
+ * Atlas object ids will be initialized with all the attributes (guid, type, unique attributes) in order to be able
+ * to match ids retrieved from Atlas (having a guid) and ids created by the reporting task (not having a guid yet).
+ *
+ * @param componentType Atlas type of the flow component (nifi_flow_path, nifi_queue, nifi_input_port, nifi_output_port)
+ * @param referredEntities referred entities of the flow entity (returned when the flow is fetched) containing the basic data (id, status) of the flow components
+ * @return flow component entities mapped to their object ids
+ */
+private Map<AtlasObjectId, AtlasEntity> fetchFlowComponents(String componentType, Map<String, AtlasEntity> referredEntities) {
+    return referredEntities.values().stream()
+        .filter(referredEntity -> referredEntity.getTypeName().equals(componentType))
+        .filter(referredEntity -> referredEntity.getStatus() == AtlasEntity.Status.ACTIVE)
+        .map(referredEntity -> {
+            final Map<String, Object> uniqueAttributes = Collections.singletonMap(ATTR_QUALIFIED_NAME, referredEntity.getAttribute(ATTR_QUALIFIED_NAME));
+            final AtlasObjectId id = new AtlasObjectId(referredEntity.getGuid(), componentType, uniqueAttributes);
+            try {
+                final AtlasEntity.AtlasEntityWithExtInfo fetchedEntityExt = searchEntityDef(id);
 
 Review comment:
   Wondering if we could set `ignoreRelationship` to `true` for this 
`atlasClient` call (in `searchEntityDef`).
   Not sure if it matters that much.




[GitHub] [nifi] tpalfy commented on a change in pull request #3979: NIFI-7009: Atlas reporting task retrieves only the active flow compon…

2020-01-15 Thread GitBox
tpalfy commented on a change in pull request #3979: NIFI-7009: Atlas reporting 
task retrieves only the active flow compon…
URL: https://github.com/apache/nifi/pull/3979#discussion_r366927947
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-atlas-bundle/nifi-atlas-reporting-task/src/main/java/org/apache/nifi/atlas/NiFiAtlasClient.java
 ##
 @@ -201,12 +202,12 @@ public NiFiFlow fetchNiFiFlow(String rootProcessGroupId, String clusterName) thr
  nifiFlow.setUrl(toStr(attributes.get(ATTR_URL)));
  nifiFlow.setDescription(toStr(attributes.get(ATTR_DESCRIPTION)));
 
 -nifiFlow.getQueues().putAll(toQualifiedNameIds(toAtlasObjectIds(nifiFlowEntity.getAttribute(ATTR_QUEUES))));
 -nifiFlow.getRootInputPortEntities().putAll(toQualifiedNameIds(toAtlasObjectIds(nifiFlowEntity.getAttribute(ATTR_INPUT_PORTS))));
 -nifiFlow.getRootOutputPortEntities().putAll(toQualifiedNameIds(toAtlasObjectIds(nifiFlowEntity.getAttribute(ATTR_OUTPUT_PORTS))));
 +nifiFlow.getQueues().putAll(fetchFlowComponents(TYPE_NIFI_QUEUE, nifiFlowReferredEntities));
 +nifiFlow.getRootInputPortEntities().putAll(fetchFlowComponents(TYPE_NIFI_INPUT_PORT, nifiFlowReferredEntities));
 +nifiFlow.getRootOutputPortEntities().putAll(fetchFlowComponents(TYPE_NIFI_OUTPUT_PORT, nifiFlowReferredEntities));
 
  final Map flowPaths = nifiFlow.getFlowPaths();
 -final Map<AtlasObjectId, AtlasEntity> flowPathEntities = toQualifiedNameIds(toAtlasObjectIds(attributes.get(ATTR_FLOW_PATHS)));
 +final Map<AtlasObjectId, AtlasEntity> flowPathEntities = fetchFlowComponents(TYPE_NIFI_FLOW_PATH, nifiFlowReferredEntities);
 
  for (AtlasEntity flowPathEntity : flowPathEntities.values()) {
 
 Review comment:
   Feels strange to leave the handling of the flowpaths as it is. We retrieve 
only the `ACTIVE` ones, but we discard their `referredEntities`. So when it 
comes to their inputs/outputs, we fall back to the old logic and retrieve them 
all and filter out the `DELETED` ones afterwards.
   
   Even more, it seems to me that if we didn't set `minExtInfo` to `true` 
but left it at `false`, we could get away with a single REST call, because in 
that case each of the `referredEntities` would contain `relationshipAttributes` 
that also have a `guid` and an `entityStatus` (`ACTIVE` or `DELETED`) attribute.




[jira] [Closed] (MINIFICPP-1117) minifi::Exception should be nothrow copyable

2020-01-15 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz closed MINIFICPP-1117.
---

> minifi::Exception should be nothrow copyable
> 
>
> Key: MINIFICPP-1117
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1117
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Minor
> Fix For: 0.8.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {{minifi::Exception}} has non-static data members of type {{std::string}} 
> which can throw on copy. This makes the implicit copy ctor of 
> {{minifi::Exception}} potentially throwing.
>  [Exception objects must be nothrow copy 
> constructible|https://wiki.sei.cmu.edu/confluence/display/cplusplus/ERR60-CPP.+Exception+objects+must+be+nothrow+copy+constructible]
>  





[GitHub] [nifi-minifi-cpp] szaszm opened a new pull request #711: MINIFICPP-1120 - clarify C2.md

2020-01-15 Thread GitBox
szaszm opened a new pull request #711: MINIFICPP-1120 - clarify C2.md
URL: https://github.com/apache/nifi-minifi-cpp/pull/711
 
 
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[jira] [Created] (MINIFICPP-1120) Clarify C2.md

2020-01-15 Thread Marton Szasz (Jira)
Marton Szasz created MINIFICPP-1120:
---

 Summary: Clarify C2.md
 Key: MINIFICPP-1120
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1120
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Marton Szasz
Assignee: Marton Szasz
 Fix For: 0.8.0


The minifi C2 configuration presented in {{C2.md}} selects {{CoapProtocol}} as 
the c2 protocol class, then proceeds to configure {{RESTSender}} properties. 
This is confusing. This issue is about presenting a usable example in C2.md and 
clarifying a few details.





[jira] [Updated] (NIFI-6987) Remove "Claim Management" section from Admin Guide

2020-01-15 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-6987:
-
Fix Version/s: 1.12.0

> Remove "Claim Management" section from Admin Guide
> --
>
> Key: NIFI-6987
> URL: https://issues.apache.org/jira/browse/NIFI-6987
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew M. Lim
>Assignee: Andrew M. Lim
>Priority: Minor
> Fix For: 1.12.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This section is outdated and should be removed.





[jira] [Resolved] (NIFI-6987) Remove "Claim Management" section from Admin Guide

2020-01-15 Thread Scott Aslan (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan resolved NIFI-6987.
---
Resolution: Fixed

> Remove "Claim Management" section from Admin Guide
> --
>
> Key: NIFI-6987
> URL: https://issues.apache.org/jira/browse/NIFI-6987
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew M. Lim
>Assignee: Andrew M. Lim
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This section is outdated and should be removed.





[jira] [Commented] (NIFI-6987) Remove "Claim Management" section from Admin Guide

2020-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17016026#comment-17016026
 ] 

ASF subversion and git services commented on NIFI-6987:
---

Commit cdbcc4725cb1f1b67f527dabb1ae7cd5dcdaa541 in nifi's branch 
refs/heads/master from Andrew Lim
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=cdbcc47 ]

NIFI-6987 Remove Claim Management section from Admin Guide

This closes #3964

Signed-off-by: Scott Aslan 


> Remove "Claim Management" section from Admin Guide
> --
>
> Key: NIFI-6987
> URL: https://issues.apache.org/jira/browse/NIFI-6987
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew M. Lim
>Assignee: Andrew M. Lim
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This section is outdated and should be removed.





[GitHub] [nifi] asfgit closed pull request #3964: NIFI-6987 Remove Claim Management section from Admin Guide

2020-01-15 Thread GitBox
asfgit closed pull request #3964: NIFI-6987 Remove Claim Management section 
from Admin Guide
URL: https://github.com/apache/nifi/pull/3964
 
 
   




[jira] [Updated] (MINIFICPP-1119) Unify socket implementation of different platforms

2020-01-15 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-1119:
--
Summary: Unify socket implementation of different platforms  (was: Unify 
socket implementation of different )

> Unify socket implementation of different platforms
> --
>
> Key: MINIFICPP-1119
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1119
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.7.0
>Reporter: Arpad Boda
>Assignee: Marton Szasz
>Priority: Major
> Fix For: 0.8.0
>
>
> Currently we have different cpp and h files for socket implementations for 
> Win and *nix platforms. 
> As the API is the same, at least the latter should go away; we should have as 
> little copy-pasted code as possible. 





[jira] [Created] (MINIFICPP-1119) Unify socket implementation of different

2020-01-15 Thread Arpad Boda (Jira)
Arpad Boda created MINIFICPP-1119:
-

 Summary: Unify socket implementation of different 
 Key: MINIFICPP-1119
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1119
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Affects Versions: 0.7.0
Reporter: Arpad Boda
Assignee: Marton Szasz
 Fix For: 0.8.0


Currently we have different cpp and h files for socket implementations for Win 
and *nix platforms. 
As the API is the same, at least the latter should go away; we should have as 
little copy-pasted code as possible. 





[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #708: MINIFICPP-1118 - MiNiFi C++ on Windows stops running in a secure env …

2020-01-15 Thread GitBox
arpadboda closed pull request #708: MINIFICPP-1118 - MiNiFi C++ on Windows 
stops running in a secure env …
URL: https://github.com/apache/nifi-minifi-cpp/pull/708
 
 
   




[GitHub] [nifi] m-hogue commented on issue #3939: NIFI-6919: Added relationship attribute to DistributeLoad

2020-01-15 Thread GitBox
m-hogue commented on issue #3939: NIFI-6919: Added relationship attribute to 
DistributeLoad
URL: https://github.com/apache/nifi/pull/3939#issuecomment-574668339
 
 
   Thanks so much, @mattyb149! 




[jira] [Created] (NIFI-7021) Release Management for Apache NiFi 1.11.0

2020-01-15 Thread Joe Witt (Jira)
Joe Witt created NIFI-7021:
--

 Summary: Release Management for Apache NiFi 1.11.0
 Key: NIFI-7021
 URL: https://issues.apache.org/jira/browse/NIFI-7021
 Project: Apache NiFi
  Issue Type: Task
  Components: Tools and Build
Reporter: Joe Witt
Assignee: Joe Witt
 Fix For: 1.11.0








[GitHub] [nifi-minifi] asfgit closed pull request #182: MINIFI-522 : Fixed Access denied to: http://jcenter.bintray.com which…

2020-01-15 Thread GitBox
asfgit closed pull request #182: MINIFI-522 : Fixed Access denied to: 
http://jcenter.bintray.com which…
URL: https://github.com/apache/nifi-minifi/pull/182
 
 
   




[GitHub] [nifi-minifi] apiri commented on issue #182: MINIFI-522 : Fixed Access denied to: http://jcenter.bintray.com which…

2020-01-15 Thread GitBox
apiri commented on issue #182: MINIFI-522 : Fixed Access denied to: 
http://jcenter.bintray.com which…
URL: https://github.com/apache/nifi-minifi/pull/182#issuecomment-574661232
 
 
   reviewing




[jira] [Updated] (NIFI-7017) PrometheusReportingTask does not report nested process group status

2020-01-15 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-7017:
---
Fix Version/s: 1.11.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

+1 merged to master

> PrometheusReportingTask does not report nested process group status
> ---
>
> Key: NIFI-7017
> URL: https://issues.apache.org/jira/browse/NIFI-7017
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The clear() command added in NIFI-6715 causes the registry to be reset on 
> every call to createNiFiMetrics. In the case of nested process groups this 
> method is a recursive call, so the registry keeps getting cleared except for 
> the last visited process group.
> The clear() should only be done when createNiFiMetrics is called on the root 
> process group, in order to clean up the registry before adding all the 
> components' metrics.
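
The recursion bug described above can be sketched as follows. This is a minimal illustration, not the actual NiFi code: the class and method names (`ProcessGroup`, `Registry`, `createMetrics`) are stand-ins for the real `PrometheusMetricsUtil`/`CollectorRegistry` types, and the registry is modeled as a plain list of metric names.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the NIFI-7017 fix: clear the registry only at the recursion
// root, so metrics for nested process groups are not wiped out.
// All names here are illustrative, not the real NiFi API.
public class NestedMetricsSketch {
    static class ProcessGroup {
        final String name;
        final List<ProcessGroup> children = new ArrayList<>();
        ProcessGroup(String name) { this.name = name; }
    }

    static class Registry {
        final List<String> metrics = new ArrayList<>();
        void clear() { metrics.clear(); }
        void add(String m) { metrics.add(m); }
    }

    // Buggy shape: clear() runs on every recursive call, so only the
    // last-visited group's metrics survive.
    static void createMetricsBuggy(Registry reg, ProcessGroup pg) {
        reg.clear();
        reg.add(pg.name);
        for (ProcessGroup child : pg.children) {
            createMetricsBuggy(reg, child);
        }
    }

    // Fixed shape: clear only when called on the root process group.
    static void createMetrics(Registry reg, ProcessGroup pg, boolean isRoot) {
        if (isRoot) {
            reg.clear();
        }
        reg.add(pg.name);
        for (ProcessGroup child : pg.children) {
            createMetrics(reg, child, false);
        }
    }

    public static void main(String[] args) {
        ProcessGroup root = new ProcessGroup("root");
        ProcessGroup nested = new ProcessGroup("nested");
        root.children.add(nested);

        Registry reg = new Registry();
        createMetricsBuggy(reg, root);
        System.out.println("buggy: " + reg.metrics);   // buggy: [nested]

        createMetrics(reg, root, true);
        System.out.println("fixed: " + reg.metrics);   // fixed: [root, nested]
    }
}
```

With the buggy recursion, the nested call's clear() discards the root group's metrics, matching the symptom in the report; guarding clear() with the root check preserves the whole tree.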





[jira] [Commented] (NIFI-7017) PrometheusReportingTask does not report nested process group status

2020-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015972#comment-17015972
 ] 

ASF subversion and git services commented on NIFI-7017:
---

Commit bb699e749755432837b3a4132372f635d7480be3 in nifi's branch 
refs/heads/master from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=bb699e7 ]

NIFI-7017: This closes #3988. Fixed PrometheusReportingTask for nested PG status

Signed-off-by: Joe Witt 


> PrometheusReportingTask does not report nested process group status
> ---
>
> Key: NIFI-7017
> URL: https://issues.apache.org/jira/browse/NIFI-7017
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The clear() command added in NIFI-6715 causes the registry to be reset on 
> every call to createNiFiMetrics. In the case of nested process groups this 
> method is a recursive call, so the registry keeps getting cleared except for 
> the last visited process group.
> The clear() should only be done when createNiFiMetrics is called on the root 
> process group, in order to clean up the registry before adding all the 
> components' metrics.




