[jira] [Commented] (NIFI-2801) Add information about relevant version to *Kafka processor documentation
[ https://issues.apache.org/jira/browse/NIFI-2801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15512189#comment-15512189 ]

Andy LoPresto commented on NIFI-2801:
-------------------------------------

I assigned it to @andrewmlim only because the mobile app requires an assignee to submit the issue. Feel free to reassign (or volunteer).

> Add information about relevant version to *Kafka processor documentation
> ------------------------------------------------------------------------
>
>                 Key: NIFI-2801
>                 URL: https://issues.apache.org/jira/browse/NIFI-2801
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Documentation & Website
>    Affects Versions: 1.0.0, 0.7.0
>            Reporter: Andy LoPresto
>            Assignee: Andrew Lim
>            Priority: Minor
>              Labels: Beginner Documentation Kafka
>             Fix For: 1.1.0, 0.8.0
>
> A frequent obstacle for new users is the variety of processors for communicating
> with Kafka. Due to incompatibilities between different versions of Kafka,
> there are currently 6 processors (3 to push data and 3 to pull data), and
> each "pair" targets a specific version. We should add text to the
> documentation/description of each to clarify explicitly which version of
> Kafka the processor targets, and what its "complementary" processor is named.
> * <=0.8 -- PutKafka/GetKafka
> * 0.9 -- PublishKafka/ConsumeKafka
> * 0.10 -- PublishKafka_0_10/ConsumeKafka_0_10

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (NIFI-2801) Add information about relevant version to *Kafka processor documentation
Andy LoPresto created NIFI-2801:
-----------------------------------

             Summary: Add information about relevant version to *Kafka processor documentation
                 Key: NIFI-2801
                 URL: https://issues.apache.org/jira/browse/NIFI-2801
             Project: Apache NiFi
          Issue Type: Improvement
          Components: Documentation & Website
    Affects Versions: 0.7.0, 1.0.0
            Reporter: Andy LoPresto
            Assignee: Andrew Lim
            Priority: Minor
             Fix For: 1.1.0, 0.8.0

A frequent obstacle for new users is the variety of processors for communicating with Kafka. Due to incompatibilities between different versions of Kafka, there are currently 6 processors (3 to push data and 3 to pull data), and each "pair" targets a specific version. We should add text to the documentation/description of each to clarify explicitly which version of Kafka the processor targets, and what its "complementary" processor is named.

* <=0.8 -- PutKafka/GetKafka
* 0.9 -- PublishKafka/ConsumeKafka
* 0.10 -- PublishKafka_0_10/ConsumeKafka_0_10
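The version-to-processor mapping requested in the ticket can be summarized as a small lookup table. The sketch below is purely illustrative plain Java (not NiFi code); the pair names are taken verbatim from the issue description.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: the Kafka-version-to-processor pairing from NIFI-2801,
// expressed as a lookup table (publishing processor first, consuming second).
public class KafkaProcessorPairs {
    public static Map<String, String[]> pairs() {
        Map<String, String[]> m = new LinkedHashMap<>();
        m.put("<=0.8", new String[] {"PutKafka", "GetKafka"});
        m.put("0.9", new String[] {"PublishKafka", "ConsumeKafka"});
        m.put("0.10", new String[] {"PublishKafka_0_10", "ConsumeKafka_0_10"});
        return m;
    }
}
```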
[GitHub] nifi-minifi pull request #38: MINIFI-86 Adding explicit checks for any unsup...
GitHub user JPercivall opened a pull request:

    https://github.com/apache/nifi-minifi/pull/38

    MINIFI-86 Adding explicit checks for any unsupported components when t… …transforming a template

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/JPercivall/nifi-minifi MINIFI-86

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi-minifi/pull/38.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #38

commit 87dafae0f4675b360feb724bfb1a7038971699b0
Author: Joseph Percivall
Date:   2016-09-21T23:22:38Z

    MINIFI-86 Adding explicit checks for any unsupported components when transforming a template

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.
---
[GitHub] nifi-minifi pull request #37: MINIFI-110 Adding ' ' (space) to the list of c...
GitHub user JPercivall opened a pull request:

    https://github.com/apache/nifi-minifi/pull/37

    MINIFI-110 Adding ' ' (space) to the list of characters that will cau… …se parsing errors when using the 'flowStatus' command

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/JPercivall/nifi-minifi MINIFI-110

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi-minifi/pull/37.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #37

commit d5cb5cc9cafd58ea6dad1c85bf449cd80dacabf1
Author: Joseph Percivall
Date:   2016-09-21T22:34:33Z

    MINIFI-110 Adding ' ' (space) to the list of characters that will cause parsing errors when using the 'flowStatus' command
[GitHub] nifi-minifi pull request #36: MINIFI-105 Fixing BootstrapCodec tertiary comm...
GitHub user JPercivall opened a pull request:

    https://github.com/apache/nifi-minifi/pull/36

    MINIFI-105 Fixing BootstrapCodec tertiary command order of operations

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/JPercivall/nifi-minifi MINIFI-105

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi-minifi/pull/36.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #36

commit c4a855b5d6a88e82131165c991b88ad9b7663023
Author: Joseph Percivall
Date:   2016-09-21T22:30:43Z

    MINIFI-105 Fixing BootstrapCodec tertiary command order of operations
[GitHub] nifi-minifi pull request #35: MINIFI-59 Removing exclude statement for Quart...
GitHub user JPercivall opened a pull request:

    https://github.com/apache/nifi-minifi/pull/35

    MINIFI-59 Removing exclude statement for Quartz to fix CRON support

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/JPercivall/nifi-minifi MINIFI-59

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi-minifi/pull/35.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #35

commit d0ce2bedc62bddedca3cfb95b8d6824c9e58a811
Author: Joseph Percivall
Date:   2016-09-21T22:19:17Z

    MINIFI-59 Removing exclude statement for Quartz to fix CRON support
[jira] [Updated] (NIFI-2800) GetFile Processor overrides last slash in filename to always be /
[ https://issues.apache.org/jira/browse/NIFI-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christopher Gambino updated NIFI-2800:
--------------------------------------
    Description: 
When using the GetFile processor all flowfiles created have an "Absolute.Path" attribute. This absolute path is taken from the directory field of the GetFile configuration. The last slash in the directory is always overridden to be '/' even when the user puts "\" in the directory structure.

Example:
Configuration - "Directory": "C:\Samplepath\SampleFolder\"
Output Flowfile - "Absolute.Path": "C:\Samplepath\SampleFolder/"

I believe the intended function of this should not replace the last \ with a /. This creates issues when using the attribute in later stages with other processors such as ExecuteStreamProcess.

  was:
When using the GetFile processor all flowfiles created have an "Absolute.Path" attribute. This absolute path is taken from the directory field of the GetFile configuration. The last slash in the directory is always over-ridden to be '/' even when the user puts "\" in the directory structure. Example: Configuraton - "Directory": "C:\Samplepath\SampleFolder\" Output Flowfile - "Absolute.Path": "C:\Samplepath\SampleFolder/" I believe the intended function of this should not replace the last \ with a /. This creates issues when using the attribute in later stages with other processors such as "ExecuteStreamProcess.

> GetFile Processor overrides last slash in filename to always be /
> -----------------------------------------------------------------
>
>                 Key: NIFI-2800
>                 URL: https://issues.apache.org/jira/browse/NIFI-2800
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.0.0
>         Environment: This was discovered on a windows machine running nifi server locally
>            Reporter: Christopher Gambino
>            Priority: Minor
>
> When using the GetFile processor all flowfiles created have an "Absolute.Path" attribute. This absolute path is taken from the directory field of the GetFile configuration. The last slash in the directory is always overridden to be '/' even when the user puts "\" in the directory structure.
>
> Example:
> Configuration - "Directory": "C:\Samplepath\SampleFolder\"
> Output Flowfile - "Absolute.Path": "C:\Samplepath\SampleFolder/"
>
> I believe the intended function of this should not replace the last \ with a /. This creates issues when using the attribute in later stages with other processors such as ExecuteStreamProcess.
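The report boils down to how the trailing separator of the configured directory is emitted into the "Absolute.Path" attribute. The following is a minimal, self-contained sketch contrasting the reported behavior with what the reporter expects; it is illustrative plain Java, not the actual GetFile source.

```java
// Illustrative sketch only (not the real GetFile code): how a trailing
// directory separator could be preserved instead of always becoming '/'.
public class PathSeparatorFix {
    // Reported behavior: whatever separator ends the directory, the
    // emitted path always ends with '/'.
    public static String reportedBehavior(String directory) {
        String trimmed = directory.replaceAll("[/\\\\]+$", "");
        return trimmed + "/";
    }

    // Expected behavior: keep the separator the user configured.
    public static String expectedBehavior(String directory) {
        char last = directory.charAt(directory.length() - 1);
        if (last == '/' || last == '\\') {
            return directory; // already ends with a separator; leave it alone
        }
        return directory + java.io.File.separator;
    }
}
```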
[GitHub] nifi pull request #721: Nifi 2398 - Apache Ignite Get Processor
Github user pvillard31 commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/721#discussion_r79940293

--- Diff: nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/GetIgniteCache.java ---
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.ignite.cache;
+
+import java.io.ByteArrayInputStream;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+
+/**
+ * Get cache processors which gets byte array for the key from Ignite cache and set the array
+ * as the FlowFile content.
+ */
+@EventDriven
+@SupportsBatching
+@Tags({ "Ignite", "get", "read", "cache", "key" })
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Get the byte array from Ignite Cache and adds it as the content of a FlowFile." +
+    "The processor uses the value of FlowFile attribute (Ignite cache entry key) as the cache key lookup. " +
+    "If the entry corresponding to the key is not found in the cache an error message is associated with the FlowFile " +
+    "Note - The Ignite Kernel periodically outputs node performance statistics to the logs. This message " +
+    " can be turned off by setting the log level for logger 'org.apache.ignite' to WARN in the logback.xml configuration file.")
+@WritesAttributes({
+    @WritesAttribute(attribute = GetIgniteCache.IGNITE_GET_FAILED_REASON_ATTRIBUTE_KEY, description = "The reason for getting entry from cache"),
+    @WritesAttribute(attribute = GetIgniteCache.IGNITE_GET_FAILED_MISSING_KEY_MESSAGE, description = "The FlowFile key attribute was missing.")
+})
+@SeeAlso({PutIgniteCache.class})
+public class GetIgniteCache extends AbstractIgniteCacheProcessor {
+
+    /** Flow file attribute keys and messages */
+    public static final String IGNITE_GET_FAILED_REASON_ATTRIBUTE_KEY = "ignite.cache.get.failed.reason";
+    public static final String IGNITE_GET_FAILED_MISSING_KEY_MESSAGE = "The FlowFile key attribute was missing";
+    public static final String IGNITE_GET_FAILED_MISSING_ENTRY_MESSAGE = "The cache byte array entry was null or zero length";
+    public static final String IGNITE_GET_FAILED_MESSAGE_PREFIX = "The cache request failed because of ";
+
+    static {
+        descriptors = new ArrayList<>();
--- End diff --

You are sharing a static reference for descriptors in both processors; it results in a race condition and not all properties being present. Have a look at other processors extending an abstract class for an example.
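The race the reviewer describes comes from both concrete processors assigning to a single mutable static field inherited from the abstract parent. A simplified, self-contained sketch of the safer shape is below; the classes and property names are plain-Java stand-ins invented for illustration, not the real NiFi types.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Simplified stand-ins for the pattern under review. The anti-pattern flagged
// in the comment is a single mutable static `descriptors` field on the
// abstract parent, repopulated by each subclass's static block, so the two
// processors race and can observe each other's (or a partial) property list.
abstract class AbstractCacheProcessorSketch {
    public abstract List<String> getDescriptors();
}

// Safer shape: each concrete processor builds its own immutable list once.
class GetCacheProcessorSketch extends AbstractCacheProcessorSketch {
    private static final List<String> DESCRIPTORS;
    static {
        List<String> d = new ArrayList<>();
        d.add("Cache Name");              // hypothetical property names,
        d.add("Cache Entry Identifier");  // for illustration only
        DESCRIPTORS = Collections.unmodifiableList(d);
    }

    @Override
    public List<String> getDescriptors() {
        return DESCRIPTORS;
    }
}
```

This mirrors the convention the reviewer points at: concrete processors each own a private, immutable descriptor list rather than writing into shared static state.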
[GitHub] nifi pull request #721: Nifi 2398 - Apache Ignite Get Processor
Github user pvillard31 commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/721#discussion_r79868592

--- Diff: nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/GetIgniteCache.java ---
@@ -0,0 +1,119 @@
[... quoted diff context identical to the previous comment, down to the line under review ...]
+public static final String IGNITE_GET_FAILED_MISSING_KEY_MESSAGE = "The FlowFile key attribute was missing";
--- End diff --

Shouldn't it be a key and not a description (as you did with the other written attribute)?
[GitHub] nifi pull request #721: Nifi 2398 - Apache Ignite Get Processor
Github user pvillard31 commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/721#discussion_r79939789

--- Diff: nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/GetIgniteCache.java ---
@@ -0,0 +1,119 @@
[... quoted diff context identical to the previous comment, down to the line under review ...]
+public static final String IGNITE_GET_FAILED_MISSING_KEY_MESSAGE = "The FlowFile key attribute was missing";
--- End diff --

Shouldn't it be a key instead of a string message here?
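Both review comments make the same point: `@WritesAttribute(attribute = ...)` should be given the attribute *key* that gets written to the FlowFile, while the human-readable message belongs in the attribute's value (or the annotation's description). A hypothetical sketch of that distinction, using plain Java constants rather than the merged code:

```java
// Hypothetical sketch: keep the machine-readable attribute key separate from
// the human-readable failure message stored as that attribute's value.
class IgniteGetAttributeSketch {
    // key written on the FlowFile; this is what @WritesAttribute should name
    public static final String FAILED_REASON_ATTRIBUTE_KEY = "ignite.cache.get.failed.reason";

    // messages stored as the value under that key, never used as keys themselves
    public static final String MISSING_KEY_MESSAGE = "The FlowFile key attribute was missing";
    public static final String MISSING_ENTRY_MESSAGE = "The cache byte array entry was null or zero length";

    // value that would be written under FAILED_REASON_ATTRIBUTE_KEY on failure
    public static String failureReason(boolean keyMissing) {
        return keyMissing ? MISSING_KEY_MESSAGE : MISSING_ENTRY_MESSAGE;
    }
}
```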
[jira] [Commented] (NIFI-2795) Enhance Cluster UI with System Diagnostics
[ https://issues.apache.org/jira/browse/NIFI-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15511235#comment-15511235 ]

ASF GitHub Bot commented on NIFI-2795:
--------------------------------------

Github user jvwing commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/1042#discussion_r79936607

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/cluster/nf-cluster-table.js ---
@@ -24,13 +24,275 @@ nf.ClusterTable = (function () {
      */
     var config = {
         primaryNode: 'Primary Node',
-        clusterCoorindator: 'Cluster Coordinator',
+        clusterCoordinator: 'Cluster Coordinator',
         urls: {
             cluster: '../nifi-api/controller/cluster',
-            nodes: '../nifi-api/controller/cluster/nodes'
-        }
+            nodes: '../nifi-api/controller/cluster/nodes',
+            systemDiagnostics: '../nifi-api/system-diagnostics'
+        },
+        data: [{
+            name: 'cluster',
+            update: refreshClusterData
+        },{
+            name: 'systemDiagnostics',
+            update: refreshSystemDiagnosticsData
+        }
+        ]
     };
+    var commonTableOptions = {
+        forceFitColumns: true,
+        enableTextSelectionOnCells: true,
+        enableCellNavigation: false,
+        enableColumnReorder: false,
+        autoEdit: false,
+        rowHeight: 24
+    };
+
+    var nodesTab = {
+        name: 'Nodes',
+        data: {
+            dataSet: 'cluster',
+            update: updateNodesTableData
+        },
+        tabContentId: 'cluster-nodes-tab-content',
+        tableId: 'cluster-nodes-table',
+        tableColumnModel: createNodeTableColumnModel,
+        tableIdColumn: 'nodeId',
+        tableOptions: commonTableOptions,
+        tableOnClick: nodesTableOnClick,
+        init: commonTableInit,
+        onSort: sort,
+        onTabSelected: onSelectTab,
+        filterOptions: [{
+            text: 'by address',
+            value: 'address'
+        }, {
+            text: 'by status',
+            value: 'status'
+        }]
+    };
+
+    var jvmTab = {
+        name: 'JVM',
+        data: {
+            dataSet: 'systemDiagnostics',
+            update: updateJvmTableData
+        },
+        tabContentId: 'cluster-jvm-tab-content',
+        tableId: 'cluster-jvm-table',
+        tableColumnModel: [
+            {id: 'node', field: 'node', name: 'Node Address', sortable: true, resizable: true},
+            {id: 'heapMax', field: 'maxHeap', name: 'Heap Max', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'heapTotal', field: 'totalHeap', name: 'Heap Total', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'heapUsed', field: 'usedHeap', name: 'Heap Used', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'heapUtilPct', field: 'heapUtilization', name: 'Heap Utilization', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'nonHeapTotal', field: 'totalNonHeap', name: 'Non-Heap Total', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'nonHeapUsed', field: 'usedNonHeap', name: 'Non-Heap Used', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'gcOldGen', field: 'gcOldGen', name: 'G1 Old Generation', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'gcNewGen', field: 'gcNewGen', name: 'G1 Young Generation', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'}
+        ],
+        tableIdColumn: 'id',
+        tableOptions: commonTableOptions,
+        tableOnClick: null,
+        init: commonTableInit,
+        onSort: sort,
+        onTabSelected: onSelectTab,
+        filterOptions: [{
+            text: 'by address',
+            value: 'address'
+        }]
+    };
+
+    var systemTab = {
+        name: 'System',
+        data: {
+            dataSet: 'systemDiagnostics',
+            update: updateSystemTableData
+        },
+        tabContentId: 'cluster-system-tab-content',
+        tableId: 'cluster-system-table',
+        tableColumnModel: [
+            {id: 'node', field: 'node', name: 'Node Address', sortable: true, resizable: true},
[jira] [Commented] (NIFI-2795) Enhance Cluster UI with System Diagnostics
[ https://issues.apache.org/jira/browse/NIFI-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15511233#comment-15511233 ]

ASF GitHub Bot commented on NIFI-2795:
--------------------------------------

Github user jvwing commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/1042#discussion_r79936555

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/cluster/nf-cluster-table.js ---
@@ -377,190 +952,45 @@ nf.ClusterTable = (function () {
         nf.ClusterTable.resetTableSize();
     });

-    // define a custom formatter for the more details column
-    var moreDetailsFormatter = function (row, cell, value, columnDef, dataContext) {
-        return '';
-    };
-
-    // define a custom formatter for the run status column
-    var nodeFormatter = function (row, cell, value, columnDef, dataContext) {
-        return formatNodeAddress(dataContext);
-    };
-
-    // function for formatting the last accessed time
-    var valueFormatter = function (row, cell, value, columnDef, dataContext) {
-        return nf.Common.formatValue(value);
-    };
-
-    // define a custom formatter for the status column
-    var statusFormatter = function (row, cell, value, columnDef, dataContext) {
-        var markup = value;
-        if (dataContext.roles.includes(config.primaryNode)) {
-            value += ', PRIMARY';
-        }
-        if (dataContext.roles.includes(config.clusterCoorindator)) {
-            value += ', COORDINATOR';
-        }
-        return value;
-    };
-
-    var columnModel = [
-        {id: 'moreDetails', name: '', sortable: false, resizable: false, formatter: moreDetailsFormatter, width: 50, maxWidth: 50},
-        {id: 'node', field: 'node', name: 'Node Address', formatter: nodeFormatter, resizable: true, sortable: true},
-        {id: 'activeThreadCount', field: 'activeThreadCount', name: 'Active Thread Count', resizable: true, sortable: true, defaultSortAsc: false},
-        {id: 'queued', field: 'queued', name: 'Queue/Size', resizable: true, sortable: true, defaultSortAsc: false},
-        {id: 'status', field: 'status', name: 'Status', formatter: statusFormatter, resizable: true, sortable: true},
-        {id: 'uptime', field: 'nodeStartTime', name: 'Uptime', formatter: valueFormatter, resizable: true, sortable: true, defaultSortAsc: false},
-        {id: 'heartbeat', field: 'heartbeat', name: 'Last Heartbeat', formatter: valueFormatter, resizable: true, sortable: true, defaultSortAsc: false}
-    ];
-
-    // only allow the admin to modify the cluster
-    if (nf.Common.canModifyController()) {
-        // function for formatting the actions column
-        var actionFormatter = function (row, cell, value, columnDef, dataContext) {
-            var canDisconnect = false;
-            var canConnect = false;
-
-            // determine the current status
-            if (dataContext.status === 'CONNECTED' || dataContext.status === 'CONNECTING') {
-                canDisconnect = true;
-            } else if (dataContext.status === 'DISCONNECTED') {
-                canConnect = true;
-            }
-
-            // return the appropriate markup
-            if (canConnect) {
-                return '';
-            } else if (canDisconnect) {
-                return '';
-            } else {
-                return '';
-            }
-        };
-
-        columnModel.push({id: 'actions', label: '', formatter: actionFormatter, resizable: false, sortable: false, width: 80, maxWidth: 80});
-    }
-
-    var clusterOptions = {
-        forceFitColumns: true,
-        enableTextSelectionOnCells: true,
-        enableCellNavigation: false,
-        enableColumnReorder: false,
-        autoEdit: false,
-        rowHeight: 24
-    };
-
-    // initialize the dataview
-    var clusterData = new Slick.Data.DataView({
-        inlineFilters: false
-    });
-    clusterData.setItems([], 'nodeId');
-    clusterData.setFilterArgs({
-        searchString:
[GitHub] nifi pull request #1042: NIFI-2795 Sys Diagnostics in Cluster UI
Github user jvwing commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/1042#discussion_r79936607

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/cluster/nf-cluster-table.js ---
@@ -24,13 +24,275 @@ nf.ClusterTable = (function () {
[... quoted diff context identical to the same comment mirrored on NIFI-2795 above, continuing into the systemTab column model ...]
+{id: 'node', field: 'node', name: 'Node Address', sortable: true, resizable: true},
+{id: 'processors', field: 'processors', name: 'Processors', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+{id: 'processorLoadAverage', field:
[GitHub] nifi pull request #1042: NIFI-2795 Sys Diagnostics in Cluster UI
Github user jvwing commented on a diff in the pull request: https://github.com/apache/nifi/pull/1042#discussion_r79936555 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/cluster/nf-cluster-table.js --- @@ -377,190 +952,45 @@ nf.ClusterTable = (function () { nf.ClusterTable.resetTableSize(); }); -// define a custom formatter for the more details column -var moreDetailsFormatter = function (row, cell, value, columnDef, dataContext) { -return ''; -}; - -// define a custom formatter for the run status column -var nodeFormatter = function (row, cell, value, columnDef, dataContext) { -return formatNodeAddress(dataContext); -}; - -// function for formatting the last accessed time -var valueFormatter = function (row, cell, value, columnDef, dataContext) { -return nf.Common.formatValue(value); -}; - -// define a custom formatter for the status column -var statusFormatter = function (row, cell, value, columnDef, dataContext) { -var markup = value; -if (dataContext.roles.includes(config.primaryNode)) { -value += ', PRIMARY'; -} -if (dataContext.roles.includes(config.clusterCoorindator)) { -value += ', COORDINATOR'; -} -return value; -}; - -var columnModel = [ -{id: 'moreDetails', name: '', sortable: false, resizable: false, formatter: moreDetailsFormatter, width: 50, maxWidth: 50}, -{id: 'node', field: 'node', name: 'Node Address', formatter: nodeFormatter, resizable: true, sortable: true}, -{id: 'activeThreadCount', field: 'activeThreadCount', name: 'Active Thread Count', resizable: true, sortable: true, defaultSortAsc: false}, -{id: 'queued', field: 'queued', name: 'Queue/Size', resizable: true, sortable: true, defaultSortAsc: false}, -{id: 'status', field: 'status', name: 'Status', formatter: statusFormatter, resizable: true, sortable: true}, -{id: 'uptime', field: 'nodeStartTime', name: 'Uptime', formatter: valueFormatter, resizable: true, sortable: true, defaultSortAsc: false}, -{id: 'heartbeat', field: 
'heartbeat', name: 'Last Heartbeat', formatter: valueFormatter, resizable: true, sortable: true, defaultSortAsc: false} -]; - -// only allow the admin to modify the cluster -if (nf.Common.canModifyController()) { -// function for formatting the actions column -var actionFormatter = function (row, cell, value, columnDef, dataContext) { -var canDisconnect = false; -var canConnect = false; - -// determine the current status -if (dataContext.status === 'CONNECTED' || dataContext.status === 'CONNECTING') { -canDisconnect = true; -} else if (dataContext.status === 'DISCONNECTED') { -canConnect = true; -} - -// return the appropriate markup -if (canConnect) { -return ''; -} else if (canDisconnect) { -return ''; -} else { -return ''; -} -}; - -columnModel.push({id: 'actions', label: '', formatter: actionFormatter, resizable: false, sortable: false, width: 80, maxWidth: 80}); -} - -var clusterOptions = { -forceFitColumns: true, -enableTextSelectionOnCells: true, -enableCellNavigation: false, -enableColumnReorder: false, -autoEdit: false, -rowHeight: 24 -}; - -// initialize the dataview -var clusterData = new Slick.Data.DataView({ -inlineFilters: false -}); -clusterData.setItems([], 'nodeId'); -clusterData.setFilterArgs({ -searchString: getFilterText(), -property: $('#cluster-filter-type').combo('getSelectedOption').value -}); -clusterData.setFilter(filter); - -// initialize the sort -
[jira] [Created] (NIFI-2800) GetFile Processor Appends overrides last slash in filename to always be \
Christopher Gambino created NIFI-2800: - Summary: GetFile Processor Appends overrides last slash in filename to always be \ Key: NIFI-2800 URL: https://issues.apache.org/jira/browse/NIFI-2800 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.0.0 Environment: This was discovered on a Windows machine running a NiFi server locally Reporter: Christopher Gambino Priority: Minor When using the GetFile processor, all flowfiles created have an "Absolute.Path" attribute. This absolute path is taken from the Directory field of the GetFile configuration. The last slash in the directory is always overridden to be '/' even when the user puts "\" in the directory structure. Example: Configuration - "Directory": "C:\Samplepath\SampleFolder\" Output Flowfile - "Absolute.Path": "C:\Samplepath\SampleFolder/" I believe the intended behavior should not replace the last \ with a /. This creates issues when using the attribute in later stages with other processors such as "ExecuteStreamProcess". -- This message was sent by Atlassian JIRA (v6.3.4#6332)
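The behavior the reporter expects — keep whatever separator style the user wrote rather than forcing '/' — can be sketched as below. This is a hypothetical helper, not GetFile's actual code; the method name `ensureTrailingSeparator` is invented for illustration.

```java
public class PathSeparator {
    // Hypothetical helper: ensure a directory path ends with a separator
    // without forcing '/' — reuse the separator style the path already uses.
    static String ensureTrailingSeparator(String dir) {
        if (dir.endsWith("/") || dir.endsWith("\\")) {
            return dir; // already terminated; do not rewrite the separator
        }
        // Prefer the separator style already present in the path.
        String sep = dir.contains("\\") ? "\\" : "/";
        return dir + sep;
    }

    public static void main(String[] args) {
        // A Windows-style path keeps its backslash instead of gaining a '/'.
        System.out.println(ensureTrailingSeparator("C:\\Samplepath\\SampleFolder"));
    }
}
```

With this approach the example from the ticket would yield "C:\Samplepath\SampleFolder\" rather than "C:\Samplepath\SampleFolder/".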
[jira] [Updated] (NIFI-2707) "Bring to Front" function in UI does not to work after exiting group
[ https://issues.apache.org/jira/browse/NIFI-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende updated NIFI-2707: -- Resolution: Fixed Status: Resolved (was: Patch Available) > "Bring to Front" function in UI does not to work after exiting group > > > Key: NIFI-2707 > URL: https://issues.apache.org/jira/browse/NIFI-2707 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.0.0 >Reporter: Mark Payne >Assignee: Matt Gilman > Fix For: 1.1.0 > > > I have some connections that overlap one another. I right-clicked on one and > clicked Bring to Front. This worked as expected. However, after I left the > Process Group and came back, the connection that was brought to the front is > no longer in front.
[jira] [Commented] (NIFI-2707) "Bring to Front" function in UI does not to work after exiting group
[ https://issues.apache.org/jira/browse/NIFI-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15511223#comment-15511223 ] ASF GitHub Bot commented on NIFI-2707: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1023 > "Bring to Front" function in UI does not to work after exiting group > > > Key: NIFI-2707 > URL: https://issues.apache.org/jira/browse/NIFI-2707 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.0.0 >Reporter: Mark Payne >Assignee: Matt Gilman > Fix For: 1.1.0 > > > I have some connections that overlap one another. I right-clicked on one and > clicked Bring to Front. This worked as expected. However, after I left the > Process Group and came back, the connection that was brought to the front is > no longer in front.
[jira] [Commented] (NIFI-2707) "Bring to Front" function in UI does not to work after exiting group
[ https://issues.apache.org/jira/browse/NIFI-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15511220#comment-15511220 ] ASF subversion and git services commented on NIFI-2707: --- Commit 5198e70d1460134689c11f6bbbc6147b4292fb0b in nifi's branch refs/heads/master from [~mcgilman] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=5198e70 ] NIFI-2707: - Ensuring that connections are always sorted accordingly to their zIndex. This preserves the 'bring to front' settings. This closes #1023. Signed-off-by: Bryan Bende> "Bring to Front" function in UI does not to work after exiting group > > > Key: NIFI-2707 > URL: https://issues.apache.org/jira/browse/NIFI-2707 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.0.0 >Reporter: Mark Payne >Assignee: Matt Gilman > Fix For: 1.1.0 > > > I have some connections that overlap one another. I right-clicked on one and > clicked Bring to Front. This worked as expected. However, after I left the > Process Group and came back, the connection that was brought to the front is > no longer in front.
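The commit above fixes the bug by always rendering connections in zIndex order. The idea can be sketched in a few lines — hypothetical names, not NiFi's actual UI code, which lives in JavaScript: sort by the persisted zIndex so the highest value is drawn last and therefore appears on top.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical Connection with a persisted zIndex used for render order.
record Connection(String id, int zIndex) {}

public class ZIndexSort {
    // Sort ascending by zIndex so the highest value renders last (on top).
    // A stable sort keeps insertion order for equal zIndex values.
    static List<Connection> byRenderOrder(List<Connection> conns) {
        List<Connection> sorted = new ArrayList<>(conns);
        sorted.sort(Comparator.comparingInt(Connection::zIndex));
        return sorted;
    }

    public static void main(String[] args) {
        List<Connection> sorted = byRenderOrder(List.of(
                new Connection("a", 2), new Connection("b", 0), new Connection("c", 1)));
        // The connection "brought to front" (highest zIndex) is drawn last.
        System.out.println(sorted.get(2).id());
    }
}
```

Because the zIndex is persisted with the flow, re-sorting on every render keeps the "bring to front" setting intact after leaving and re-entering a Process Group.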
[GitHub] nifi-minifi pull request #34: MINIFI-82 Adding support for Processor 'Annota...
GitHub user JPercivall opened a pull request: https://github.com/apache/nifi-minifi/pull/34 MINIFI-82 Adding support for Processor 'Annotation Data' You can merge this pull request into a Git repository by running: $ git pull https://github.com/JPercivall/nifi-minifi MINIFI-82 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi/pull/34.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #34 commit 1bafa4fcbf5ef198880536f2563f45ca0e0b11d8 Author: Joseph PercivallDate: 2016-09-21T20:32:35Z MINIFI-82 Adding support for Processor 'Annotation Data' --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] nifi issue #513: PutHBaseJSON processor treats all values as Strings
Github user bbende commented on the issue: https://github.com/apache/nifi/pull/513 @rtempleton can you close this PR since it was included in PR 542? Thanks.
[jira] [Updated] (NIFI-2787) PersistentProvenanceRepository rollover can fail on immense indexed attributes
[ https://issues.apache.org/jira/browse/NIFI-2787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Moser updated NIFI-2787: Fix Version/s: 0.7.1 1.1.0 Status: Patch Available (was: Open) > PersistentProvenanceRepository rollover can fail on immense indexed attributes > -- > > Key: NIFI-2787 > URL: https://issues.apache.org/jira/browse/NIFI-2787 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 0.7.0, 1.0.0 >Reporter: Michael Moser > Fix For: 1.1.0, 0.7.1 > > > Accidentally created an immense attribute (36,000 bytes), which I indexed > with nifi.provenance.repository.indexed.attributes. Received this error. > ERROR [Provenance Repository Rollover Thread-1] > o.a.n.p.PersistentProvenanceRepository Failed to rollover Provenance > repository due to java.lang.IllegalArgumentException: Document contains at > least one immense term in field="FOO" (whose UTF8 encoding is longer than the > max length 32766), all of which were skipped. Please correct the analyzer to > not produce such terms. > Perhaps this is as simple as changing > https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/RepositoryConfiguration.java#L37 > to 32766 to match Lucene. Investigation & testing needed. > For background, this Lucene ticket made exceeding the term size limit an > IllegalArgumentException https://issues.apache.org/jira/browse/LUCENE-5472
[jira] [Assigned] (NIFI-2787) PersistentProvenanceRepository rollover can fail on immense indexed attributes
[ https://issues.apache.org/jira/browse/NIFI-2787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Moser reassigned NIFI-2787: --- Assignee: Michael Moser > PersistentProvenanceRepository rollover can fail on immense indexed attributes > -- > > Key: NIFI-2787 > URL: https://issues.apache.org/jira/browse/NIFI-2787 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0, 0.7.0 >Reporter: Michael Moser >Assignee: Michael Moser > Fix For: 1.1.0, 0.7.1 > > > Accidentally created an immense attribute (36,000 bytes), which I indexed > with nifi.provenance.repository.indexed.attributes. Received this error. > ERROR [Provenance Repository Rollover Thread-1] > o.a.n.p.PersistentProvenanceRepository Failed to rollover Provenance > repository due to java.lang.IllegalArgumentException: Document contains at > least one immense term in field="FOO" (whose UTF8 encoding is longer than the > max length 32766), all of which were skipped. Please correct the analyzer to > not produce such terms. > Perhaps this is as simple as changing > https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/RepositoryConfiguration.java#L37 > to 32766 to match Lucene. Investigation & testing needed. > For background, this Lucene ticket made exceeding the term size limit an > IllegalArgumentException https://issues.apache.org/jira/browse/LUCENE-5472
[jira] [Commented] (NIFI-2787) PersistentProvenanceRepository rollover can fail on immense indexed attributes
[ https://issues.apache.org/jira/browse/NIFI-2787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15511031#comment-15511031 ] ASF GitHub Bot commented on NIFI-2787: -- GitHub user mosermw opened a pull request: https://github.com/apache/nifi/pull/1043 NIFI-2787 truncate flowfile attributes that get indexed to fit within Lucene limits NIFI-2787 You can merge this pull request into a Git repository by running: $ git pull https://github.com/mosermw/nifi NIFI-2787 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1043.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1043 commit 0765a243e8d1e3af83b112bf116618bcac521aa9 Author: Mike MoserDate: 2016-09-21T20:10:49Z NIFI-2787 truncate flowfile attributes that get indexed to fit within Lucene limits > PersistentProvenanceRepository rollover can fail on immense indexed attributes > -- > > Key: NIFI-2787 > URL: https://issues.apache.org/jira/browse/NIFI-2787 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0, 0.7.0 >Reporter: Michael Moser > > Accidentally created an immense attribute (36,000 bytes), which I indexed > with nifi.provenance.repository.indexed.attributes. Received this error. > ERROR [Provenance Repository Rollover Thread-1] > o.a.n.p.PersistentProvenanceRepository Failed to rollover Provenance > repository due to java.lang.IllegalArgumentException: Document contains at > least one immense term in field="FOO" (whose UTF8 encoding is longer than the > max length 32766), all of which were skipped. Please correct the analyzer to > not produce such terms. 
> Perhaps this is as simple as changing > https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/RepositoryConfiguration.java#L37 > to 32766 to match Lucene. Investigation & testing needed. > For background, this Lucene ticket made exceeding the term size limit an > IllegalArgumentException https://issues.apache.org/jira/browse/LUCENE-5472
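The PR's approach — truncate indexed attribute values so their UTF-8 encoding fits within Lucene's 32766-byte term limit — can be sketched as follows. This is an illustrative helper, not the patch's actual code; the back-off loop is an assumption to avoid cutting a multi-byte character in half.

```java
import java.nio.charset.StandardCharsets;

public class TruncateAttribute {
    // Lucene rejects any term whose UTF-8 encoding exceeds 32766 bytes.
    static final int MAX_TERM_BYTES = 32766;

    // Truncate a value so its UTF-8 encoding fits within the limit,
    // stepping back past UTF-8 continuation bytes (10xxxxxx) so the cut
    // always lands on a character boundary.
    static String truncateForIndex(String value) {
        byte[] utf8 = value.getBytes(StandardCharsets.UTF_8);
        if (utf8.length <= MAX_TERM_BYTES) {
            return value; // already small enough; index unchanged
        }
        int end = MAX_TERM_BYTES;
        while (end > 0 && (utf8[end] & 0xC0) == 0x80) {
            end--; // byte at 'end' continues a character started earlier
        }
        return new String(utf8, 0, end, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // A 36,000-byte attribute like the one in the ticket fits after truncation.
        String big = "x".repeat(36000);
        System.out.println(truncateForIndex(big).getBytes(StandardCharsets.UTF_8).length);
    }
}
```

Truncating at index time loses the attribute's tail for search purposes, but the full value still lives in the provenance event itself; only the Lucene term is shortened.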
[GitHub] nifi pull request #1043: NIFI-2787 truncate flowfile attributes that get ind...
GitHub user mosermw opened a pull request: https://github.com/apache/nifi/pull/1043 NIFI-2787 truncate flowfile attributes that get indexed to fit within Lucene limits NIFI-2787 You can merge this pull request into a Git repository by running: $ git pull https://github.com/mosermw/nifi NIFI-2787 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1043.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1043 commit 0765a243e8d1e3af83b112bf116618bcac521aa9 Author: Mike MoserDate: 2016-09-21T20:10:49Z NIFI-2787 truncate flowfile attributes that get indexed to fit within Lucene limits
[jira] [Commented] (NIFI-2417) Implement Query and Scroll processors for ElasticSearch
[ https://issues.apache.org/jira/browse/NIFI-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510925#comment-15510925 ] ASF GitHub Bot commented on NIFI-2417: -- Github user gresockj commented on a diff in the pull request: https://github.com/apache/nifi/pull/733#discussion_r79914319 --- Diff: nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/ScrollElasticsearchHttp.java --- @@ -0,0 +1,415 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.elasticsearch; + +import java.io.IOException; +import java.net.MalformedURLException; +import java.net.URL; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.regex.Pattern; +import java.util.stream.Collectors; +import java.util.stream.Stream; + +import org.apache.commons.lang3.StringUtils; +import org.apache.nifi.annotation.behavior.EventDriven; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.Stateful; +import org.apache.nifi.annotation.behavior.SupportsBatching; +import org.apache.nifi.annotation.behavior.WritesAttribute; +import org.apache.nifi.annotation.behavior.WritesAttributes; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnScheduled; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.components.state.Scope; +import org.apache.nifi.components.state.StateManager; +import org.apache.nifi.components.state.StateMap; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.logging.ComponentLog; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.stream.io.ByteArrayInputStream; +import org.codehaus.jackson.JsonNode; + +import okhttp3.HttpUrl; +import okhttp3.OkHttpClient; +import okhttp3.Response; +import okhttp3.ResponseBody; + +@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN) +@EventDriven +@SupportsBatching +@Tags({ "elasticsearch", "query", "scroll", "read", 
"get", "http" }) +@CapabilityDescription("Scrolls through an Elasticsearch query using the specified connection properties. " ++ "This processor is intended to be run on the primary node, and is designed for scrolling through " ++ "huge result sets, as in the case of a reindex. The state must be cleared before another query " ++ "can be run. Each page of results is returned, wrapped in a JSON object like so: { \"hits\" : [ , , ] }. " ++ "Note that the full body of each page of documents will be read into memory before being " ++ "written to a Flow File for transfer.") +@WritesAttributes({ +@WritesAttribute(attribute = "es.index", description = "The Elasticsearch index containing the document"), +@WritesAttribute(attribute = "es.type", description = "The Elasticsearch document type") }) +@Stateful(description = "After each successful scroll page, the latest scroll_id is persisted in scrollId as input for the next scroll call. " ++ "Once the entire query is complete, finishedQuery state will be set to true, and the processor will not execute unless this is cleared.", scopes = { Scope.LOCAL }) +public class ScrollElasticsearchHttp extends AbstractElasticsearchHttpProcessor { + +private static final String
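The @Stateful contract quoted above — persist the latest scroll_id after each page, and set a finishedQuery flag that blocks further runs until state is cleared — can be sketched as a small state machine. This is a simplified illustration with a plain Map standing in for NiFi's StateManager; the names below other than `scrollId` and `finishedQuery` are invented.

```java
import java.util.HashMap;
import java.util.Map;

public class ScrollState {
    static final String SCROLL_ID = "scrollId";
    static final String FINISHED = "finishedQuery";

    // Stand-in for the processor's LOCAL-scope state map.
    final Map<String, String> state = new HashMap<>();

    // The processor only executes while the query is not marked finished.
    boolean shouldRun() {
        return !"true".equals(state.get(FINISHED));
    }

    // After each page: an empty page ends the query; otherwise remember
    // the scroll_id to pass to the next scroll call.
    void onPage(String scrollId, int hitCount) {
        if (hitCount == 0) {
            state.put(FINISHED, "true");
        } else {
            state.put(SCROLL_ID, scrollId);
        }
    }

    public static void main(String[] args) {
        ScrollState s = new ScrollState();
        s.onPage("scroll-1", 100); // first page: save scroll_id, keep going
        s.onPage("scroll-2", 0);   // empty page: mark the query finished
        System.out.println(s.shouldRun());
    }
}
```

Clearing the component's state (as the description requires before rerunning the query) corresponds to wiping this map, which resets both keys.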
[GitHub] nifi pull request #733: NIFI-2417: Implementing QueryElasticsearchHttp and S...
Github user gresockj commented on a diff in the pull request: https://github.com/apache/nifi/pull/733#discussion_r79914319 --- Diff: nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/ScrollElasticsearchHttp.java --- @@ -0,0 +1,415 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.elasticsearch; + +import java.io.IOException; +import java.net.MalformedURLException; +import java.net.URL; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.regex.Pattern; +import java.util.stream.Collectors; +import java.util.stream.Stream; + +import org.apache.commons.lang3.StringUtils; +import org.apache.nifi.annotation.behavior.EventDriven; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.Stateful; +import org.apache.nifi.annotation.behavior.SupportsBatching; +import org.apache.nifi.annotation.behavior.WritesAttribute; +import org.apache.nifi.annotation.behavior.WritesAttributes; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnScheduled; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.components.state.Scope; +import org.apache.nifi.components.state.StateManager; +import org.apache.nifi.components.state.StateMap; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.logging.ComponentLog; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.stream.io.ByteArrayInputStream; +import org.codehaus.jackson.JsonNode; + +import okhttp3.HttpUrl; +import okhttp3.OkHttpClient; +import okhttp3.Response; +import okhttp3.ResponseBody; + +@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN) +@EventDriven +@SupportsBatching +@Tags({ "elasticsearch", "query", "scroll", "read", 
"get", "http" }) +@CapabilityDescription("Scrolls through an Elasticsearch query using the specified connection properties. " ++ "This processor is intended to be run on the primary node, and is designed for scrolling through " ++ "huge result sets, as in the case of a reindex. The state must be cleared before another query " ++ "can be run. Each page of results is returned, wrapped in a JSON object like so: { \"hits\" : [ , , ] }. " ++ "Note that the full body of each page of documents will be read into memory before being " ++ "written to a Flow File for transfer.") +@WritesAttributes({ +@WritesAttribute(attribute = "es.index", description = "The Elasticsearch index containing the document"), +@WritesAttribute(attribute = "es.type", description = "The Elasticsearch document type") }) +@Stateful(description = "After each successful scroll page, the latest scroll_id is persisted in scrollId as input for the next scroll call. " ++ "Once the entire query is complete, finishedQuery state will be set to true, and the processor will not execute unless this is cleared.", scopes = { Scope.LOCAL }) +public class ScrollElasticsearchHttp extends AbstractElasticsearchHttpProcessor { + +private static final String FINISHED_QUERY_STATE = "finishedQuery"; +private static final String SCROLL_ID_STATE = "scrollId"; +private static final String FIELD_INCLUDE_QUERY_PARAM = "_source_include"; +private static final String
[jira] [Commented] (NIFI-2417) Implement Query and Scroll processors for ElasticSearch
[ https://issues.apache.org/jira/browse/NIFI-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510914#comment-15510914 ] ASF GitHub Bot commented on NIFI-2417: -- Github user gresockj commented on a diff in the pull request: https://github.com/apache/nifi/pull/733#discussion_r79913709 --- Diff: nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/QueryElasticsearchHttp.java --- @@ -0,0 +1,410 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.elasticsearch; + +import java.io.IOException; +import java.net.MalformedURLException; +import java.net.URL; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Map.Entry; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; +import java.util.stream.Stream; + +import org.apache.commons.lang3.StringUtils; +import org.apache.nifi.annotation.behavior.EventDriven; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.SupportsBatching; +import org.apache.nifi.annotation.behavior.WritesAttribute; +import org.apache.nifi.annotation.behavior.WritesAttributes; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnScheduled; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.logging.ComponentLog; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.stream.io.ByteArrayInputStream; +import org.codehaus.jackson.JsonNode; + +import okhttp3.HttpUrl; +import okhttp3.OkHttpClient; +import okhttp3.Response; +import okhttp3.ResponseBody; + +@InputRequirement(InputRequirement.Requirement.INPUT_ALLOWED) +@EventDriven +@SupportsBatching +@Tags({ "elasticsearch", "query", "read", "get", "http" }) +@CapabilityDescription("Queries Elasticsearch using the specified connection properties. 
" ++ "Note that the full body of each page of documents will be read into memory before being " ++ "written to Flow Files for transfer. Also note that the Elasticsearch max_result_window index " ++ "setting is the upper bound on the number of records that can be retrieved using this query. " ++ "To retrieve more records, use the ScrollElasticsearchHttp processor.") +@WritesAttributes({ +@WritesAttribute(attribute = "filename", description = "The filename attribute is set to the document identifier"), +@WritesAttribute(attribute = "es.index", description = "The Elasticsearch index containing the document"), +@WritesAttribute(attribute = "es.type", description = "The Elasticsearch document type"), +@WritesAttribute(attribute = "es.result.*", description = "If Target is 'Flow file attributes', the JSON attributes of " ++ "each result will be placed into corresponding attributes with this prefix.") }) +public class QueryElasticsearchHttp extends AbstractElasticsearchHttpProcessor { + +private static final String FIELD_INCLUDE_QUERY_PARAM = "_source_include"; +private static final String QUERY_QUERY_PARAM = "q"; +private static final String SORT_QUERY_PARAM = "sort"; +private static final String FROM_QUERY_PARAM = "from"; +private static final String
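The capability description above notes that max_result_window bounds how many records a from/size query can page through. A minimal sketch of that paging arithmetic (a hypothetical helper, not the processor's actual code; 10,000 is Elasticsearch's default for max_result_window):

```java
public class QueryPaging {
    // Elasticsearch's default index.max_result_window setting.
    static final int MAX_RESULT_WINDOW = 10_000;

    // Build the q/from/size query parameters for one page of results.
    // Pages beyond the result window must use scrolling instead.
    static String pageParams(String query, int page, int size) {
        int from = page * size;
        if (from + size > MAX_RESULT_WINDOW) {
            throw new IllegalArgumentException(
                    "from+size exceeds max_result_window; use the scroll API instead");
        }
        return "q=" + query + "&from=" + from + "&size=" + size;
    }

    public static void main(String[] args) {
        // Third page of 50 results starts at offset 100.
        System.out.println(pageParams("field:value", 2, 50));
    }
}
```

This is why the description points users at ScrollElasticsearchHttp for larger result sets: scrolling carries a server-side cursor rather than a growing from offset, so it is not limited by the result window.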
[GitHub] nifi pull request #733: NIFI-2417: Implementing QueryElasticsearchHttp and S...
Github user gresockj commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/733#discussion_r79913709

--- Diff: nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/QueryElasticsearchHttp.java ---
@@ -0,0 +1,410 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import java.io.IOException;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.ByteArrayInputStream;
+import org.codehaus.jackson.JsonNode;
+
+import okhttp3.HttpUrl;
+import okhttp3.OkHttpClient;
+import okhttp3.Response;
+import okhttp3.ResponseBody;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_ALLOWED)
+@EventDriven
+@SupportsBatching
+@Tags({ "elasticsearch", "query", "read", "get", "http" })
+@CapabilityDescription("Queries Elasticsearch using the specified connection properties. "
+        + "Note that the full body of each page of documents will be read into memory before being "
+        + "written to Flow Files for transfer. Also note that the Elasticsearch max_result_window index "
+        + "setting is the upper bound on the number of records that can be retrieved using this query. "
+        + "To retrieve more records, use the ScrollElasticsearchHttp processor.")
+@WritesAttributes({
+        @WritesAttribute(attribute = "filename", description = "The filename attribute is set to the document identifier"),
+        @WritesAttribute(attribute = "es.index", description = "The Elasticsearch index containing the document"),
+        @WritesAttribute(attribute = "es.type", description = "The Elasticsearch document type"),
+        @WritesAttribute(attribute = "es.result.*", description = "If Target is 'Flow file attributes', the JSON attributes of "
+                + "each result will be placed into corresponding attributes with this prefix.") })
+public class QueryElasticsearchHttp extends AbstractElasticsearchHttpProcessor {
+
+    private static final String FIELD_INCLUDE_QUERY_PARAM = "_source_include";
+    private static final String QUERY_QUERY_PARAM = "q";
+    private static final String SORT_QUERY_PARAM = "sort";
+    private static final String FROM_QUERY_PARAM = "from";
+    private static final String SIZE_QUERY_PARAM = "size";
+
+    public static final String TARGET_FLOW_FILE_CONTENT = "Flow file content";
+    public static final String TARGET_FLOW_FILE_ATTRIBUTES = "Flow file attributes";
+    private static final String
[GitHub] nifi issue #1028: Restore upstream/downstream connections dialog
Github user bbende commented on the issue: https://github.com/apache/nifi/pull/1028 Minor comment, I noticed viewing downstream connections on input ports, the component name shows "Object object" rather than the name of the port. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] nifi issue #1028: Restore upstream/downstream connections dialog
Github user bbende commented on the issue: https://github.com/apache/nifi/pull/1028 Reviewing...
[jira] [Commented] (NIFI-2785) Template Upload - Unable to upload to descendant group
[ https://issues.apache.org/jira/browse/NIFI-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510797#comment-15510797 ]

ASF subversion and git services commented on NIFI-2785:
--------------------------------------------------------

Commit b304f70f3ee3715dd744e03bc94c231853abf2b5 in nifi's branch refs/heads/master from [~mcgilman]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=b304f70 ]

NIFI-2785:
- Ensure the URL is updated when uploading a template to ensure it's going to the appropriate Process Group.

This closes #1029.

Signed-off-by: Bryan Bende

> Template Upload - Unable to upload to descendant group
> ------------------------------------------------------
>
>                 Key: NIFI-2785
>                 URL: https://issues.apache.org/jira/browse/NIFI-2785
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core UI
>            Reporter: Matt Gilman
>            Assignee: Matt Gilman
>            Priority: Critical
>             Fix For: 1.1.0
>
> Templates can be uploaded to any Process Group. This is driven by the URL
> used during the upload request. Currently, the URL in the UI for the upload
> request is initialized with the root group id and never updated. As a result,
> through the UI templates can only be uploaded to the root group.
>
> As a work-around, templates could still be uploaded to descendant groups via
> a request directly to the REST API. This can be done using the following
> command:
> {noformat}curl -X POST -v -F template=@"/path/to/template.xml" http://{host}:{port}/nifi-api/process-groups/{process-group-id}/templates/upload{noformat}
> Additionally, templates uploaded to the root group could have explicit
> policies set to share with other users.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
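The work-around in the issue description hinges on the upload URL embedding the id of the target Process Group rather than the root group. A minimal sketch of assembling that REST endpoint (host, port, and group id below are placeholders; this is not NiFi's own code):

```java
// Sketch only: builds the templates/upload endpoint the curl work-around
// posts to. The fix referenced above makes the UI compute this URL for the
// target group instead of always using the root group id.
public class TemplateUploadUrlSketch {

    static String uploadUrl(String host, int port, String processGroupId) {
        return String.format("http://%s:%d/nifi-api/process-groups/%s/templates/upload",
                host, port, processGroupId);
    }

    public static void main(String[] args) {
        // "child-group-id" stands in for a descendant Process Group's UUID.
        System.out.println(uploadUrl("localhost", 8080, "child-group-id"));
    }
}
```

Posting the template to this URL with `-F template=@...` (as in the curl command quoted in the issue) targets the named group directly.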
[jira] [Updated] (NIFI-2785) Template Upload - Unable to upload to descendant group
[ https://issues.apache.org/jira/browse/NIFI-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bryan Bende updated NIFI-2785:
------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)
[jira] [Commented] (NIFI-2785) Template Upload - Unable to upload to descendant group
[ https://issues.apache.org/jira/browse/NIFI-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510799#comment-15510799 ]

ASF GitHub Bot commented on NIFI-2785:
--------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/nifi/pull/1029
[GitHub] nifi pull request #1029: Fixed issue uploading templates into descendant Pro...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1029
[jira] [Commented] (NIFI-2795) Enhance Cluster UI with System Diagnostics
[ https://issues.apache.org/jira/browse/NIFI-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510782#comment-15510782 ]

ASF GitHub Bot commented on NIFI-2795:
--------------------------------------

Github user mcgilman commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/1042#discussion_r79894191

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/cluster/nf-cluster-table.js ---
@@ -24,13 +24,275 @@ nf.ClusterTable = (function () {
      */
     var config = {
         primaryNode: 'Primary Node',
-        clusterCoorindator: 'Cluster Coordinator',
+        clusterCoordinator: 'Cluster Coordinator',
         urls: {
             cluster: '../nifi-api/controller/cluster',
-            nodes: '../nifi-api/controller/cluster/nodes'
-        }
+            nodes: '../nifi-api/controller/cluster/nodes',
+            systemDiagnostics: '../nifi-api/system-diagnostics'
+        },
+        data: [{
+            name: 'cluster',
+            update: refreshClusterData
+        },{
+            name: 'systemDiagnostics',
+            update: refreshSystemDiagnosticsData
+        }
+        ]
     };
+
+    var commonTableOptions = {
+        forceFitColumns: true,
+        enableTextSelectionOnCells: true,
+        enableCellNavigation: false,
+        enableColumnReorder: false,
+        autoEdit: false,
+        rowHeight: 24
+    };
+
+    var nodesTab = {
+        name: 'Nodes',
+        data: {
+            dataSet: 'cluster',
+            update: updateNodesTableData
+        },
+        tabContentId: 'cluster-nodes-tab-content',
+        tableId: 'cluster-nodes-table',
+        tableColumnModel: createNodeTableColumnModel,
+        tableIdColumn: 'nodeId',
+        tableOptions: commonTableOptions,
+        tableOnClick: nodesTableOnClick,
+        init: commonTableInit,
+        onSort: sort,
+        onTabSelected: onSelectTab,
+        filterOptions: [{
+            text: 'by address',
+            value: 'address'
+        }, {
+            text: 'by status',
+            value: 'status'
+        }]
+    };
+
+    var jvmTab = {
+        name: 'JVM',
+        data: {
+            dataSet: 'systemDiagnostics',
+            update: updateJvmTableData
+        },
+        tabContentId: 'cluster-jvm-tab-content',
+        tableId: 'cluster-jvm-table',
+        tableColumnModel: [
+            {id: 'node', field: 'node', name: 'Node Address', sortable: true, resizable: true},
+            {id: 'heapMax', field: 'maxHeap', name: 'Heap Max', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'heapTotal', field: 'totalHeap', name: 'Heap Total', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'heapUsed', field: 'usedHeap', name: 'Heap Used', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'heapUtilPct', field: 'heapUtilization', name: 'Heap Utilization', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'nonHeapTotal', field: 'totalNonHeap', name: 'Non-Heap Total', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'nonHeapUsed', field: 'usedNonHeap', name: 'Non-Heap Used', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'gcOldGen', field: 'gcOldGen', name: 'G1 Old Generation', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'gcNewGen', field: 'gcNewGen', name: 'G1 Young Generation', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'}
+        ],
+        tableIdColumn: 'id',
+        tableOptions: commonTableOptions,
+        tableOnClick: null,
+        init: commonTableInit,
+        onSort: sort,
+        onTabSelected: onSelectTab,
+        filterOptions: [{
+            text: 'by address',
+            value: 'address'
+        }]
+    };
+
+    var systemTab = {
+        name: 'System',
+        data: {
+            dataSet: 'systemDiagnostics',
+            update: updateSystemTableData
+        },
+        tabContentId: 'cluster-system-tab-content',
+        tableId: 'cluster-system-table',
+        tableColumnModel: [
+            {id: 'node', field: 'node', name: 'Node Address', sortable: true, resizable:
[jira] [Commented] (NIFI-2795) Enhance Cluster UI with System Diagnostics
[ https://issues.apache.org/jira/browse/NIFI-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510783#comment-15510783 ]

ASF GitHub Bot commented on NIFI-2795:
--------------------------------------

Github user mcgilman commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/1042#discussion_r79892940

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/cluster/nf-cluster-table.js ---
@@ -377,190 +952,45 @@ nf.ClusterTable = (function () {
             nf.ClusterTable.resetTableSize();
         });

-        // define a custom formatter for the more details column
-        var moreDetailsFormatter = function (row, cell, value, columnDef, dataContext) {
-            return '';
-        };
-
-        // define a custom formatter for the run status column
-        var nodeFormatter = function (row, cell, value, columnDef, dataContext) {
-            return formatNodeAddress(dataContext);
-        };
-
-        // function for formatting the last accessed time
-        var valueFormatter = function (row, cell, value, columnDef, dataContext) {
-            return nf.Common.formatValue(value);
-        };
-
-        // define a custom formatter for the status column
-        var statusFormatter = function (row, cell, value, columnDef, dataContext) {
-            var markup = value;
-            if (dataContext.roles.includes(config.primaryNode)) {
-                value += ', PRIMARY';
-            }
-            if (dataContext.roles.includes(config.clusterCoorindator)) {
-                value += ', COORDINATOR';
-            }
-            return value;
-        };
-
-        var columnModel = [
-            {id: 'moreDetails', name: '', sortable: false, resizable: false, formatter: moreDetailsFormatter, width: 50, maxWidth: 50},
-            {id: 'node', field: 'node', name: 'Node Address', formatter: nodeFormatter, resizable: true, sortable: true},
-            {id: 'activeThreadCount', field: 'activeThreadCount', name: 'Active Thread Count', resizable: true, sortable: true, defaultSortAsc: false},
-            {id: 'queued', field: 'queued', name: 'Queue/Size', resizable: true, sortable: true, defaultSortAsc: false},
-            {id: 'status', field: 'status', name: 'Status', formatter: statusFormatter, resizable: true, sortable: true},
-            {id: 'uptime', field: 'nodeStartTime', name: 'Uptime', formatter: valueFormatter, resizable: true, sortable: true, defaultSortAsc: false},
-            {id: 'heartbeat', field: 'heartbeat', name: 'Last Heartbeat', formatter: valueFormatter, resizable: true, sortable: true, defaultSortAsc: false}
-        ];
-
-        // only allow the admin to modify the cluster
-        if (nf.Common.canModifyController()) {
-            // function for formatting the actions column
-            var actionFormatter = function (row, cell, value, columnDef, dataContext) {
-                var canDisconnect = false;
-                var canConnect = false;
-
-                // determine the current status
-                if (dataContext.status === 'CONNECTED' || dataContext.status === 'CONNECTING') {
-                    canDisconnect = true;
-                } else if (dataContext.status === 'DISCONNECTED') {
-                    canConnect = true;
-                }
-
-                // return the appropriate markup
-                if (canConnect) {
-                    return '';
-                } else if (canDisconnect) {
-                    return '';
-                } else {
-                    return '';
-                }
-            };
-
-            columnModel.push({id: 'actions', label: '', formatter: actionFormatter, resizable: false, sortable: false, width: 80, maxWidth: 80});
-        }
-
-        var clusterOptions = {
-            forceFitColumns: true,
-            enableTextSelectionOnCells: true,
-            enableCellNavigation: false,
-            enableColumnReorder: false,
-            autoEdit: false,
-            rowHeight: 24
-        };
-
-        // initialize the dataview
-        var clusterData = new Slick.Data.DataView({
-            inlineFilters: false
-        });
-        clusterData.setItems([], 'nodeId');
-        clusterData.setFilterArgs({
-            searchString:
[GitHub] nifi pull request #1042: NIFI-2795 Sys Diagnostics in Cluster UI
Github user mcgilman commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/1042#discussion_r79894191

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/cluster/nf-cluster-table.js ---
@@ -24,13 +24,275 @@ nf.ClusterTable = (function () {
      */
     var config = {
         primaryNode: 'Primary Node',
-        clusterCoorindator: 'Cluster Coordinator',
+        clusterCoordinator: 'Cluster Coordinator',
         urls: {
             cluster: '../nifi-api/controller/cluster',
-            nodes: '../nifi-api/controller/cluster/nodes'
-        }
+            nodes: '../nifi-api/controller/cluster/nodes',
+            systemDiagnostics: '../nifi-api/system-diagnostics'
+        },
+        data: [{
+            name: 'cluster',
+            update: refreshClusterData
+        },{
+            name: 'systemDiagnostics',
+            update: refreshSystemDiagnosticsData
+        }
+        ]
     };
+
+    var commonTableOptions = {
+        forceFitColumns: true,
+        enableTextSelectionOnCells: true,
+        enableCellNavigation: false,
+        enableColumnReorder: false,
+        autoEdit: false,
+        rowHeight: 24
+    };
+
+    var nodesTab = {
+        name: 'Nodes',
+        data: {
+            dataSet: 'cluster',
+            update: updateNodesTableData
+        },
+        tabContentId: 'cluster-nodes-tab-content',
+        tableId: 'cluster-nodes-table',
+        tableColumnModel: createNodeTableColumnModel,
+        tableIdColumn: 'nodeId',
+        tableOptions: commonTableOptions,
+        tableOnClick: nodesTableOnClick,
+        init: commonTableInit,
+        onSort: sort,
+        onTabSelected: onSelectTab,
+        filterOptions: [{
+            text: 'by address',
+            value: 'address'
+        }, {
+            text: 'by status',
+            value: 'status'
+        }]
+    };
+
+    var jvmTab = {
+        name: 'JVM',
+        data: {
+            dataSet: 'systemDiagnostics',
+            update: updateJvmTableData
+        },
+        tabContentId: 'cluster-jvm-tab-content',
+        tableId: 'cluster-jvm-table',
+        tableColumnModel: [
+            {id: 'node', field: 'node', name: 'Node Address', sortable: true, resizable: true},
+            {id: 'heapMax', field: 'maxHeap', name: 'Heap Max', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'heapTotal', field: 'totalHeap', name: 'Heap Total', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'heapUsed', field: 'usedHeap', name: 'Heap Used', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'heapUtilPct', field: 'heapUtilization', name: 'Heap Utilization', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'nonHeapTotal', field: 'totalNonHeap', name: 'Non-Heap Total', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'nonHeapUsed', field: 'usedNonHeap', name: 'Non-Heap Used', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'gcOldGen', field: 'gcOldGen', name: 'G1 Old Generation', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'gcNewGen', field: 'gcNewGen', name: 'G1 Young Generation', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'}
+        ],
+        tableIdColumn: 'id',
+        tableOptions: commonTableOptions,
+        tableOnClick: null,
+        init: commonTableInit,
+        onSort: sort,
+        onTabSelected: onSelectTab,
+        filterOptions: [{
+            text: 'by address',
+            value: 'address'
+        }]
+    };
+
+    var systemTab = {
+        name: 'System',
+        data: {
+            dataSet: 'systemDiagnostics',
+            update: updateSystemTableData
+        },
+        tabContentId: 'cluster-system-tab-content',
+        tableId: 'cluster-system-table',
+        tableColumnModel: [
+            {id: 'node', field: 'node', name: 'Node Address', sortable: true, resizable: true},
+            {id: 'processors', field: 'processors', name: 'Processors', sortable: true, resizable: true, cssClass: 'cell-right', headerCssClass: 'header-right'},
+            {id: 'processorLoadAverage', field:
[GitHub] nifi pull request #1042: NIFI-2795 Sys Diagnostics in Cluster UI
Github user mcgilman commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/1042#discussion_r79892940

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/cluster/nf-cluster-table.js ---
@@ -377,190 +952,45 @@ nf.ClusterTable = (function () {
             nf.ClusterTable.resetTableSize();
         });

-        // define a custom formatter for the more details column
-        var moreDetailsFormatter = function (row, cell, value, columnDef, dataContext) {
-            return '';
-        };
-
-        // define a custom formatter for the run status column
-        var nodeFormatter = function (row, cell, value, columnDef, dataContext) {
-            return formatNodeAddress(dataContext);
-        };
-
-        // function for formatting the last accessed time
-        var valueFormatter = function (row, cell, value, columnDef, dataContext) {
-            return nf.Common.formatValue(value);
-        };
-
-        // define a custom formatter for the status column
-        var statusFormatter = function (row, cell, value, columnDef, dataContext) {
-            var markup = value;
-            if (dataContext.roles.includes(config.primaryNode)) {
-                value += ', PRIMARY';
-            }
-            if (dataContext.roles.includes(config.clusterCoorindator)) {
-                value += ', COORDINATOR';
-            }
-            return value;
-        };
-
-        var columnModel = [
-            {id: 'moreDetails', name: '', sortable: false, resizable: false, formatter: moreDetailsFormatter, width: 50, maxWidth: 50},
-            {id: 'node', field: 'node', name: 'Node Address', formatter: nodeFormatter, resizable: true, sortable: true},
-            {id: 'activeThreadCount', field: 'activeThreadCount', name: 'Active Thread Count', resizable: true, sortable: true, defaultSortAsc: false},
-            {id: 'queued', field: 'queued', name: 'Queue/Size', resizable: true, sortable: true, defaultSortAsc: false},
-            {id: 'status', field: 'status', name: 'Status', formatter: statusFormatter, resizable: true, sortable: true},
-            {id: 'uptime', field: 'nodeStartTime', name: 'Uptime', formatter: valueFormatter, resizable: true, sortable: true, defaultSortAsc: false},
-            {id: 'heartbeat', field: 'heartbeat', name: 'Last Heartbeat', formatter: valueFormatter, resizable: true, sortable: true, defaultSortAsc: false}
-        ];
-
-        // only allow the admin to modify the cluster
-        if (nf.Common.canModifyController()) {
-            // function for formatting the actions column
-            var actionFormatter = function (row, cell, value, columnDef, dataContext) {
-                var canDisconnect = false;
-                var canConnect = false;
-
-                // determine the current status
-                if (dataContext.status === 'CONNECTED' || dataContext.status === 'CONNECTING') {
-                    canDisconnect = true;
-                } else if (dataContext.status === 'DISCONNECTED') {
-                    canConnect = true;
-                }
-
-                // return the appropriate markup
-                if (canConnect) {
-                    return '';
-                } else if (canDisconnect) {
-                    return '';
-                } else {
-                    return '';
-                }
-            };
-
-            columnModel.push({id: 'actions', label: '', formatter: actionFormatter, resizable: false, sortable: false, width: 80, maxWidth: 80});
-        }
-
-        var clusterOptions = {
-            forceFitColumns: true,
-            enableTextSelectionOnCells: true,
-            enableCellNavigation: false,
-            enableColumnReorder: false,
-            autoEdit: false,
-            rowHeight: 24
-        };
-
-        // initialize the dataview
-        var clusterData = new Slick.Data.DataView({
-            inlineFilters: false
-        });
-        clusterData.setItems([], 'nodeId');
-        clusterData.setFilterArgs({
-            searchString: getFilterText(),
-            property: $('#cluster-filter-type').combo('getSelectedOption').value
-        });
-        clusterData.setFilter(filter);
-
-        // initialize the sort
-
[GitHub] nifi issue #1029: Fixed issue uploading templates into descendant Process Gr...
Github user bbende commented on the issue: https://github.com/apache/nifi/pull/1029 Reviewing...
[jira] [Resolved] (NIFI-2210) Update images for NiFi docs
[ https://issues.apache.org/jira/browse/NIFI-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rob Moran resolved NIFI-2210.
-----------------------------
    Resolution: Done

> Update images for NiFi docs
> ---------------------------
>
>                 Key: NIFI-2210
>                 URL: https://issues.apache.org/jira/browse/NIFI-2210
>             Project: Apache NiFi
>          Issue Type: Task
>          Components: Documentation & Website
>    Affects Versions: 1.0.0
>            Reporter: Rob Moran
[jira] [Updated] (NIFI-2699) Improve handling of response timeouts in cluster
[ https://issues.apache.org/jira/browse/NIFI-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff Storck updated NIFI-2699:
------------------------------
    Description:

When running as a cluster, if a node is unable to respond within the socket timeout (e.g., hitting a breakpoint while debugging), an IllegalClusterStateException will be thrown that causes the UI to show the "check config and fix errors" page. Once the node is communicating with the cluster again (i.e., the breakpoint in the code is passed), the UI can be reloaded and the cluster recovers from the timeout without any user intervention at the service level.

However, the user experience could be improved. If a user initiates a replicated request to a node that is unable to respond within the socket timeout duration, the user might think NiFi crashed, when in fact it didn't.

Here is the stack trace that was encountered during testing:

{code}
2016-08-29 11:36:59,041 DEBUG [NiFi Web Server-22] o.a.n.w.a.c.IllegalClusterStateExceptionMapper
org.apache.nifi.cluster.manager.exception.IllegalClusterStateException: Node localhost:8443 is unable to fulfill this request due to: Unexpected Response Code 500
    at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$2.onCompletion(ThreadPoolRequestReplicator.java:471) ~[nifi-framework-cluster-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
    at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:729) ~[nifi-framework-cluster-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_92]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_92]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_92]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_92]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92]
Caused by: com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155) ~[jersey-client-1.19.jar:1.19]
    at com.sun.jersey.api.client.Client.handle(Client.java:652) ~[jersey-client-1.19.jar:1.19]
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682) ~[jersey-client-1.19.jar:1.19]
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74) ~[jersey-client-1.19.jar:1.19]
    at com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:560) ~[jersey-client-1.19.jar:1.19]
    at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:537) ~[nifi-framework-cluster-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
    at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:720) ~[nifi-framework-cluster-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
    ... 5 common frames omitted
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.8.0_92]
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) ~[na:1.8.0_92]
    at java.net.SocketInputStream.read(SocketInputStream.java:170) ~[na:1.8.0_92]
    at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[na:1.8.0_92]
    at sun.security.ssl.InputRecord.readFully(InputRecord.java:465) ~[na:1.8.0_92]
    at sun.security.ssl.InputRecord.read(InputRecord.java:503) ~[na:1.8.0_92]
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973) ~[na:1.8.0_92]
    at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930) ~[na:1.8.0_92]
    at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) ~[na:1.8.0_92]
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) ~[na:1.8.0_92]
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) ~[na:1.8.0_92]
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345) ~[na:1.8.0_92]
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704) ~[na:1.8.0_92]
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647) ~[na:1.8.0_92]
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1536) ~[na:1.8.0_92]
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441) ~[na:1.8.0_92]
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480) ~[na:1.8.0_92]
    at
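The root cause in the trace above is a plain client-side read timeout: the coordinator's HTTP client gives up waiting for the stalled node to respond. A self-contained sketch of the same failure mode, using a local server socket that accepts the connection but never writes (the timeout value is arbitrary and the setup is illustrative, not NiFi's replication code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeoutSketch {

    // Returns true if a read against a silent peer times out. The local
    // ServerSocket stands in for an unresponsive cluster node: it accepts
    // the TCP connection but never sends any bytes back.
    static boolean timesOut(int readTimeoutMs) {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket()) {
            client.connect(new InetSocketAddress("localhost", server.getLocalPort()), 1000);
            client.setSoTimeout(readTimeoutMs); // client-side read timeout
            client.getInputStream().read();     // blocks: the "server" never writes
            return false;
        } catch (SocketTimeoutException e) {
            return true;                        // the "Read timed out" root cause above
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(timesOut(200));
    }
}
```

In the cluster case this exception surfaces through Jersey/HttpURLConnection, gets wrapped into the IllegalClusterStateException shown above, and is what the issue proposes handling more gracefully in the UI.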
[jira] [Updated] (NIFI-2795) Enhance Cluster UI with System Diagnostics
[ https://issues.apache.org/jira/browse/NIFI-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Wing updated NIFI-2795:
-----------------------------
    Status: Patch Available  (was: In Progress)

> Enhance Cluster UI with System Diagnostics
> ------------------------------------------
>
>                 Key: NIFI-2795
>                 URL: https://issues.apache.org/jira/browse/NIFI-2795
>             Project: Apache NiFi
>          Issue Type: New Feature
>          Components: Core UI
>    Affects Versions: 1.0.0
>            Reporter: James Wing
>            Assignee: James Wing
>            Priority: Minor
>         Attachments: cluster-01-nodes.png, cluster-02-system.png, cluster-03-jvm.png, cluster-04-flowfile-store.png, cluster-05-content-store.png
>
> The Cluster UI currently provides some basic information on each node in the
> cluster and options for connecting and disconnecting nodes. I propose to add
> system diagnostics information in tables, contained in multiple tabs.
> Roughly, the tabs should cover the same content as the System Diagnostics
> dialog already in the System UI, but in a tabular format for comparing across
> nodes.
[jira] [Assigned] (NIFI-2795) Enhance Cluster UI with System Diagnostics
[ https://issues.apache.org/jira/browse/NIFI-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Wing reassigned NIFI-2795:
--------------------------------
    Assignee: James Wing
[jira] [Updated] (NIFI-2795) Enhance Cluster UI with System Diagnostics
[ https://issues.apache.org/jira/browse/NIFI-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Wing updated NIFI-2795: - Attachment: cluster-05-content-store.png cluster-04-flowfile-store.png cluster-03-jvm.png cluster-02-system.png cluster-01-nodes.png Screenshots of proposed cluster UI
[jira] [Commented] (NIFI-2795) Enhance Cluster UI with System Diagnostics
[ https://issues.apache.org/jira/browse/NIFI-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510474#comment-15510474 ] ASF GitHub Bot commented on NIFI-2795: -- GitHub user jvwing opened a pull request: https://github.com/apache/nifi/pull/1042 NIFI-2795 Sys Diagnostics in Cluster UI Reworked the existing cluster UI to provide tabs containing multiple data tables. Added views for System, JVM, FlowFile Storage and Content Storage diagnostics. This is a UI-only change built on top of the existing System Diagnostics API. You can merge this pull request into a Git repository by running: $ git pull https://github.com/jvwing/nifi NIFI-2795-cluster-ui-sysdiag-3 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1042.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1042 commit 596d4e933a9547a6af3358840bf5a0fbf762d6da Author: James Wing Date: 2016-09-19T01:06:12Z NIFI-2795 Sys Diagnostics in Cluster UI
[jira] [Commented] (NIFI-2756) nifi-processor-bundle-archetype lacks displayName property
[ https://issues.apache.org/jira/browse/NIFI-2756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510380#comment-15510380 ] ASF GitHub Bot commented on NIFI-2756: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1004#discussion_r79869645 --- Diff: nifi-maven-archetypes/nifi-processor-bundle-archetype/src/main/resources/archetype-resources/nifi-__artifactBaseName__-processors/src/main/java/MyProcessor.java --- @@ -41,7 +51,8 @@ public class MyProcessor extends AbstractProcessor { public static final PropertyDescriptor MY_PROPERTY = new PropertyDescriptor -.Builder().name("My Property") +.Builder().name("MY_PROPERTY") --- End diff -- Not that it matters, but most of the time I see machine-friendly names, they look like "my-processor-my-property" vs "MY_PROPERTY". Since it's a placeholder here, no big deal, just sharing :) > nifi-processor-bundle-archetype lacks displayName property > -- > > Key: NIFI-2756 > URL: https://issues.apache.org/jira/browse/NIFI-2756 > Project: Apache NiFi > Issue Type: Bug >Reporter: Andre >Assignee: Andre > > When using {{nifi-processor-bundle-archetype}} to create a new bundle, the > resulting code lacks displayName, leading to a number of PRs and contributions > arriving at peer review without displayName configured
[jira] [Commented] (NIFI-2756) nifi-processor-bundle-archetype lacks displayName property
[ https://issues.apache.org/jira/browse/NIFI-2756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510382#comment-15510382 ] ASF GitHub Bot commented on NIFI-2756: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1004#discussion_r79869457 --- Diff: nifi-maven-archetypes/nifi-processor-bundle-archetype/src/main/resources/archetype-resources/nifi-__artifactBaseName__-processors/src/main/java/MyProcessor.java --- @@ -29,9 +28,20 @@ import org.apache.nifi.annotation.documentation.SeeAlso; import org.apache.nifi.annotation.documentation.Tags; import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.AbstractProcessor; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.ProcessorInitializationContext; +import org.apache.nifi.processor.Relationship; import org.apache.nifi.processor.util.StandardValidators; -import java.util.*; + + +import java.util.ArrayList; --- End diff -- Bit of extra whitespace here
[jira] [Commented] (NIFI-2756) nifi-processor-bundle-archetype lacks displayName property
[ https://issues.apache.org/jira/browse/NIFI-2756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510381#comment-15510381 ] ASF GitHub Bot commented on NIFI-2756: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1004#discussion_r79869437 --- Diff: nifi-maven-archetypes/nifi-processor-bundle-archetype/src/main/resources/archetype-resources/nifi-__artifactBaseName__-processors/src/main/java/MyProcessor.java --- @@ -19,7 +19,6 @@ import org.apache.nifi.components.PropertyDescriptor; import org.apache.nifi.components.PropertyValue; import org.apache.nifi.flowfile.FlowFile; -import org.apache.nifi.processor.*; --- End diff -- Thank you for removing this!! Makes an initial compile much cleaner :)
[jira] [Commented] (NIFI-2764) JdbcCommon Avro Can't Process Java Short Types
[ https://issues.apache.org/jira/browse/NIFI-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510357#comment-15510357 ] Matt Burgess commented on NIFI-2764: Shorts can be supported, but you have to specify the class in the schema: https://avro.apache.org/docs/1.7.7/api/java/org/apache/avro/reflect/package-summary.html Having said that, this solution seems fine (and is being done for other types like Byte) > JdbcCommon Avro Can't Process Java Short Types > -- > > Key: NIFI-2764 > URL: https://issues.apache.org/jira/browse/NIFI-2764 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Peter Wicks > > Microsoft SQL Server returns TINYINT values as Java Shorts. Avro is unable > to write datums of this type and throws an exception when trying to. > This currently breaks QueryDatabaseTable at the very least when querying MS > SQL Server with TINYINTs in the ResultSet.
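The reflect-based approach Matt links to records the original Java type in the schema via Avro's `java-class` property. A sketch of what such a field declaration might look like — the field name is hypothetical, and this is only an illustration of the linked mechanism, not the fix the issue ultimately took (which widens Short to int before writing):

```json
{
  "name": "tinyint_col",
  "type": ["null", {"type": "int", "java-class": "java.lang.Short"}]
}
```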
[GitHub] nifi pull request #1004: NIFI-2756 - Add displayName to maven archetypes
Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1004#discussion_r79869645 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] nifi pull request #1004: NIFI-2756 - Add displayName to maven archetypes
Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1004#discussion_r79869457
[jira] [Updated] (NIFI-2750) Add Optional Column Name Quoting to ConvertJSONToSQL
[ https://issues.apache.org/jira/browse/NIFI-2750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-2750: --- Status: Patch Available (was: Open) > Add Optional Column Name Quoting to ConvertJSONToSQL > > > Key: NIFI-2750 > URL: https://issues.apache.org/jira/browse/NIFI-2750 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Peter Wicks >Priority: Minor > > If a column name happens to also be a SQL reserved word, the SQL > generated by ConvertJSONToSQL will be invalid and will fail when it runs. > Optionally quoting column identifiers would let users keep their original > column names even when those names collide with reserved words.
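The proposed behavior — optionally wrapping each column identifier in the database's quote string so reserved words survive — can be sketched as follows. This is illustrative only: the function name, signature, and hard-coded quote character are assumptions, not ConvertJSONToSQL's actual implementation (which would obtain the real identifier quote string from the JDBC driver's metadata).

```python
def build_insert(table, columns, quote_identifiers=False, quote='"'):
    """Build a parameterized INSERT, optionally quoting column names.

    Quoting lets reserved words like SELECT or ORDER be used as
    column names without producing invalid SQL.
    """
    if quote_identifiers:
        columns = [f'{quote}{c}{quote}' for c in columns]
    placeholders = ", ".join("?" for _ in columns)
    return f'INSERT INTO {table} ({", ".join(columns)}) VALUES ({placeholders})'

# A column named "select" is a reserved word; quoting keeps the SQL valid.
print(build_insert("orders", ["id", "select"], quote_identifiers=True))
```

With quoting disabled, the same call would emit `select` bare and the statement would fail at execution time on most databases.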
[jira] [Resolved] (NIFI-1844) Cannot download new files from FTP server without re-downloading old files
[ https://issues.apache.org/jira/browse/NIFI-1844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard resolved NIFI-1844. -- Resolution: Fixed Fix Version/s: 1.1.0 > Cannot download new files from FTP server without re-downloading old files > -- > > Key: NIFI-1844 > URL: https://issues.apache.org/jira/browse/NIFI-1844 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Subhash Sriram > Fix For: 1.1.0 > > > Hello, > It appears that currently, there is no way to download files from a FTP > server, and keep track of what files have been downloaded in NiFi. > For example, I am trying to get files from a third party FTP server, from > which I cannot download the original, and it does not support SFTP. > When I use the GetFTP processor, it downloads the same files over and over, > and I cannot use the ListSFTP processor. > It would be great if there was a ListFTP/FetchFTP processor that could allow > someone to acquire only the files from a FTP server that have not yet been > downloaded. > Thank you very much!
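The list/fetch pattern the reporter asks for avoids re-downloads by persisting the newest modification time seen so far and only emitting entries newer than it. A rough sketch of that idea — names and the state shape are illustrative, not the actual ListFTP implementation:

```python
def list_new_entries(entries, state):
    """Return only entries modified after the persisted timestamp.

    `entries` is a list of (name, mtime) pairs from the FTP server;
    `state` is a dict persisting the newest mtime already listed.
    """
    last_seen = state.get("last_mtime", 0)
    new = [e for e in entries if e[1] > last_seen]
    if new:
        state["last_mtime"] = max(m for _, m in new)
    return new

state = {}
first = list_new_entries([("a.csv", 100), ("b.csv", 200)], state)   # both listed
second = list_new_entries([("a.csv", 100), ("b.csv", 200)], state)  # nothing new
```

In NiFi the equivalent state lives in the processor's state management rather than an in-memory dict, which is what lets ListFTP survive restarts without re-listing old files.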
[GitHub] nifi issue #881: Added ListFTP and FetchFTP processors
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/881 Visually checked the code (highly similar to equivalent SFTP processors). Full build with contrib check. Ran a basic workflow to list and download data from a public FTP server. LGTM +1. Merging to master. Thanks for your contribution @ndup-git ! (and apologies for reviewing so late)
[jira] [Commented] (NIFI-1844) Cannot download new files from FTP server without re-downloading old files
[ https://issues.apache.org/jira/browse/NIFI-1844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510325#comment-15510325 ] ASF GitHub Bot commented on NIFI-1844: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/881
[jira] [Commented] (NIFI-1844) Cannot download new files from FTP server without re-downloading old files
[ https://issues.apache.org/jira/browse/NIFI-1844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510323#comment-15510323 ] ASF subversion and git services commented on NIFI-1844: --- Commit be83c0c5b2b8a435b4745cbfc43f7c9251561727 in nifi's branch refs/heads/master from [~ndup-apache] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=be83c0c ] NIFI-1844 - Added ListFTP and FetchFTP processors This closes #881.
[GitHub] nifi pull request #881: Added ListFTP and FetchFTP processors
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/881
[jira] [Updated] (NIFI-2749) maxvalue Attributes Added by QueryDatabaseTable to FlowFiles May be Incorrect
[ https://issues.apache.org/jira/browse/NIFI-2749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-2749: --- Resolution: Fixed Status: Resolved (was: Patch Available) > maxvalue Attributes Added by QueryDatabaseTable to FlowFiles May be Incorrect > - > > Key: NIFI-2749 > URL: https://issues.apache.org/jira/browse/NIFI-2749 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Peter Wicks >Priority: Minor > Fix For: 1.1.0 > > > NIFI-2641 caused maxvalue attributes to be placed onto FlowFiles generated by > QueryDatabaseTable. However if QDB is using MAX_ROWS_PER_FLOW_FILE to split > the ResultSet into multiple FlowFiles then the value attached to the FlowFile > will just be the maximum seen so far, and not necessarily the final maximum.
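The bug described above comes down to stamping each chunk with a running maximum instead of the final one. A minimal illustration of the corrected behavior — compute the maximum across the whole result set, then apply it to every FlowFile. Names here are illustrative, not QueryDatabaseTable's internals:

```python
def split_with_maxvalue(rows, max_rows_per_file):
    """Split rows into chunks and stamp each with the overall maximum.

    Stamping the running maximum (max of rows seen so far) was the
    bug: earlier chunks carried a value that was not yet final.
    """
    chunks = [rows[i:i + max_rows_per_file]
              for i in range(0, len(rows), max_rows_per_file)]
    overall_max = max(rows)  # known only after the full result set is read
    return [(chunk, overall_max) for chunk in chunks]

# Every chunk carries 9, not the per-chunk running maximum (3, 6, 9).
print(split_with_maxvalue([1, 2, 3, 4, 5, 6, 7, 8, 9], 3))
```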
[jira] [Commented] (NIFI-2749) maxvalue Attributes Added by QueryDatabaseTable to FlowFiles May be Incorrect
[ https://issues.apache.org/jira/browse/NIFI-2749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510261#comment-15510261 ] ASF GitHub Bot commented on NIFI-2749: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/997
[GitHub] nifi pull request #997: NIFI-2749
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/997
[jira] [Commented] (NIFI-2749) maxvalue Attributes Added by QueryDatabaseTable to FlowFiles May be Incorrect
[ https://issues.apache.org/jira/browse/NIFI-2749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510258#comment-15510258 ] ASF subversion and git services commented on NIFI-2749: --- Commit 938c7cccb8b251d4a1390cf0078b14f53bc94a57 in nifi's branch refs/heads/master from [~patricker] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=938c7cc ] NIFI-2749 Signed-off-by: Matt Burgess This closes #997
[jira] [Commented] (NIFI-2749) maxvalue Attributes Added by QueryDatabaseTable to FlowFiles May be Incorrect
[ https://issues.apache.org/jira/browse/NIFI-2749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510255#comment-15510255 ] ASF GitHub Bot commented on NIFI-2749: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/997 +1 LGTM, verified the problem exists, the unit tests show the problem, and the fix corrects the problem. Merging to master.
[jira] [Updated] (NIFI-2749) maxvalue Attributes Added by QueryDatabaseTable to FlowFiles May be Incorrect
[ https://issues.apache.org/jira/browse/NIFI-2749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-2749: --- Fix Version/s: 1.1.0
[GitHub] nifi issue #997: NIFI-2749
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/997
[jira] [Commented] (NIFI-1893) Add processor for validating JSON
[ https://issues.apache.org/jira/browse/NIFI-1893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510144#comment-15510144 ] ASF GitHub Bot commented on NIFI-1893: -- Github user bartoszjkwozniak commented on a diff in the pull request: https://github.com/apache/nifi/pull/1037#discussion_r79849248 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestValidateJson.java --- @@ -0,0 +1,79 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.processors.standard; + +import java.io.IOException; +import java.nio.file.Paths; + +import org.apache.commons.io.IOUtils; + +import org.apache.nifi.util.TestRunner; +import org.apache.nifi.util.TestRunners; + +import org.junit.Test; +import org.xml.sax.SAXException; + +public class TestValidateJson { --- End diff -- Sure thing, added > Add processor for validating JSON > - > > Key: NIFI-1893 > URL: https://issues.apache.org/jira/browse/NIFI-1893 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Matt Burgess > > NiFi has a ValidateXml processor to validate incoming XML files against a > schema. It would be good to have one to validate JSON files as well. 
> For example, an input JSON of: > { > name: "Test", > timestamp: 1463499695, > tags: { >"host": "Test_1", >"ip" : "1.1.1.1" > }, > fields: { > "cpu": 10.2, > "load": 15.6 > } > } > Could be validated successfully against the following "schema": > { > "type": "object", > "required": ["name", "tags", "timestamp", "fields"], > "properties": { > "name": {"type": "string"}, > "timestamp": {"type": "integer"}, > "tags": {"type": "object", "items": {"type": "string"}}, > "fields": { "type": "object"} > } > } > There is at least one ASF-friendly library that could be used for > implementation: https://github.com/everit-org/json-schema
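The behavior the issue asks for — routing a FlowFile to valid/invalid depending on whether its JSON matches a schema — can be approximated with a stdlib-only check of the example schema's `required` and per-property `type` constraints. The real PR uses the everit json-schema library for full JSON Schema support; this is just an illustration of what validation against that schema means:

```python
import json

# Map JSON Schema type names to the Python types json.loads produces.
TYPE_MAP = {"object": dict, "string": str, "integer": int}

def validate(document, schema):
    """Check `required` keys and per-property `type` constraints."""
    for key in schema.get("required", []):
        if key not in document:
            return False
    for key, spec in schema.get("properties", {}).items():
        if key in document and not isinstance(document[key], TYPE_MAP[spec["type"]]):
            return False
    return True

schema = json.loads("""{
  "type": "object",
  "required": ["name", "tags", "timestamp", "fields"],
  "properties": {
    "name": {"type": "string"},
    "timestamp": {"type": "integer"},
    "tags": {"type": "object"},
    "fields": {"type": "object"}
  }
}""")

doc = {"name": "Test", "timestamp": 1463499695,
       "tags": {"host": "Test_1", "ip": "1.1.1.1"},
       "fields": {"cpu": 10.2, "load": 15.6}}
print(validate(doc, schema))  # → True
```

Dropping a required key such as `timestamp`, or giving `name` a non-string value, makes the check return False — which is exactly the split between the processor's valid and invalid relationships.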
[jira] [Commented] (NIFI-1893) Add processor for validating JSON
[ https://issues.apache.org/jira/browse/NIFI-1893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510145#comment-15510145 ] ASF GitHub Bot commented on NIFI-1893: -- Github user bartoszjkwozniak commented on a diff in the pull request: https://github.com/apache/nifi/pull/1037#discussion_r79849231 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateJson.java --- @@ -0,0 +1,202 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package org.apache.nifi.processors.standard; + +import java.io.File; +import java.io.FileInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.Map; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicReference; + +import org.apache.commons.io.IOUtils; +import org.apache.nifi.annotation.behavior.EventDriven; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.InputRequirement.Requirement; +import org.apache.nifi.annotation.behavior.SideEffectFree; +import org.apache.nifi.annotation.behavior.SupportsBatching; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnScheduled; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.components.ValidationContext; +import org.apache.nifi.components.ValidationResult; +import org.apache.nifi.components.Validator; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.logging.ComponentLog; +import org.apache.nifi.processor.AbstractProcessor; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.ProcessorInitializationContext; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.io.InputStreamCallback; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.util.StringUtils; + +import org.everit.json.schema.Schema; +import org.everit.json.schema.ValidationException; +import org.everit.json.schema.loader.SchemaLoader; +import org.json.JSONArray; +import org.json.JSONObject; +import org.json.JSONTokener; + +
+@EventDriven +@SideEffectFree +@SupportsBatching +@InputRequirement(Requirement.INPUT_REQUIRED) +@Tags({"json", "schema", "validation"}) +@CapabilityDescription("Validates the contents of FlowFiles against a user-specified JSON Schema file") +public class ValidateJson extends AbstractProcessor { + +public static final PropertyDescriptor SCHEMA_FILE = new PropertyDescriptor.Builder() +.name("validate-json-schema-file") +.displayName("Schema File") +.description("The path to the Schema file that is to be used for validation. Only one of Schema File or Schema Body may be used") +.required(false) +.addValidator(StandardValidators.FILE_EXISTS_VALIDATOR) +.build(); + +public static final PropertyDescriptor SCHEMA_BODY = new PropertyDescriptor.Builder() +.name("validate-json-schema-body") +.displayName("Schema Body") +.required(false) +.description("Json Schema Body that is to be used for validation. Only one of Schema File or Schema Body may be used") +.expressionLanguageSupported(false) +.addValidator(Validator.VALID) +.build(); + +public static final Relationship REL_VALID = new
[jira] [Commented] (NIFI-2797) Authorization header not submitted when clicking Download from Templates window
[ https://issues.apache.org/jira/browse/NIFI-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510126#comment-15510126 ] ASF subversion and git services commented on NIFI-2797: --- Commit e10b4beb9062c65e86a19ae16b6636768ac29bde in nifi's branch refs/heads/master from [~mcgilman] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=e10b4be ] NIFI-2797: - Correcting download URIs for OTPs. This closes #1038. Signed-off-by: Bryan Bende > Authorization header not submitted when clicking Download from Templates > window > --- > > Key: NIFI-2797 > URL: https://issues.apache.org/jira/browse/NIFI-2797 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.0.0 >Reporter: Scott Wagner >Assignee: Matt Gilman > Fix For: 1.1.0 > > > I am running on a standalone instance of Apache NiFi. It is configured to > use a local LDAP server for authentication, and I am logging in as a user > with full permissions. > When browsing the templates, and I click on the "Download" link, a new tab is > opened in the browser with the error message {{Unable to perform the > desired action due to insufficient permissions. Contact the system > administrator.}} > Checking the link that is submitted via developer tools, I noticed that the > Authorization header is not being submitted. If I use curl to get the URL > that the browser is trying to get but submit an Authorization header for my > valid session, I am able to download the template XML.
[GitHub] nifi pull request #1037: NIFI-1893 Add processor for validating JSON
Github user bartoszjkwozniak commented on a diff in the pull request: https://github.com/apache/nifi/pull/1037#discussion_r79849248 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestValidateJson.java --- @@ -0,0 +1,79 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.processors.standard; + +import java.io.IOException; +import java.nio.file.Paths; + +import org.apache.commons.io.IOUtils; + +import org.apache.nifi.util.TestRunner; +import org.apache.nifi.util.TestRunners; + +import org.junit.Test; +import org.xml.sax.SAXException; + +public class TestValidateJson { --- End diff -- Sure thing, added --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] nifi pull request #1037: NIFI-1893 Add processor for validating JSON
Github user bartoszjkwozniak commented on a diff in the pull request: https://github.com/apache/nifi/pull/1037#discussion_r79849231 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateJson.java --- @@ -0,0 +1,202 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package org.apache.nifi.processors.standard;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.StringUtils;
+
+import org.everit.json.schema.Schema;
+import org.everit.json.schema.ValidationException;
+import org.everit.json.schema.loader.SchemaLoader;
+import org.json.JSONArray;
+import org.json.JSONObject;
+import org.json.JSONTokener;
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"json", "schema", "validation"})
+@CapabilityDescription("Validates the contents of FlowFiles against a user-specified JSON Schema file")
+public class ValidateJson extends AbstractProcessor {
+
+    public static final PropertyDescriptor SCHEMA_FILE = new PropertyDescriptor.Builder()
+            .name("validate-json-schema-file")
+            .displayName("Schema File")
+            .description("The path to the Schema file that is to be used for validation. Only one of Schema File or Schema Body may be used")
+            .required(false)
+            .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor SCHEMA_BODY = new PropertyDescriptor.Builder()
+            .name("validate-json-schema-body")
+            .displayName("Schema Body")
+            .required(false)
+            .description("Json Schema Body that is to be used for validation. Only one of Schema File or Schema Body may be used")
+            .expressionLanguageSupported(false)
+            .addValidator(Validator.VALID)
+            .build();
+
+    public static final Relationship REL_VALID = new Relationship.Builder()
+            .name("valid")
+            .description("FlowFiles that are successfully validated against the schema are routed to this relationship")
+            .build();
+
+    public static final
[jira] [Commented] (NIFI-1893) Add processor for validating JSON
[ https://issues.apache.org/jira/browse/NIFI-1893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510143#comment-15510143 ] ASF GitHub Bot commented on NIFI-1893: -- Github user bartoszjkwozniak commented on a diff in the pull request: https://github.com/apache/nifi/pull/1037#discussion_r79849108 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/pom.xml --- @@ -249,6 +249,11 @@ language governing permissions and limitations under the License. --> super-csv 2.4.0 + +org.everit.json +org.everit.json.schema +1.4.0 + --- End diff -- ahh, I see now. Corrected. > Add processor for validating JSON > - > > Key: NIFI-1893 > URL: https://issues.apache.org/jira/browse/NIFI-1893 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Matt Burgess > > NiFi has a ValidateXml processor to validate incoming XML files against a > schema. It would be good to have one to validate JSON files as well. > For example, an input JSON of: > { > name: "Test", > timestamp: 1463499695, > tags: { >"host": "Test_1", >"ip" : "1.1.1.1" > }, > fields: { > "cpu": 10.2, > "load": 15.6 > } > } > Could be validated successfully against the following "schema": > { > "type": "object", > "required": ["name", "tags", "timestamp", "fields"], > "properties": { > "name": {"type": "string"}, > "timestamp": {"type": "integer"}, > "tags": {"type": "object", "items": {"type": "string"}}, > "fields": { "type": "object"} > } > } > There is at least one ASF-friendly library that could be used for > implementation: https://github.com/everit-org/json-schema -- This message was sent by Atlassian JIRA (v6.3.4#6332)
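The processor in PR #1037 delegates validation to the everit-org/json-schema library named in the ticket. As a stdlib-only sketch (not the library's API), the following checks just the "required" list and per-property "type" that the ticket's example schema expresses, using the ticket's own example document:

```python
import json

# Map JSON Schema type names to Python types. This sketch covers only the
# types used in the ticket's example; the real processor supports full
# JSON Schema semantics via the everit-org/json-schema library.
TYPE_MAP = {"object": dict, "string": str, "integer": int, "number": (int, float)}

def validate(document, schema):
    """Return a list of error strings; empty means the document is valid."""
    errors = []
    for key in schema.get("required", []):
        if key not in document:
            errors.append("missing required property: " + key)
    for key, spec in schema.get("properties", {}).items():
        if key in document and "type" in spec:
            if not isinstance(document[key], TYPE_MAP[spec["type"]]):
                errors.append("wrong type for property: " + key)
    return errors

# The "schema" from the ticket (keys quoted to make it valid JSON).
schema = json.loads("""{
  "type": "object",
  "required": ["name", "tags", "timestamp", "fields"],
  "properties": {
    "name": {"type": "string"},
    "timestamp": {"type": "integer"},
    "tags": {"type": "object"},
    "fields": {"type": "object"}
  }
}""")

# The example input document from the ticket.
doc = {"name": "Test", "timestamp": 1463499695,
       "tags": {"host": "Test_1", "ip": "1.1.1.1"},
       "fields": {"cpu": 10.2, "load": 15.6}}

print(validate(doc, schema))          # the valid document produces no errors
print(validate({"name": 1}, schema))  # missing keys plus a type mismatch
```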
[GitHub] nifi pull request #1039: Correcting documentation for ExecuteFlumeSink
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1039
[jira] [Updated] (NIFI-2788) Cluster icon in the menu doesn't have a consistent size
[ https://issues.apache.org/jira/browse/NIFI-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Scott Aslan updated NIFI-2788: -- Status: Patch Available (was: In Progress) https://github.com/apache/nifi/pull/1041 > Cluster icon in the menu doesn't have a consistent size > --- > > Key: NIFI-2788 > URL: https://issues.apache.org/jira/browse/NIFI-2788 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.0.0 > Environment: OSX, Chrome >Reporter: Andrew Grande >Assignee: Scott Aslan >Priority: Trivial > Attachments: screenshot.png > > > The global menu cluster icon has incorrect size, the Cluster menu item is > shifted to the right as a result. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (NIFI-2797) Authorization header not submitted when clicking Download from Templates window
[ https://issues.apache.org/jira/browse/NIFI-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende updated NIFI-2797: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Authorization header not submitted when clicking Download from Templates > window > --- > > Key: NIFI-2797 > URL: https://issues.apache.org/jira/browse/NIFI-2797 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.0.0 >Reporter: Scott Wagner >Assignee: Matt Gilman > Fix For: 1.1.0 > > > I am running on a standalone instance of Apache NiFi. It is configured to > use a local LDAP server for authentication, and I am logging in as a user > with full permissions. > When browsing the templates, and I click on the "Download" link, a new tab is > opened in the browser but the error message of {{Unable to perform the > desired action due to insufficient permissions. Contact the system > administrator.}} > Checking the link that is submitted via developer tools, I noticed that the > Authorization header is not being submitted. If I use curl to get the URL > that the browser is trying to get but submit an Authorization header for my > valid session, I am able to download the template XML. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi pull request #1041: [NIFI-2788] update global hamburger menu to have fi...
GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi/pull/1041 [NIFI-2788] update global hamburger menu to have fixed width icons an… …d align text You can merge this pull request into a Git repository by running: $ git pull https://github.com/scottyaslan/nifi NIFI-2788 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1041.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1041 commit d510e33ae94306d028b092011e1bed113c763dd7 Author: Scott Aslan Date: 2016-09-21T13:10:57Z [NIFI-2788] update global hamburger menu to have fixed width icons and align text
[jira] [Commented] (NIFI-2788) Cluster icon in the menu doesn't have a consistent size
[ https://issues.apache.org/jira/browse/NIFI-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510131#comment-15510131 ] ASF GitHub Bot commented on NIFI-2788: -- GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi/pull/1041 [NIFI-2788] update global hamburger menu to have fixed width icons an… …d align text You can merge this pull request into a Git repository by running: $ git pull https://github.com/scottyaslan/nifi NIFI-2788 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1041.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1041 commit d510e33ae94306d028b092011e1bed113c763dd7 Author: Scott AslanDate: 2016-09-21T13:10:57Z [NIFI-2788] update global hamburger menu to have fixed width icons and align text > Cluster icon in the menu doesn't have a consistent size > --- > > Key: NIFI-2788 > URL: https://issues.apache.org/jira/browse/NIFI-2788 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.0.0 > Environment: OSX, Chrome >Reporter: Andrew Grande >Assignee: Scott Aslan >Priority: Trivial > Attachments: screenshot.png > > > The global menu cluster icon has incorrect size, the Cluster menu item is > shifted to the right as a result. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (NIFI-2792) Removal of a Template does not save the flow.
[ https://issues.apache.org/jira/browse/NIFI-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-2792: - Resolution: Fixed Status: Resolved (was: Patch Available) > Removal of a Template does not save the flow. > - > > Key: NIFI-2792 > URL: https://issues.apache.org/jira/browse/NIFI-2792 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Matt Gilman >Assignee: Matt Gilman >Priority: Critical > Fix For: 1.1.0 > > > When removing a template, the flow is not saved. Consequently, if no other > actions are taken prior to shutting down the template will be reloaded upon > the next restart. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi pull request #1031: Saving flow after Template removal
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1031
[jira] [Commented] (NIFI-2799) AWS Credentials for Assume Role Need Proxy
[ https://issues.apache.org/jira/browse/NIFI-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510116#comment-15510116 ] James Wing commented on NIFI-2799: -- I was not able to assign it to you, but please proceed. I think you need a permissions or membership adjustment. > AWS Credentials for Assume Role Need Proxy > -- > > Key: NIFI-2799 > URL: https://issues.apache.org/jira/browse/NIFI-2799 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.0.0 >Reporter: Keren Tseytlin >Assignee: James Wing >Priority: Minor > Fix For: 1.1.0 > > > As a user of Nifi, when I want to enable cross account fetching of S3 objects > I need the proxy variables to be set in order to generate temporary AWS > tokens for STS:AssumeRole. > Within some enterprise environments, it is necessary to set the proxy > variables prior to running AssumeRole methods. Without this being set, the > machine in VPC A times out on generating temporary keys and is unable to > assume a role as a machine in VPC B. > This ticket arose from this conversation: > http://apache-nifi-developer-list.39713.n7.nabble.com/Nifi-Cross-Account-Download-With-A-Profile-Flag-td13232.html#a13252 > Goal: There are files stored in an S3 bucket in VPC B. My Nifi machines are > in VPC A. I want Nifi to be able to get those files from VPC B. VPC A and VPC > B need to be communicating in the FetchS3Object component. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
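The fix requested here is to let the AWS SDK's STS AssumeRole call go through a corporate proxy. One common mechanism (a sketch; the proxy host and port below are placeholders) is process-level proxy configuration via the `HTTPS_PROXY` environment variable, which both Python's urllib and botocore consult by default:

```python
import os
import urllib.request

# Hypothetical corporate proxy (placeholder values). Setting HTTPS_PROXY
# before the STS AssumeRole call lets the token request leave VPC A;
# without it, the call times out as described in the ticket.
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"

# urllib's proxy discovery now reports the configured proxy; botocore
# performs the same environment lookup when building its HTTP client.
print(urllib.request.getproxies()["https"])
```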
[GitHub] nifi issue #1031: Saving flow after Template removal
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/1031 LGTM +1. Merging into master.
[jira] [Assigned] (NIFI-2799) AWS Credentials for Assume Role Need Proxy
[ https://issues.apache.org/jira/browse/NIFI-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Wing reassigned NIFI-2799: Assignee: James Wing > AWS Credentials for Assume Role Need Proxy > -- > > Key: NIFI-2799 > URL: https://issues.apache.org/jira/browse/NIFI-2799 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.0.0 >Reporter: Keren Tseytlin >Assignee: James Wing >Priority: Minor > Fix For: 1.1.0 > > > As a user of Nifi, when I want to enable cross account fetching of S3 objects > I need the proxy variables to be set in order to generate temporary AWS > tokens for STS:AssumeRole. > Within some enterprise environments, it is necessary to set the proxy > variables prior to running AssumeRole methods. Without this being set, the > machine in VPC A times out on generating temporary keys and is unable to > assume a role as a machine in VPC B. > This ticket arose from this conversation: > http://apache-nifi-developer-list.39713.n7.nabble.com/Nifi-Cross-Account-Download-With-A-Profile-Flag-td13232.html#a13252 > Goal: There are files stored in an S3 bucket in VPC B. My Nifi machines are > in VPC A. I want Nifi to be able to get those files from VPC B. VPC A and VPC > B need to be communicating in the FetchS3Object component. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-2794) Expose a getOneFromEachConnection method in ProcessSession
[ https://issues.apache.org/jira/browse/NIFI-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510102#comment-15510102 ] ASF GitHub Bot commented on NIFI-2794: -- Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/1032 Closing as per discussion in JIRA. > Expose a getOneFromEachConnection method in ProcessSession > -- > > Key: NIFI-2794 > URL: https://issues.apache.org/jira/browse/NIFI-2794 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Minor > Attachments: image.png > > > In case a processor has multiple incoming connections, it could be > interesting to expose a method allowing users to get exactly one flow file > per incoming connection in a row. > That could unlock opportunities such as performing join operations on > datasets. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-2794) Expose a getOneFromEachConnection method in ProcessSession
[ https://issues.apache.org/jira/browse/NIFI-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510103#comment-15510103 ] ASF GitHub Bot commented on NIFI-2794: -- Github user pvillard31 closed the pull request at: https://github.com/apache/nifi/pull/1032 > Expose a getOneFromEachConnection method in ProcessSession > -- > > Key: NIFI-2794 > URL: https://issues.apache.org/jira/browse/NIFI-2794 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Minor > Attachments: image.png > > > In case a processor has multiple incoming connections, it could be > interesting to expose a method allowing users to get exactly one flow file > per incoming connection in a row. > That could unlock opportunities such as performing join operations on > datasets. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi pull request #1032: NIFI-2794 - Expose a getOneFromEachConnection metho...
Github user pvillard31 closed the pull request at: https://github.com/apache/nifi/pull/1032
[GitHub] nifi issue #1032: NIFI-2794 - Expose a getOneFromEachConnection method in Pr...
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/1032 Closing as per discussion in JIRA.
[jira] [Resolved] (NIFI-2794) Expose a getOneFromEachConnection method in ProcessSession
[ https://issues.apache.org/jira/browse/NIFI-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard resolved NIFI-2794. -- Resolution: Won't Fix Closing at "Won't fix". Thanks for your insights! > Expose a getOneFromEachConnection method in ProcessSession > -- > > Key: NIFI-2794 > URL: https://issues.apache.org/jira/browse/NIFI-2794 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Minor > Attachments: image.png > > > In case a processor has multiple incoming connections, it could be > interesting to expose a method allowing users to get exactly one flow file > per incoming connection in a row. > That could unlock opportunities such as performing join operations on > datasets. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi issue #1024: Adding EL support to TailFile processor
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/1024 @apsaltis I believe this is covered by #980. It is now possible to use wildcards to specify files to tail and a periodic time to look for new files to tail. If you think this does not match your need, you will have to rebase against master. Let me know if you have questions on the new features.
[GitHub] nifi issue #1039: Correcting documentation for ExecuteFlumeSink
Github user olegz commented on the issue: https://github.com/apache/nifi/pull/1039 Merging. . .
[jira] [Commented] (NIFI-2794) Expose a getOneFromEachConnection method in ProcessSession
[ https://issues.apache.org/jira/browse/NIFI-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510074#comment-15510074 ] Oleg Zhurakousky commented on NIFI-2794: Had an off-line discussion with Pierre but, to stay true to the ASF world, below is the summary of it. Based on our discussion we both realized that the use case Pierre was trying to address falls into the category of basic aggregation and, as Pierre already pointed out, it is tracked by several issues already: NIFI-2590, NIFI-1926, NIFI-2735. So I think it is fair to close this issue as won't fix after linking it to any of the ones mentioned above, especially NIFI-1926 since it already attempts to collect a variety of use cases. Also, linking [~mattyb149] since he is involved in the overall effort as well. > Expose a getOneFromEachConnection method in ProcessSession > -- > > Key: NIFI-2794 > URL: https://issues.apache.org/jira/browse/NIFI-2794 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Minor > Attachments: image.png > > > In case a processor has multiple incoming connections, it could be > interesting to expose a method allowing users to get exactly one flow file > per incoming connection in a row. > That could unlock opportunities such as performing join operations on > datasets. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
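Although NIFI-2794 was closed in favor of the broader aggregation work, the proposed semantics are simple to state: return at most one FlowFile from each incoming connection per call, enabling join-style operations. A hedged sketch with plain queues standing in for connections (names are illustrative, not NiFi API):

```python
from collections import deque

def get_one_from_each_connection(connections):
    """Pop at most one item from each non-empty queue.

    `connections` maps a connection name to a deque of queued flowfiles;
    this mimics the proposed ProcessSession.getOneFromEachConnection(),
    which would pair up records across datasets for a join.
    """
    batch = {}
    for name, queue in connections.items():
        if queue:
            batch[name] = queue.popleft()
    return batch

connections = {
    "datasetA": deque(["a1", "a2"]),
    "datasetB": deque(["b1"]),
    "datasetC": deque(),  # an empty connection contributes nothing
}
# First call pairs a1 with b1; datasetC is skipped.
print(get_one_from_each_connection(connections))
```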
[jira] [Resolved] (NIFI-1170) TailFile "File to Tail" property should support Wildcards
[ https://issues.apache.org/jira/browse/NIFI-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oleg Zhurakousky resolved NIFI-1170. Resolution: Fixed > TailFile "File to Tail" property should support Wildcards > - > > Key: NIFI-1170 > URL: https://issues.apache.org/jira/browse/NIFI-1170 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 0.4.0 >Reporter: Andre >Assignee: Pierre Villard > Fix For: 1.1.0 > > > Because of challenges around log rotation of high-volume syslog and app > producers, it is customary for logging platform developers to promote > variable-based file names such as DynaFiles (rsyslog) or Macros (syslog-ng) as > alternatives to getting SIGHUPs sent to the syslog daemon upon every > file rotation. > (To a certain extent, even NiFi has similar patterns, for example > when one uses Expression Language to set the PutHDFS destination file). > The current TailFile strategy suggests rotation patterns like: > {code} > log_folder/app.log > log_folder/app.log.1 > log_folder/app.log.2 > log_folder/app.log.3 > {code} > It is possible to fool the system into accepting wildcards by simply using a > strategy like: > {code} > log_folder/test1 > log_folder/server1 > log_folder/server2 > log_folder/server3 > {code} > and configuring *Rolling Filename Pattern* to *, but it feels like a hack > rather than catering to an increasingly prevalent use case > (DynaFile/macros/etc). > It would be great if, instead, TailFile had the ability to use wildcards in > the File to Tail property -- This message was sent by Atlassian JIRA (v6.3.4#6332)
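The merged improvement (#980) lets File to Tail take a wildcard pattern. The matching semantics users expect are shell-style globbing, sketched here against the ticket's own example layout (the processor's actual implementation differs; this only illustrates the matching idea):

```python
import fnmatch

# Candidate files from the ticket's example layout, and the wildcard a
# user would like to put in the "File to Tail" property.
files = ["log_folder/test1", "log_folder/server1",
         "log_folder/server2", "log_folder/server3"]
pattern = "log_folder/server*"

# fnmatch implements shell-style wildcards: "server*" picks up the three
# server files while leaving "test1" alone.
matched = [f for f in files if fnmatch.fnmatch(f, pattern)]
print(matched)
```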
[GitHub] nifi pull request #980: NIFI-1170 - Improved TailFile processor to support m...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/980
[jira] [Updated] (NIFI-1959) TailFile not ingesting data when tailed file is moved with no rolling pattern
[ https://issues.apache.org/jira/browse/NIFI-1959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-1959: - Resolution: Fixed Fix Version/s: 1.1.0 Status: Resolved (was: Patch Available) > TailFile not ingesting data when tailed file is moved with no rolling pattern > - > > Key: NIFI-1959 > URL: https://issues.apache.org/jira/browse/NIFI-1959 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 0.6.1 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Minor > Fix For: 1.1.0 > > > In case "Rolling Filename Pattern" is not set by user and if the tailed file > is moved, then the new file will not be tailed. > Besides, in such case, the processor will be endlessly triggered without > ingesting data: it creates a lot of tasks and consumes CPU. The reason is it > never goes in if statement L448. > A solution is to look at size() and lastUpdated() of the tailed file to > detect a "rollover". However it won't allow the processor to ingest the > potential data added in the tailed file just before being moved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
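The ticket's proposed fix compares the tailed file's size() and lastUpdated() against the last-read state to detect a rollover. A hedged sketch of one plausible reading of that heuristic (the function and its parameters are illustrative, not the processor's actual fields):

```python
def rollover_suspected(bytes_read, last_modified, current_size, current_modified):
    """Heuristic from the ticket: decide whether the tailed file was
    moved/rotated, so tailing should restart from offset 0."""
    # The file shrank below the position we had already read to:
    # almost certainly a new file under the same name.
    if current_size < bytes_read:
        return True
    # Same length but a newer timestamp: the content may have been
    # replaced wholesale, so treat that as a rollover too.
    if current_size == bytes_read and current_modified > last_modified:
        return True
    return False

# After reading 1000 bytes, the file now holds only 200: it was rotated.
print(rollover_suspected(1000, 1474400000, 200, 1474400060))   # True
# The file simply grew: keep tailing from the stored offset.
print(rollover_suspected(1000, 1474400000, 1500, 1474400060))  # False
```

As the ticket notes, even this check cannot recover data appended to the old file just before it was moved; it only stops the endless no-op triggering.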
[jira] [Commented] (NIFI-1959) TailFile not ingesting data when tailed file is moved with no rolling pattern
[ https://issues.apache.org/jira/browse/NIFI-1959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510066#comment-15510066 ] ASF GitHub Bot commented on NIFI-1959: -- Github user pvillard31 closed the pull request at: https://github.com/apache/nifi/pull/490 > TailFile not ingesting data when tailed file is moved with no rolling pattern > - > > Key: NIFI-1959 > URL: https://issues.apache.org/jira/browse/NIFI-1959 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 0.6.1 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Minor > > In case "Rolling Filename Pattern" is not set by user and if the tailed file > is moved, then the new file will not be tailed. > Besides, in such case, the processor will be endlessly triggered without > ingesting data: it creates a lot of tasks and consumes CPU. The reason is it > never goes in if statement L448. > A solution is to look at size() and lastUpdated() of the tailed file to > detect a "rollover". However it won't allow the processor to ingest the > potential data added in the tailed file just before being moved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi pull request #490: NIFI-1959 Added length and timestamp to detect rollo...
Github user pvillard31 closed the pull request at: https://github.com/apache/nifi/pull/490
[GitHub] nifi issue #490: NIFI-1959 Added length and timestamp to detect rollover
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/490 Closing - implemented with #980
[jira] [Commented] (NIFI-1959) TailFile not ingesting data when tailed file is moved with no rolling pattern
[ https://issues.apache.org/jira/browse/NIFI-1959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510065#comment-15510065 ] ASF GitHub Bot commented on NIFI-1959: -- Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/490 Closing - implemented with #980 > TailFile not ingesting data when tailed file is moved with no rolling pattern > - > > Key: NIFI-1959 > URL: https://issues.apache.org/jira/browse/NIFI-1959 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 0.6.1 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Minor > > In case "Rolling Filename Pattern" is not set by user and if the tailed file > is moved, then the new file will not be tailed. > Besides, in such case, the processor will be endlessly triggered without > ingesting data: it creates a lot of tasks and consumes CPU. The reason is it > never goes in if statement L448. > A solution is to look at size() and lastUpdated() of the tailed file to > detect a "rollover". However it won't allow the processor to ingest the > potential data added in the tailed file just before being moved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-1170) TailFile "File to Tail" property should support Wildcards
[ https://issues.apache.org/jira/browse/NIFI-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510058#comment-15510058 ] ASF GitHub Bot commented on NIFI-1170:
---
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/980

> TailFile "File to Tail" property should support Wildcards
> ---------------------------------------------------------
>
>                 Key: NIFI-1170
>                 URL: https://issues.apache.org/jira/browse/NIFI-1170
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Core Framework
>    Affects Versions: 0.4.0
>            Reporter: Andre
>            Assignee: Pierre Villard
>             Fix For: 1.1.0
>
> Because of challenges around log rotation for high-volume syslog and app
> producers, it is customary for logging platform developers to promote
> variable-based file names such as DynaFiles (rsyslog) or Macros (syslog-ng)
> as alternatives to sending SIGHUPs to the syslog daemon upon every file
> rotation.
> (To a certain extent, even NiFi has similar patterns, for example when one
> uses Expression Language to set the PutHDFS destination file.)
> The current TailFile strategy suggests rotation patterns like:
> {code}
> log_folder/app.log
> log_folder/app.log.1
> log_folder/app.log.2
> log_folder/app.log.3
> {code}
> It is possible to fool the system into accepting wildcards by simply using
> a layout like:
> {code}
> log_folder/test1
> log_folder/server1
> log_folder/server2
> log_folder/server3
> {code}
> and configuring *Rolling Filename Pattern* to *, but it feels like a hack
> rather than catering for an increasingly prevalent use case
> (DynaFiles/macros/etc).
> It would be great if, instead, TailFile had the ability to use wildcards in
> the File to Tail property.
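The wildcard matching requested above can be sketched with the JDK's built-in glob support. This is not how the merged NiFi change is necessarily implemented; it only illustrates the concept of matching a wildcard "File to Tail" value against candidate paths, using the standard `java.nio.file.PathMatcher` API (the class and method names below are made up for the example).

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

// Illustrative sketch: match a glob-style "File to Tail" value against a
// candidate file path. PathMatcher.matches() is purely lexical, so no file
// needs to exist on disk for the comparison.
public class WildcardTail {
    public static boolean matches(String glob, String candidate) {
        PathMatcher matcher = FileSystems.getDefault().getPathMatcher("glob:" + glob);
        return matcher.matches(Paths.get(candidate));
    }
}
```

With this, a value like `log_folder/server*` would select `server1`, `server2`, `server3` but not `test1`, removing the need for the `*` Rolling Filename Pattern workaround described in the issue.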
[jira] [Commented] (NIFI-1170) TailFile "File to Tail" property should support Wildcards
[ https://issues.apache.org/jira/browse/NIFI-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510056#comment-15510056 ] ASF subversion and git services commented on NIFI-1170:
---
Commit 930e95aa0023b12e5618068ea144808e5627cea7 in nifi's branch refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=930e95a ]
NIFI-1170 - Improved TailFile processor to support multiple files tailing
This closes #980
[jira] [Assigned] (NIFI-1170) TailFile "File to Tail" property should support Wildcards
[ https://issues.apache.org/jira/browse/NIFI-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard reassigned NIFI-1170:
---
Assignee: Pierre Villard
[jira] [Updated] (NIFI-1170) TailFile "File to Tail" property should support Wildcards
[ https://issues.apache.org/jira/browse/NIFI-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oleg Zhurakousky updated NIFI-1170:
---
Fix Version/s: 1.1.0
[jira] [Commented] (NIFI-2417) Implement Query and Scroll processors for ElasticSearch
[ https://issues.apache.org/jira/browse/NIFI-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510001#comment-15510001 ] ASF GitHub Bot commented on NIFI-2417:
---
Github user gresockj commented on the issue: https://github.com/apache/nifi/pull/733 Thanks @mattyb149 ! Good comments, I'll see if I can work on these this week.

> Implement Query and Scroll processors for ElasticSearch
> -------------------------------------------------------
>
>                 Key: NIFI-2417
>                 URL: https://issues.apache.org/jira/browse/NIFI-2417
>             Project: Apache NiFi
>          Issue Type: New Feature
>          Components: Extensions
>    Affects Versions: 1.0.0
>            Reporter: Joseph Gresock
>            Assignee: Joseph Gresock
>            Priority: Minor
>             Fix For: 1.1.0
>
> FetchElasticsearchHttp allows users to select a single document from
> Elasticsearch in NiFi, but there is no way to run a query to retrieve
> multiple documents.
> We should add a QueryElasticsearchHttp processor that runs a query and
> returns one flow file per result, for small result sets. It should allow
> both input and non-input execution.
> A separate ScrollElasticsearchHttp processor would also be useful for
> scrolling through a huge result set. It should use the state manager to
> maintain the scroll_id value and use it as input to the next scroll page.
> Accordingly, this processor should not allow flow file input, but should
> retrieve one page per run.
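The scroll pattern described in the issue — persist the scroll_id between runs and fetch exactly one page per invocation — can be sketched as below. This is a self-contained simulation, not the real ScrollElasticsearchHttp processor: a plain `Map` stands in for NiFi's state manager, and the page source is abstracted as a function where a real processor would issue an HTTP request to Elasticsearch's scroll endpoint. All names here are illustrative.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of one-page-per-run scrolling with externalized state.
public class ScrollSketch {
    static final String SCROLL_ID_KEY = "scroll_id";

    /** One page of results plus the scroll_id needed to request the next page. */
    public static class Page {
        final String nextScrollId;
        final List<String> hits;
        public Page(String nextScrollId, List<String> hits) {
            this.nextScrollId = nextScrollId;
            this.hits = hits;
        }
    }

    /**
     * Simulates a single onTrigger: look up the stored scroll_id (null on the
     * first run, i.e. issue the initial query), fetch that page, and persist
     * the returned scroll_id for the next run.
     */
    public static List<String> runOnce(Map<String, String> state, Function<String, Page> fetch) {
        Page page = fetch.apply(state.get(SCROLL_ID_KEY));
        state.put(SCROLL_ID_KEY, page.nextScrollId);
        return page.hits;
    }
}
```

Because the cursor lives in state rather than in an incoming flow file, the processor can be source-like (no input allowed), which is exactly the design the issue proposes.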