[jira] [Created] (NIFIREG-366) update NiFi Registry's parent pom version
Endre Kovacs created NIFIREG-366: Summary: update NiFi Registry's parent pom version Key: NIFIREG-366 URL: https://issues.apache.org/jira/browse/NIFIREG-366 Project: NiFi Registry Issue Type: Task Reporter: Endre Kovacs Assignee: Endre Kovacs

Currently, Apache NiFi uses version *23* of its parent pom:
{code:xml}
<parent>
    <groupId>org.apache</groupId>
    <artifactId>apache</artifactId>
    <version>23</version>
</parent>
{code}
Let's update it in NiFi Registry as well!
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (NIFIREG-365) upgrade apache rat to latest released version (0.12)
[ https://issues.apache.org/jira/browse/NIFIREG-365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs closed NIFIREG-365. Resolution: Invalid

Actually, it is not apache-rat that has to be upgraded, but *nifi-registry*'s parent pom version, from which rat and the other plugins get their versions.

> upgrade apache rat to latest released version (0.12)
>
> Key: NIFIREG-365
> URL: https://issues.apache.org/jira/browse/NIFIREG-365
> Project: NiFi Registry
> Issue Type: Task
> Reporter: Endre Kovacs
> Assignee: Endre Kovacs
> Priority: Trivial
>
> NiFi itself uses apache rat 0.12 for license header checks.
> NiFi Registry has not upgraded it yet and is still using 0.11.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFIREG-365) upgrade apache rat to latest released version (0.12)
Endre Kovacs created NIFIREG-365: Summary: upgrade apache rat to latest released version (0.12) Key: NIFIREG-365 URL: https://issues.apache.org/jira/browse/NIFIREG-365 Project: NiFi Registry Issue Type: Task Reporter: Endre Kovacs Assignee: Endre Kovacs

NiFi itself uses apache rat 0.12 for license header checks. NiFi Registry has not upgraded it yet and is still using 0.11.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-6897) NiFi toolkit not Java 11 compatible
[ https://issues.apache.org/jira/browse/NIFI-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982591#comment-16982591 ] Endre Kovacs commented on NIFI-6897:
Yes. The only difference here is that while NiFi is started (_bootstrapped_) by a Java process, nifi-toolkit is started by a shell script. My solution is on the way.

> NiFi toolkit not Java 11 compatible
> ---
>
> Key: NIFI-6897
> URL: https://issues.apache.org/jira/browse/NIFI-6897
> Project: Apache NiFi
> Issue Type: Bug
> Components: Tools and Build
> Affects Versions: 1.10.0
> Reporter: Mark Payne
> Assignee: Endre Kovacs
> Priority: Major
>
> With version 1.10.0, NiFi was made Java 11 compatible. However, the toolkit is not. Ran into the following error when attempting to use the CLI with Java 11:
> {code:java}
> bash-4.4$ $NIFI_TOOLKIT_HOME/bin/cli.sh nifi cluster-summary -ot json
> Exception in thread "main" java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlElement
> 	at com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:139)
> 	at com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:126)
> 	at org.apache.nifi.toolkit.cli.impl.client.nifi.impl.JerseyNiFiClient.jacksonJaxbJsonProvider(JerseyNiFiClient.java:359)
> 	at org.apache.nifi.toolkit.cli.impl.client.nifi.impl.JerseyNiFiClient.<init>(JerseyNiFiClient.java:111)
> 	at org.apache.nifi.toolkit.cli.impl.client.nifi.impl.JerseyNiFiClient.<init>(JerseyNiFiClient.java:57)
> 	at org.apache.nifi.toolkit.cli.impl.client.nifi.impl.JerseyNiFiClient$Builder.build(JerseyNiFiClient.java:349)
> 	at org.apache.nifi.toolkit.cli.impl.client.NiFiClientFactory.createClient(NiFiClientFactory.java:105)
> 	at org.apache.nifi.toolkit.cli.impl.client.NiFiClientFactory.createClient(NiFiClientFactory.java:44)
> 	at org.apache.nifi.toolkit.cli.impl.command.nifi.AbstractNiFiCommand.doExecute(AbstractNiFiCommand.java:62)
> 	at org.apache.nifi.toolkit.cli.impl.command.AbstractPropertyCommand.execute(AbstractPropertyCommand.java:74)
> 	at org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processCommand(CommandProcessor.java:252)
> 	at org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processGroupCommand(CommandProcessor.java:233)
> 	at org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.process(CommandProcessor.java:188)
> 	at org.apache.nifi.toolkit.cli.CLIMain.runSingleCommand(CLIMain.java:145)
> 	at org.apache.nifi.toolkit.cli.CLIMain.main(CLIMain.java:72)
> Caused by: java.lang.ClassNotFoundException: javax.xml.bind.annotation.XmlElement
> 	at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
> 	at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
> 	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
> 	... 15 more{code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
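The root cause is that the java.xml.bind (JAXB) module was removed from the JDK in Java 11. One common remedy, sketched below, is to declare the JAXB API and an implementation as explicit dependencies in the toolkit's pom. This is only an illustration of the general approach, not the actual NIFI-6897 patch (the comment above indicates the real fix involves the shell script), and the artifact versions here are assumptions:
{code:xml}
<!-- Sketch only: restore the JAXB classes removed from the JDK in Java 11
     by bundling them as explicit dependencies. Versions are illustrative. -->
<dependency>
    <groupId>jakarta.xml.bind</groupId>
    <artifactId>jakarta.xml.bind-api</artifactId>
    <version>2.3.2</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>jaxb-runtime</artifactId>
    <version>2.3.2</version>
</dependency>
{code}
With the API jar on the classpath, the NoClassDefFoundError for javax.xml.bind.annotation.XmlElement no longer occurs on Java 11.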
[jira] [Assigned] (NIFI-6897) NiFi toolkit not Java 11 compatible
[ https://issues.apache.org/jira/browse/NIFI-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs reassigned NIFI-6897: Assignee: Endre Kovacs

> NiFi toolkit not Java 11 compatible
> ---
>
> Key: NIFI-6897
> URL: https://issues.apache.org/jira/browse/NIFI-6897
> Project: Apache NiFi
> Issue Type: Bug
> Components: Tools and Build
> Affects Versions: 1.10.0
> Reporter: Mark Payne
> Assignee: Endre Kovacs
> Priority: Major
>
> With version 1.10.0, NiFi was made Java 11 compatible. However, the toolkit is not. Ran into the following error when attempting to use the CLI with Java 11:
> {code:java}
> bash-4.4$ $NIFI_TOOLKIT_HOME/bin/cli.sh nifi cluster-summary -ot json
> Exception in thread "main" java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlElement
> 	at com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:139)
> 	at com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:126)
> 	at org.apache.nifi.toolkit.cli.impl.client.nifi.impl.JerseyNiFiClient.jacksonJaxbJsonProvider(JerseyNiFiClient.java:359)
> 	at org.apache.nifi.toolkit.cli.impl.client.nifi.impl.JerseyNiFiClient.<init>(JerseyNiFiClient.java:111)
> 	at org.apache.nifi.toolkit.cli.impl.client.nifi.impl.JerseyNiFiClient.<init>(JerseyNiFiClient.java:57)
> 	at org.apache.nifi.toolkit.cli.impl.client.nifi.impl.JerseyNiFiClient$Builder.build(JerseyNiFiClient.java:349)
> 	at org.apache.nifi.toolkit.cli.impl.client.NiFiClientFactory.createClient(NiFiClientFactory.java:105)
> 	at org.apache.nifi.toolkit.cli.impl.client.NiFiClientFactory.createClient(NiFiClientFactory.java:44)
> 	at org.apache.nifi.toolkit.cli.impl.command.nifi.AbstractNiFiCommand.doExecute(AbstractNiFiCommand.java:62)
> 	at org.apache.nifi.toolkit.cli.impl.command.AbstractPropertyCommand.execute(AbstractPropertyCommand.java:74)
> 	at org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processCommand(CommandProcessor.java:252)
> 	at org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.processGroupCommand(CommandProcessor.java:233)
> 	at org.apache.nifi.toolkit.cli.impl.command.CommandProcessor.process(CommandProcessor.java:188)
> 	at org.apache.nifi.toolkit.cli.CLIMain.runSingleCommand(CLIMain.java:145)
> 	at org.apache.nifi.toolkit.cli.CLIMain.main(CLIMain.java:72)
> Caused by: java.lang.ClassNotFoundException: javax.xml.bind.annotation.XmlElement
> 	at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
> 	at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
> 	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
> 	... 15 more{code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFIREG-342) streamline nifi-registry docker projects to its NiFi counterpart
Endre Kovacs created NIFIREG-342: Summary: streamline nifi-registry docker projects to its NiFi counterpart Key: NIFIREG-342 URL: https://issues.apache.org/jira/browse/NIFIREG-342 Project: NiFi Registry Issue Type: Task Affects Versions: 1.0.0 Reporter: Endre Kovacs Assignee: Endre Kovacs

Currently I see a few differences between the layout of NiFi's and nifi-registry's docker projects.
# {code}nifi-registry-docker{code} is a child project of {code}nifi-registry/nifi-registry-core{code}; its NiFi counterpart lives in the project root. Having it in the project root is beneficial, as it allows us to defer its execution until after the assembly, so we can leverage the build artifacts for the docker build. Such a docker build can be initiated with the {code}-P docker{code} maven build profile, provided by NIFIREG-252.
# Currently nifi-registry/nifi-registry-core/nifi-registry-docker/dockerhub is not a maven project (no pom.xml in its folder); this is another difference between the NiFi dockerhub project and its NiFi Registry counterpart.
cc.: [~kdoran]
-- This message was sent by Atlassian Jira (v8.3.4#803005)
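For reference, NiFi wires its docker module into the build with a root-pom profile. The same idea for NiFi Registry might be sketched like this (the module name and profile wiring are assumptions following the NiFi convention, not the actual NIFIREG-342 patch):
{code:xml}
<!-- Sketch of a root-pom docker profile following the NiFi convention.
     Module name and profile id are illustrative assumptions. -->
<profiles>
    <profile>
        <id>docker</id>
        <modules>
            <!-- Listed after the assembly module so the build artifacts
                 are already available when the docker image is built. -->
            <module>nifi-registry-docker-maven</module>
        </modules>
    </profile>
</profiles>
{code}
Activating it with {code}mvn clean install -P docker{code} would then build the image as the last step of the source build.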
[jira] [Assigned] (NIFIREG-252) Add docker-maven image and build profile
[ https://issues.apache.org/jira/browse/NIFIREG-252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs reassigned NIFIREG-252: Assignee: Endre Kovacs (was: Kevin Doran) > Add docker-maven image and build profile > > > Key: NIFIREG-252 > URL: https://issues.apache.org/jira/browse/NIFIREG-252 > Project: NiFi Registry > Issue Type: Improvement >Reporter: Kevin Doran >Assignee: Endre Kovacs >Priority: Major > > For NiFi Registry, it would be nice to have the option to build a docker > image as part of the maven source code build, similar to NiFi. The > docker-maven plugin supports this. The basic idea would be to copy the build > artifacts into the docker image when building the image, and tag it in a way > that distinguishes it from the apache/nifi-registry dockerhub image. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFIREG-338) fixing nifi registry version in dockerfile
Endre Kovacs created NIFIREG-338: Summary: fixing nifi registry version in dockerfile Key: NIFIREG-338 URL: https://issues.apache.org/jira/browse/NIFIREG-338 Project: NiFi Registry Issue Type: Bug Reporter: Endre Kovacs Assignee: Endre Kovacs

After the 0.5.0 release, the community agreed on jumping to the next major version: *1.0.0*. The Dockerfile, however, was not updated to reflect this.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-6758) create bundle coordinate map in separate phase
Endre Kovacs created NIFI-6758: Summary: create bundle coordinate map in separate phase Key: NIFI-6758 URL: https://issues.apache.org/jira/browse/NIFI-6758 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Endre Kovacs Assignee: Endre Kovacs

In NarUnpacker https://github.com/apache/nifi/blob/9a496fe9d2681fca06fb6f071d0fa39d71bc5268/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-nar-utils/src/main/java/org/apache/nifi/nar/NarUnpacker.java#L131 a bundleCoordinate map is populated during the for loop that unpacks the nars found in the extension work dir. *This is great.*

However, if we want to optimize NiFi's size, we could keep only the unpacked nars, delete the original nar archive files, and save hundreds of MBs of disk space. In that case the bundle-coordinate map is not populated, as the nar files are no longer present; only their unpacked directory versions remain.

Such an empty bundle-coordinate map results in an empty ExtensionMapping, which is returned and supplied to downstream components, e.g. JettyServer puts it on the context for the web docs servlet https://github.com/apache/nifi/blob/9a496fe9d2681fca06fb6f071d0fa39d71bc5268/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/JettyServer.java#L1005 and the nifi-web-docs DocumentationController uses it: https://github.com/apache/nifi/blob/9a496fe9d2681fca06fb6f071d0fa39d71bc5268/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-docs/src/main/java/org/apache/nifi/web/docs/DocumentationController.java#L60

In such a case, the resulting effect is that when the user right-clicks a processor and clicks view-usage, none of the processors show up in the help.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFIREG-325) support specifying group for 'NiFi Identity' to grant permission to proxy user requests
[ https://issues.apache.org/jira/browse/NIFIREG-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFIREG-325:
Description:
As documented in [https://nifi.apache.org/docs/nifi-registry-docs/html/administration-guide.html#fileaccesspolicyprovider] one can specify NiFi node identities to grant permission to proxy user requests and bucket read permission.
What I'd like to propose is to be able to provide a group name there:
{code:xml}
<accessPolicyProvider>
    <identifier>file-access-policy-provider</identifier>
    <class>org.a.n.r.s.authorization.file.FileAccessPolicyProvider</class>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    ...
    ...
    <property name="...">my-group</property>
</accessPolicyProvider>
{code}
which in turn would bless that group with the same permissions as described in the admin guide for {code}NiFi Identity{code} (proxying user requests and bucket read).
This feature would be very similar to what https://issues.apache.org/jira/browse/NIFI-5542 does.

was:
As documented in [https://nifi.apache.org/docs/nifi-registry-docs/html/administration-guide.html#fileaccesspolicyprovider] one can specify NiFi node identities to grant permission to proxy user requests and bucket read permission.
What I'd like to propose is to be able to provide a group name there:
{code:xml}
<accessPolicyProvider>
    <identifier>file-access-policy-provider</identifier>
    <class>org.a.n.r.s.authorization.file.FileAccessPolicyProvider</class>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    ...
    ...
    <property name="...">my-group</property>
</accessPolicyProvider>
{code}
which in turn would bless that group with the same permissions as described in the admin guide for {code}NiFi Identity{code} (proxying user requests and bucket read).
This feature would be very similar to what https://issues.apache.org/jira/browse/NIFI-5542 does.
> support specifying group for 'NiFi Identity' to grant permission to proxy user requests
> ---
>
> Key: NIFIREG-325
> URL: https://issues.apache.org/jira/browse/NIFIREG-325
> Project: NiFi Registry
> Issue Type: Improvement
> Affects Versions: 1.0.0
> Reporter: Endre Kovacs
> Assignee: Endre Kovacs
> Priority: Major
> Fix For: 1.0.0
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> As documented in [https://nifi.apache.org/docs/nifi-registry-docs/html/administration-guide.html#fileaccesspolicyprovider] one can specify NiFi node identities to grant permission to proxy user requests and bucket read permission.
>
> What I'd like to propose is to be able to provide a group name there:
> {code:xml}
> <accessPolicyProvider>
>     <identifier>file-access-policy-provider</identifier>
>     <class>org.a.n.r.s.authorization.file.FileAccessPolicyProvider</class>
>     <property name="Authorizations File">./conf/authorizations.xml</property>
>     ...
>     ...
>     <property name="...">my-group</property>
> </accessPolicyProvider>
> {code}
> which in turn would bless that group with the same permissions as described in the admin guide for {code}NiFi Identity{code} (proxying user requests and bucket read).
> This feature would be very similar to what https://issues.apache.org/jira/browse/NIFI-5542 does.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFIREG-325) support specifying group for 'NiFi Identity' to grant permission to proxy user requests
Endre Kovacs created NIFIREG-325: Summary: support specifying group for 'NiFi Identity' to grant permission to proxy user requests Key: NIFIREG-325 URL: https://issues.apache.org/jira/browse/NIFIREG-325 Project: NiFi Registry Issue Type: Improvement Affects Versions: 1.0.0 Reporter: Endre Kovacs Assignee: Endre Kovacs Fix For: 1.0.0

As documented in [https://nifi.apache.org/docs/nifi-registry-docs/html/administration-guide.html#fileaccesspolicyprovider] one can specify NiFi node identities to grant permission to proxy user requests and bucket read permission.
What I'd like to propose is to be able to provide a group name there:
{code:xml}
<accessPolicyProvider>
    <identifier>file-access-policy-provider</identifier>
    <class>org.a.n.r.s.authorization.file.FileAccessPolicyProvider</class>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    ...
    ...
    <property name="...">my-group</property>
</accessPolicyProvider>
{code}
which in turn would bless that group with the same permissions as described in the admin guide for {code}NiFi Identity{code} (proxying user requests and bucket read).
This feature would be very similar to what https://issues.apache.org/jira/browse/NIFI-5542 does.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
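A hedged sketch of what the proposed configuration could look like in authorizers.xml. The property name "NiFi Group Name" is an illustrative assumption (the issue does not fix a name), and the class name is kept abbreviated as in the issue description:
{code:xml}
<!-- Sketch only: "NiFi Group Name" is a hypothetical property name
     for the proposed group-based grant; class name abbreviated as in the issue. -->
<accessPolicyProvider>
    <identifier>file-access-policy-provider</identifier>
    <class>org.a.n.r.s.authorization.file.FileAccessPolicyProvider</class>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <!-- existing "NiFi Identity N" properties would remain supported -->
    <property name="NiFi Group Name">my-group</property>
</accessPolicyProvider>
{code}
Every member of {code}my-group{code} would then receive the proxy and bucket-read permissions that the admin guide currently describes for individual {code}NiFi Identity{code} entries.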
[jira] [Assigned] (NIFIREG-291) adjust docker folder naming convention to have similar layout as NiFi
[ https://issues.apache.org/jira/browse/NIFIREG-291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs reassigned NIFIREG-291: Assignee: Endre Kovacs

> adjust docker folder naming convention to have similar layout as NiFi
> -
>
> Key: NIFIREG-291
> URL: https://issues.apache.org/jira/browse/NIFIREG-291
> Project: NiFi Registry
> Issue Type: Improvement
> Affects Versions: 0.4.0
> Reporter: Endre Kovacs
> Assignee: Endre Kovacs
> Priority: Trivial
>
> Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
>
> NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
>
> For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry's home folder / directory layout
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFIREG-291) adjust docker folder naming convention to have similar layout as NiFi
[ https://issues.apache.org/jira/browse/NIFIREG-291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFIREG-291:
Description:
Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry's home folder / directory layout

was:
Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry home / folder / directory layout

> adjust docker folder naming convention to have similar layout as NiFi
> -
>
> Key: NIFIREG-291
> URL: https://issues.apache.org/jira/browse/NIFIREG-291
> Project: NiFi Registry
> Issue Type: Improvement
> Affects Versions: 0.4.0
> Reporter: Endre Kovacs
> Priority: Trivial
>
> Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
>
> NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
>
> For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry's home folder / directory layout
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFIREG-291) adjust docker folder naming convention to have similar layout as NiFi
[ https://issues.apache.org/jira/browse/NIFIREG-291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFIREG-291:
Description:
Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry folders

was:
Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry

> adjust docker folder naming convention to have similar layout as NiFi
> -
>
> Key: NIFIREG-291
> URL: https://issues.apache.org/jira/browse/NIFIREG-291
> Project: NiFi Registry
> Issue Type: Improvement
> Affects Versions: 0.4.0
> Reporter: Endre Kovacs
> Priority: Trivial
>
> Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
>
> NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
>
> For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry folders
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFIREG-291) adjust docker folder naming convention to have similar layout as NiFi
[ https://issues.apache.org/jira/browse/NIFIREG-291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFIREG-291:
Description:
Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry home / folder / directory layout

was:
Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry folders

> adjust docker folder naming convention to have similar layout as NiFi
> -
>
> Key: NIFIREG-291
> URL: https://issues.apache.org/jira/browse/NIFIREG-291
> Project: NiFi Registry
> Issue Type: Improvement
> Affects Versions: 0.4.0
> Reporter: Endre Kovacs
> Priority: Trivial
>
> Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
>
> NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
>
> For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry home / folder / directory layout
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFIREG-291) adjust docker folder naming convention to have similar layout as NiFi
[ https://issues.apache.org/jira/browse/NIFIREG-291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFIREG-291:
Description:
Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29|https://github.com/ekovacs/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry

was:
Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
NiFi Registry's docker image [https://github.com/ekovacs/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry

> adjust docker folder naming convention to have similar layout as NiFi
> -
>
> Key: NIFIREG-291
> URL: https://issues.apache.org/jira/browse/NIFIREG-291
> Project: NiFi Registry
> Issue Type: Improvement
> Affects Versions: 0.4.0
> Reporter: Endre Kovacs
> Priority: Trivial
>
> Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
>
> NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29|https://github.com/ekovacs/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
>
> For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFIREG-291) adjust docker folder naming convention to have similar layout as NiFi
[ https://issues.apache.org/jira/browse/NIFIREG-291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFIREG-291:
Description:
Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry

was:
Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29|https://github.com/ekovacs/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry

> adjust docker folder naming convention to have similar layout as NiFi
> -
>
> Key: NIFIREG-291
> URL: https://issues.apache.org/jira/browse/NIFIREG-291
> Project: NiFi Registry
> Issue Type: Improvement
> Affects Versions: 0.4.0
> Reporter: Endre Kovacs
> Priority: Trivial
>
> Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder.
>
> NiFi Registry's docker image [https://github.com/apache/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates *a version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for any new NiFi Registry releases.
>
> For NiFi Registry, I'd like to propose to follow the same convention, in order to make it easier for downstream use-cases to find NiFi Registry
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFIREG-291) adjust docker folder naming convention to have similar layout as NiFi
Endre Kovacs created NIFIREG-291: Summary: adjust docker folder naming convention to have similar layout as NiFi Key: NIFIREG-291 URL: https://issues.apache.org/jira/browse/NIFIREG-291 Project: NiFi Registry Issue Type: Improvement Affects Versions: 0.4.0 Reporter: Endre Kovacs Currently NiFi's docker image ([https://github.com/apache/nifi/blob/41663929a4727592972e6be04b3c516a752e760e/nifi-docker/dockerhub/Dockerfile#L32]) installs NiFi (and sets that to $NIFI_HOME) into a *version agnostic* folder. NiFi Registry's docker image [https://github.com/ekovacs/nifi-registry/blob/f3b82a7b8dab3737d9b9ca1dc8028d4e9d7108fa/nifi-registry-core/nifi-registry-docker/dockerhub/Dockerfile#L29] does it differently: it always creates a *version specific* folder (and sets that to $NIFI_REGISTRY_HOME) for each new NiFi Registry release. For NiFi Registry, I'd like to propose following the same convention, to make it easier for downstream use cases to find NiFi Registry. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6289) character set encoding issue in ExecuteSparkInteractive
[ https://issues.apache.org/jira/browse/NIFI-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6289: --- Attachment: (was: spakr-processor-encode-issue-result-looks-good-with-fix.png) > character set encoding issue in ExecuteSparkInteractive > --- > > Key: NIFI-6289 > URL: https://issues.apache.org/jira/browse/NIFI-6289 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > Attachments: spark-processor-encode-issue-bad-result.png, > spark-processor-encode-issue-result-looks-good-with-fix.png, > spark-processor-encode-issue-setup.png > > Time Spent: 10m > Remaining Estimate: 0h > > I could reproduce the issue described in NIFI-6288 also in > ExecuteSparkInteractive processor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6289) character set encoding issue in ExecuteSparkInteractive
[ https://issues.apache.org/jira/browse/NIFI-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6289: --- Attachment: (was: spakr-processor-encode-issue-setup.png) > character set encoding issue in ExecuteSparkInteractive > --- > > Key: NIFI-6289 > URL: https://issues.apache.org/jira/browse/NIFI-6289 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > Attachments: spark-processor-encode-issue-bad-result.png, > spark-processor-encode-issue-result-looks-good-with-fix.png, > spark-processor-encode-issue-setup.png > > Time Spent: 10m > Remaining Estimate: 0h > > I could reproduce the issue described in NIFI-6288 also in > ExecuteSparkInteractive processor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6289) character set encoding issue in ExecuteSparkInteractive
[ https://issues.apache.org/jira/browse/NIFI-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6289: --- Attachment: spakr-processor-encode-issue-result-looks-good-with-fix.png > character set encoding issue in ExecuteSparkInteractive > --- > > Key: NIFI-6289 > URL: https://issues.apache.org/jira/browse/NIFI-6289 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > Attachments: spakr-processor-encode-issue-bad-result.png, > spakr-processor-encode-issue-result-looks-good-with-fix.png, > spakr-processor-encode-issue-setup.png > > > I could reproduce the issue described in NIFI-6288 also in > ExecuteSparkInteractive processor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6289) character set encoding issue in ExecuteSparkInteractive
[ https://issues.apache.org/jira/browse/NIFI-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6289: --- Attachment: (was: spakr-processor-encode-issue-bad-result.png) > character set encoding issue in ExecuteSparkInteractive > --- > > Key: NIFI-6289 > URL: https://issues.apache.org/jira/browse/NIFI-6289 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > Attachments: spark-processor-encode-issue-bad-result.png, > spark-processor-encode-issue-result-looks-good-with-fix.png, > spark-processor-encode-issue-setup.png > > Time Spent: 10m > Remaining Estimate: 0h > > I could reproduce the issue described in NIFI-6288 also in > ExecuteSparkInteractive processor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6289) character set encoding issue in ExecuteSparkInteractive
[ https://issues.apache.org/jira/browse/NIFI-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6289: --- Attachment: spark-processor-encode-issue-setup.png spark-processor-encode-issue-bad-result.png spark-processor-encode-issue-result-looks-good-with-fix.png > character set encoding issue in ExecuteSparkInteractive > --- > > Key: NIFI-6289 > URL: https://issues.apache.org/jira/browse/NIFI-6289 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > Attachments: spark-processor-encode-issue-bad-result.png, > spark-processor-encode-issue-result-looks-good-with-fix.png, > spark-processor-encode-issue-setup.png > > Time Spent: 10m > Remaining Estimate: 0h > > I could reproduce the issue described in NIFI-6288 also in > ExecuteSparkInteractive processor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6289) character set encoding issue in ExecuteSparkInteractive
[ https://issues.apache.org/jira/browse/NIFI-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6289: --- Attachment: spakr-processor-encode-issue-setup.png spakr-processor-encode-issue-bad-result.png > character set encoding issue in ExecuteSparkInteractive > --- > > Key: NIFI-6289 > URL: https://issues.apache.org/jira/browse/NIFI-6289 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > Attachments: spakr-processor-encode-issue-bad-result.png, > spakr-processor-encode-issue-setup.png > > > I could reproduce the issue described in NIFI-6288 also in > ExecuteSparkInteractive processor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6289) character set encoding issue in ExecuteSparkInteractive
Endre Kovacs created NIFI-6289: -- Summary: character set encoding issue in ExecuteSparkInteractive Key: NIFI-6289 URL: https://issues.apache.org/jira/browse/NIFI-6289 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.9.2 Reporter: Endre Kovacs Assignee: Endre Kovacs I could reproduce the issue described in NIFI-6288 in the ExecuteSparkInteractive processor as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6288) character set encoding issue in FetchElasticsearchHttp processor
[ https://issues.apache.org/jira/browse/NIFI-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6288: --- Attachment: message-generator.png simple-flow-overview.png bad-encoded-message.png after-fix-encoded-message.png > character set encoding issue in FetchElasticsearchHttp processor > > > Key: NIFI-6288 > URL: https://issues.apache.org/jira/browse/NIFI-6288 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > Attachments: after-fix-encoded-message.png, bad-encoded-message.png, > message-generator.png, simple-flow-overview.png > > > I used FetchElasticsearchHttp processor to fetch documents in Elasticsearch > which have special UTF-8 chars, eg.: characters of foreign languages: > accented chars or Japanese/Chinese chars. > It was working as expected on platforms that have UTF-8 as a default > _file.encoding._ But on e.g.: SLES12 VM, the special chars in the document, > turned to "?" in the fetched, output flow files. > > Taking a look at the source code showed: > - AbstractElasticsearchProcessor declares *CHARSET* property descriptor, but > it was not added to > AbstractElasticsearchHttpProcessor in the static initializer block. > - and in the place where the content of the document is written to the > flowfile, : > [https://github.com/apache/nifi/blob/65c41ab917d7b5f323aa71d841cc03b29e12d480/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/FetchElasticsearchHttp.java#L237] > it uses > {code} > out.write(source.toString().getBytes()); > {code} > > which will only work if the JVM's _file.encoding_ is UTF-8. > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6288) character set encoding issue in FetchElasticsearchHttp processor
[ https://issues.apache.org/jira/browse/NIFI-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6288: --- Attachment: image-2019-05-10-13-47-14-725.png > character set encoding issue in FetchElasticsearchHttp processor > > > Key: NIFI-6288 > URL: https://issues.apache.org/jira/browse/NIFI-6288 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > > I used FetchElasticsearchHttp processor to fetch documents in Elasticsearch > which have special UTF-8 chars, eg.: characters of foreign languages: > accented chars or Japanese/Chinese chars. > It was working as expected on platforms that have UTF-8 as a default > _file.encoding._ But on e.g.: SLES12 VM, the special chars in the document, > turned to "?" in the fetched, output flow files. > > Taking a look at the source code showed: > - AbstractElasticsearchProcessor declares *CHARSET* property descriptor, but > it was not added to > AbstractElasticsearchHttpProcessor in the static initializer block. > - and in the place where the content of the document is written to the > flowfile, : > [https://github.com/apache/nifi/blob/65c41ab917d7b5f323aa71d841cc03b29e12d480/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/FetchElasticsearchHttp.java#L237] > it uses > {code} > out.write(source.toString().getBytes()); > {code} > > which will only work if the JVM's _file.encoding_ is UTF-8. > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6288) character set encoding issue in FetchElasticsearchHttp processor
[ https://issues.apache.org/jira/browse/NIFI-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6288: --- Attachment: image-2019-05-10-13-47-14-789.png > character set encoding issue in FetchElasticsearchHttp processor > > > Key: NIFI-6288 > URL: https://issues.apache.org/jira/browse/NIFI-6288 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > > I used FetchElasticsearchHttp processor to fetch documents in Elasticsearch > which have special UTF-8 chars, eg.: characters of foreign languages: > accented chars or Japanese/Chinese chars. > It was working as expected on platforms that have UTF-8 as a default > _file.encoding._ But on e.g.: SLES12 VM, the special chars in the document, > turned to "?" in the fetched, output flow files. > > Taking a look at the source code showed: > - AbstractElasticsearchProcessor declares *CHARSET* property descriptor, but > it was not added to > AbstractElasticsearchHttpProcessor in the static initializer block. > - and in the place where the content of the document is written to the > flowfile, : > [https://github.com/apache/nifi/blob/65c41ab917d7b5f323aa71d841cc03b29e12d480/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/FetchElasticsearchHttp.java#L237] > it uses > {code} > out.write(source.toString().getBytes()); > {code} > > which will only work if the JVM's _file.encoding_ is UTF-8. > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6288) character set encoding issue in FetchElasticsearchHttp processor
[ https://issues.apache.org/jira/browse/NIFI-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6288: --- Attachment: (was: image-2019-05-10-13-47-14-789.png) > character set encoding issue in FetchElasticsearchHttp processor > > > Key: NIFI-6288 > URL: https://issues.apache.org/jira/browse/NIFI-6288 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > > I used FetchElasticsearchHttp processor to fetch documents in Elasticsearch > which have special UTF-8 chars, eg.: characters of foreign languages: > accented chars or Japanese/Chinese chars. > It was working as expected on platforms that have UTF-8 as a default > _file.encoding._ But on e.g.: SLES12 VM, the special chars in the document, > turned to "?" in the fetched, output flow files. > > Taking a look at the source code showed: > - AbstractElasticsearchProcessor declares *CHARSET* property descriptor, but > it was not added to > AbstractElasticsearchHttpProcessor in the static initializer block. > - and in the place where the content of the document is written to the > flowfile, : > [https://github.com/apache/nifi/blob/65c41ab917d7b5f323aa71d841cc03b29e12d480/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/FetchElasticsearchHttp.java#L237] > it uses > {code} > out.write(source.toString().getBytes()); > {code} > > which will only work if the JVM's _file.encoding_ is UTF-8. > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6288) character set encoding issue in FetchElasticsearchHttp processor
[ https://issues.apache.org/jira/browse/NIFI-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6288: --- Attachment: image-2019-05-10-13-47-14-509.png > character set encoding issue in FetchElasticsearchHttp processor > > > Key: NIFI-6288 > URL: https://issues.apache.org/jira/browse/NIFI-6288 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > > I used FetchElasticsearchHttp processor to fetch documents in Elasticsearch > which have special UTF-8 chars, eg.: characters of foreign languages: > accented chars or Japanese/Chinese chars. > It was working as expected on platforms that have UTF-8 as a default > _file.encoding._ But on e.g.: SLES12 VM, the special chars in the document, > turned to "?" in the fetched, output flow files. > > Taking a look at the source code showed: > - AbstractElasticsearchProcessor declares *CHARSET* property descriptor, but > it was not added to > AbstractElasticsearchHttpProcessor in the static initializer block. > - and in the place where the content of the document is written to the > flowfile, : > [https://github.com/apache/nifi/blob/65c41ab917d7b5f323aa71d841cc03b29e12d480/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/FetchElasticsearchHttp.java#L237] > it uses > {code} > out.write(source.toString().getBytes()); > {code} > > which will only work if the JVM's _file.encoding_ is UTF-8. > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6288) character set encoding issue in FetchElasticsearchHttp processor
[ https://issues.apache.org/jira/browse/NIFI-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6288: --- Attachment: (was: image-2019-05-10-13-47-14-725.png) > character set encoding issue in FetchElasticsearchHttp processor > > > Key: NIFI-6288 > URL: https://issues.apache.org/jira/browse/NIFI-6288 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > > I used FetchElasticsearchHttp processor to fetch documents in Elasticsearch > which have special UTF-8 chars, eg.: characters of foreign languages: > accented chars or Japanese/Chinese chars. > It was working as expected on platforms that have UTF-8 as a default > _file.encoding._ But on e.g.: SLES12 VM, the special chars in the document, > turned to "?" in the fetched, output flow files. > > Taking a look at the source code showed: > - AbstractElasticsearchProcessor declares *CHARSET* property descriptor, but > it was not added to > AbstractElasticsearchHttpProcessor in the static initializer block. > - and in the place where the content of the document is written to the > flowfile, : > [https://github.com/apache/nifi/blob/65c41ab917d7b5f323aa71d841cc03b29e12d480/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/FetchElasticsearchHttp.java#L237] > it uses > {code} > out.write(source.toString().getBytes()); > {code} > > which will only work if the JVM's _file.encoding_ is UTF-8. > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6288) character set encoding issue in FetchElasticsearchHttp processor
[ https://issues.apache.org/jira/browse/NIFI-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6288: --- Attachment: (was: image-2019-05-10-13-47-14-509.png) > character set encoding issue in FetchElasticsearchHttp processor > > > Key: NIFI-6288 > URL: https://issues.apache.org/jira/browse/NIFI-6288 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > > I used FetchElasticsearchHttp processor to fetch documents in Elasticsearch > which have special UTF-8 chars, eg.: characters of foreign languages: > accented chars or Japanese/Chinese chars. > It was working as expected on platforms that have UTF-8 as a default > _file.encoding._ But on e.g.: SLES12 VM, the special chars in the document, > turned to "?" in the fetched, output flow files. > > Taking a look at the source code showed: > - AbstractElasticsearchProcessor declares *CHARSET* property descriptor, but > it was not added to > AbstractElasticsearchHttpProcessor in the static initializer block. > - and in the place where the content of the document is written to the > flowfile, : > [https://github.com/apache/nifi/blob/65c41ab917d7b5f323aa71d841cc03b29e12d480/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/FetchElasticsearchHttp.java#L237] > it uses > {code} > out.write(source.toString().getBytes()); > {code} > > which will only work if the JVM's _file.encoding_ is UTF-8. > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6288) character set encoding issue in FetchElasticsearchHttp processor
[ https://issues.apache.org/jira/browse/NIFI-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6288: --- Attachment: (was: image-2019-05-10-13-40-09-396.png) > character set encoding issue in FetchElasticsearchHttp processor > > > Key: NIFI-6288 > URL: https://issues.apache.org/jira/browse/NIFI-6288 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > > I used FetchElasticsearchHttp processor to fetch documents in Elasticsearch > which have special UTF-8 chars, eg.: characters of foreign languages: > accented chars or Japanese/Chinese chars. > It was working as expected on platforms that have UTF-8 as a default > _file.encoding._ But on e.g.: SLES12 VM, the special chars in the document, > turned to "?" in the fetched, output flow files. > > Taking a look at the source code showed: > - AbstractElasticsearchProcessor declares *CHARSET* property descriptor, but > it was not added to > AbstractElasticsearchHttpProcessor in the static initializer block. > - and in the place where the content of the document is written to the > flowfile, : > [https://github.com/apache/nifi/blob/65c41ab917d7b5f323aa71d841cc03b29e12d480/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/FetchElasticsearchHttp.java#L237] > it uses > {code} > out.write(source.toString().getBytes()); > {code} > > which will only work if the JVM's _file.encoding_ is UTF-8. > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6288) character set encoding issue in FetchElasticsearchHttp processor
[ https://issues.apache.org/jira/browse/NIFI-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6288: --- Attachment: image-2019-05-10-13-40-09-396.png > character set encoding issue in FetchElasticsearchHttp processor > > > Key: NIFI-6288 > URL: https://issues.apache.org/jira/browse/NIFI-6288 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > Labels: easyfix > > I used FetchElasticsearchHttp processor to fetch documents in Elasticsearch > which have special UTF-8 chars, eg.: characters of foreign languages: > accented chars or Japanese/Chinese chars. > It was working as expected on platforms that have UTF-8 as a default > _file.encoding._ But on e.g.: SLES12 VM, the special chars in the document, > turned to "?" in the fetched, output flow files. > > Taking a look at the source code showed: > - AbstractElasticsearchProcessor declares *CHARSET* property descriptor, but > it was not added to > AbstractElasticsearchHttpProcessor in the static initializer block. > - and in the place where the content of the document is written to the > flowfile, : > [https://github.com/apache/nifi/blob/65c41ab917d7b5f323aa71d841cc03b29e12d480/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/FetchElasticsearchHttp.java#L237] > it uses > {code} > out.write(source.toString().getBytes()); > {code} > > which will only work if the JVM's _file.encoding_ is UTF-8. > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6288) character set encoding issue in FetchElasticsearchHttp processor
Endre Kovacs created NIFI-6288: -- Summary: character set encoding issue in FetchElasticsearchHttp processor Key: NIFI-6288 URL: https://issues.apache.org/jira/browse/NIFI-6288 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.9.2 Reporter: Endre Kovacs Assignee: Endre Kovacs I used the FetchElasticsearchHttp processor to fetch documents from Elasticsearch that contain special UTF-8 characters, e.g. characters of foreign languages: accented characters or Japanese/Chinese characters. It worked as expected on platforms whose default _file.encoding_ is UTF-8, but on e.g. a SLES12 VM, the special characters in the documents turned into "?" in the fetched output flow files. A look at the source code showed: - AbstractElasticsearchProcessor declares the *CHARSET* property descriptor, but it was not added to AbstractElasticsearchHttpProcessor in the static initializer block. - Where the content of the document is written to the flowfile ([https://github.com/apache/nifi/blob/65c41ab917d7b5f323aa71d841cc03b29e12d480/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/FetchElasticsearchHttp.java#L237]), it uses {code} out.write(source.toString().getBytes()); {code} which only works if the JVM's _file.encoding_ is UTF-8. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
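The failure mode above can be demonstrated outside NiFi. The sketch below contrasts the buggy pattern (`getBytes()` with no argument, which uses the JVM default `file.encoding`) with the fix of passing the charset explicitly; the class and method names are mine, for illustration, not NiFi code:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetFix {
    // Buggy pattern: getBytes() encodes with the JVM default file.encoding,
    // so non-ASCII characters degrade to '?' when the default is e.g. US-ASCII.
    static byte[] writeWithDefault(String source) {
        return source.getBytes();
    }

    // Fixed pattern: pass the configured charset explicitly so output
    // is identical on every platform, regardless of file.encoding.
    static byte[] writeWithCharset(String source, Charset charset) {
        return source.getBytes(charset);
    }

    public static void main(String[] args) {
        String doc = "árvíztűrő 日本語"; // accented and CJK characters
        byte[] utf8 = writeWithCharset(doc, StandardCharsets.UTF_8);
        // Encoding explicitly as UTF-8 round-trips losslessly.
        System.out.println(new String(utf8, StandardCharsets.UTF_8).equals(doc));
    }
}
```

Running with `-Dfile.encoding=US-ASCII` makes `writeWithDefault` produce `?` bytes while `writeWithCharset` is unaffected, which matches the SLES12 symptom described in the issue.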
[jira] [Updated] (NIFI-6257) duplicate info in administrator's guide
[ https://issues.apache.org/jira/browse/NIFI-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-6257: --- Description: I was just reading through the nifi.properties part of the administrator's guide: [https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#cluster_node_properties] and saw duplicated lines in the doc, as follows: both {code:java} nifi.cluster.flow.election.max.wait.time {code} and {code:java} nifi.cluster.flow.election.max.candidates {code} appear twice in the table, with the exact same description. was: I was just reading through the nifi.properties part of the administrator's guide: [https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html] and saw duplicated lines in the doc, as follows: both {code:java} nifi.cluster.flow.election.max.wait.time {code} and {code:java} nifi.cluster.flow.election.max.candidates {code} appear twice in the table, with the exact same description. > duplicate info in administrator's guide > --- > > Key: NIFI-6257 > URL: https://issues.apache.org/jira/browse/NIFI-6257 > Project: Apache NiFi > Issue Type: Bug > Components: Documentation Website >Affects Versions: 1.9.2 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Trivial > Labels: documentation > Attachments: Screen Shot 2019-05-03 at 09.38.14.png > > > I was just reading through the nifi.properties part of the administrator's > guide: > [https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#cluster_node_properties] > and saw duplicated lines in the doc, as follows: > both > {code:java} > nifi.cluster.flow.election.max.wait.time > {code} > and > {code:java} > nifi.cluster.flow.election.max.candidates > {code} > appear twice in the table, with the exact same description. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6257) duplicate info in administrator's guide
Endre Kovacs created NIFI-6257: -- Summary: duplicate info in administrator's guide Key: NIFI-6257 URL: https://issues.apache.org/jira/browse/NIFI-6257 Project: Apache NiFi Issue Type: Bug Components: Documentation Website Affects Versions: 1.9.2 Reporter: Endre Kovacs Assignee: Endre Kovacs Attachments: Screen Shot 2019-05-03 at 09.38.14.png I was just reading through the nifi.properties part of the administrator's guide: [https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html] and saw duplicated lines in the doc, as follows: both {code:java} nifi.cluster.flow.election.max.wait.time {code} and {code:java} nifi.cluster.flow.election.max.candidates {code} appear twice in the table, with the exact same description. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6180) DruidTranquilityController: expose `firehoseGracePeriod` for higher index task throughput
Endre Kovacs created NIFI-6180: -- Summary: DruidTranquilityController: expose `firehoseGracePeriod` for higher index task throughput Key: NIFI-6180 URL: https://issues.apache.org/jira/browse/NIFI-6180 Project: Apache NiFi Issue Type: Improvement Components: Extensions Affects Versions: 1.9.1 Reporter: Endre Kovacs Assignee: Endre Kovacs While integration testing NiFi with Druid, I noticed a constant overhead in the realtime index tasks: e.g. even if I set {code:java} druid-cs-window-period=PT1M{code} the task duration was still 426726 ms (~7 minutes); 5 of those 7 minutes were due to the firehose's grace period. My suggestion is to expose `druidBeam.firehoseGracePeriod` ([https://github.com/druid-io/tranquility/blob/master/docs/configuration.md#properties]) on the DruidTranquilityController controller service, and build it into the beam config at [https://github.com/apache/nifi/blob/3696b5bfcf0bd9e12ee4e9472f3413a93c9c0fcd/nifi-nar-bundles/nifi-druid-bundle/nifi-druid-controller-service/src/main/java/org/apache/nifi/controller/druid/DruidTranquilityController.java#L452-L457]. Currently this setting takes its default value of 5 minutes, imposing a constant overhead on each indexing task initiated by NiFi, no matter how small the workload window. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6179) DruidTranquilityController: add INDEX_RETRY_PERIOD to properties list, fix description
Endre Kovacs created NIFI-6179: -- Summary: DruidTranquilityController: add INDEX_RETRY_PERIOD to properties list, fix description Key: NIFI-6179 URL: https://issues.apache.org/jira/browse/NIFI-6179 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.9.1 Reporter: Endre Kovacs Assignee: Endre Kovacs While integration testing NiFi with Druid, I came across the following in the NiFi source code for DruidTranquilityController: INDEX_RETRY_PERIOD [https://github.com/apache/nifi/blob/3696b5bfcf0bd9e12ee4e9472f3413a93c9c0fcd/nifi-nar-bundles/nifi-druid-bundle/nifi-druid-controller-service/src/main/java/org/apache/nifi/controller/druid/DruidTranquilityController.java#L259] is not added to the list of properties at [https://github.com/apache/nifi/blob/3696b5bfcf0bd9e12ee4e9472f3413a93c9c0fcd/nifi-nar-bundles/nifi-druid-bundle/nifi-druid-controller-service/src/main/java/org/apache/nifi/controller/druid/DruidTranquilityController.java#L319-L339]. An additional minor issue with this property: its description [https://github.com/apache/nifi/blob/3696b5bfcf0bd9e12ee4e9472f3413a93c9c0fcd/nifi-nar-bundles/nifi-druid-bundle/nifi-druid-controller-service/src/main/java/org/apache/nifi/controller/druid/DruidTranquilityController.java#L262] is a copy-paste of WINDOW_PERIOD's: [https://github.com/apache/nifi/blob/3696b5bfcf0bd9e12ee4e9472f3413a93c9c0fcd/nifi-nar-bundles/nifi-druid-bundle/nifi-druid-controller-service/src/main/java/org/apache/nifi/controller/druid/DruidTranquilityController.java#L269-L272] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6031) ListenTCP processor does not release connections if client disconnects abruptly
Endre Kovacs created NIFI-6031: -- Summary: ListenTCP processor does not release connections if client disconnects abruptly Key: NIFI-6031 URL: https://issues.apache.org/jira/browse/NIFI-6031 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.5.0 Reporter: Endre Kovacs Assignee: Endre Kovacs If a client connected to the ListenTCP processor disconnects abruptly (e.g. its VPN connection to NiFi's network drops), the client TCP socket on the NiFi node lingers for an extended amount of time (such a socket can remain in the ESTABLISHED state even 12 hours after the disconnection), or until a manual workaround is applied: restarting the ListenTCP processor. An ideal fix would let the underlying OS probe whether the remote side is still alive and disconnect the socket if not. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
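The OS-level probing the issue asks for is what TCP keepalive provides. A minimal sketch of the idea (the helper name is mine, not the actual NiFi fix; the probe interval itself is governed by OS tunables such as Linux's `tcp_keepalive_time`):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class KeepAliveAccept {
    // Accept a connection and enable SO_KEEPALIVE, so the OS periodically
    // probes an idle peer and tears the connection down if the peer has
    // vanished, instead of leaving the socket ESTABLISHED indefinitely.
    static Socket acceptWithKeepAlive(ServerSocket server) throws IOException {
        Socket client = server.accept();
        client.setKeepAlive(true); // asks the kernel to send keepalive probes
        return client;
    }

    public static void main(String[] args) throws IOException {
        // Loopback demo: connect a peer, accept it, and confirm the option.
        try (ServerSocket server = new ServerSocket(0);
             Socket peer = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = acceptWithKeepAlive(server)) {
            System.out.println(accepted.getKeepAlive());
        }
    }
}
```

With default kernel settings the first probe is sent only after hours of idleness, so in practice the OS tunables must be lowered as well for the dead peer to be detected promptly.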
[jira] [Comment Edited] (NIFI-5983) recordReader parse problems in PutDatabaseRecord: flowfiles not transferred to failure relationship
[ https://issues.apache.org/jira/browse/NIFI-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757491#comment-16757491 ] Endre Kovacs edited comment on NIFI-5983 at 1/31/19 4:50 PM: - two PRs are sent: solving the problem at the processor side: [https://github.com/apache/nifi/pull/3280] solving the problem at the recordReaders side: [https://github.com/apache/nifi/pull/3282] they are mutually exclusive: if one of them is accepted, the other one should be thrown away. was (Author: andrewsmith87): two PRs are sent: solving the problem at the processor side: https://github.com/apache/nifi/pull/3280 solving the problem at the recordReaders side: https://github.com/apache/nifi/pull/3282 > recordReader parse problems in PutDatabaseRecord: flowfiles not transferred > to failure relationship > --- > > Key: NIFI-5983 > URL: https://issues.apache.org/jira/browse/NIFI-5983 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.7.0 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > When using PutDatabaseRecord, parse problems in record reader (Avro, CSV, but > possibly others too) should cause the flowfiles to transfer to failure > relationship, however, they are instead session rollbacked. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5983) recordReader parse problems in PutDatabaseRecord: flowfiles not transferred to failure relationship
[ https://issues.apache.org/jira/browse/NIFI-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs updated NIFI-5983: --- Status: Patch Available (was: Open) two PRs are sent: solving the problem at the processor side: https://github.com/apache/nifi/pull/3280 solving the problem at the recordReaders side: https://github.com/apache/nifi/pull/3282 > recordReader parse problems in PutDatabaseRecord: flowfiles not transferred > to failure relationship > --- > > Key: NIFI-5983 > URL: https://issues.apache.org/jira/browse/NIFI-5983 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.7.0 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > When using PutDatabaseRecord, parse problems in record reader (Avro, CSV, but > possibly others too) should cause the flowfiles to transfer to failure > relationship, however, they are instead session rollbacked. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5983) recordReader parse problems in PutDatabaseRecord: flowfiles not transferred to failure relationship
Endre Kovacs created NIFI-5983: -- Summary: recordReader parse problems in PutDatabaseRecord: flowfiles not transferred to failure relationship Key: NIFI-5983 URL: https://issues.apache.org/jira/browse/NIFI-5983 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.7.0 Reporter: Endre Kovacs Assignee: Endre Kovacs When using PutDatabaseRecord, parse problems in the record reader (Avro, CSV, and possibly others) should cause the flowfiles to be transferred to the failure relationship; instead, the session is rolled back. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
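The distinction in NIFI-5983 is between catching the reader's parse exception inside the processor and transferring the flowfile to the failure relationship, versus letting the exception propagate, which makes the framework roll the session back and re-queue the flowfile indefinitely. A framework-free sketch of the routing pattern; `MalformedRecordException` here is a minimal stand-in, not NiFi's actual class:

```java
// Stand-in for the parse exception a record reader would throw
// (the real processor uses NiFi's MalformedRecordException).
class MalformedRecordException extends Exception {
    MalformedRecordException(String msg) { super(msg); }
}

public class FailureRouting {
    // Toy parser: throws on content that is not a record.
    static void parseRecords(String content) throws MalformedRecordException {
        if (!content.startsWith("{")) {
            throw new MalformedRecordException("not a record: " + content);
        }
    }

    // Returns the relationship the flowfile should go to. Catching the
    // parse exception here is what prevents the rollback: an uncaught
    // exception in onTrigger rolls back the session instead.
    static String route(String content) {
        try {
            parseRecords(content);
            return "success";
        } catch (MalformedRecordException e) {
            return "failure";
        }
    }

    public static void main(String[] args) {
        System.out.println(route("{\"id\":1}")); // success
        System.out.println(route("garbage"));    // failure
    }
}
```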
[jira] [Assigned] (NIFI-1490) Add multipart request support to ListenHTTP Processor
[ https://issues.apache.org/jira/browse/NIFI-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs reassigned NIFI-1490: -- Assignee: Endre Kovacs > Add multipart request support to ListenHTTP Processor > - > > Key: NIFI-1490 > URL: https://issues.apache.org/jira/browse/NIFI-1490 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Andrew Serff >Assignee: Endre Kovacs >Priority: Major > > The current ListenHTTP processor does not seem to support multipart requests > that are encoded with multipart/form-data. When a multipart request is > received, the ListenHTTPServlet just copies the Request InputStream to the > FlowFiles content which leaves the form encoding wrapper in the content and > in turn makes the file invalid. > Specifically, we want to be able to support file uploads in a multipart > request. > See this thread in the mailing list for more info: > http://mail-archives.apache.org/mod_mbox/nifi-users/201602.mbox/%3C6DE9CEEF-2A37-480F-8D3C-5028C590FD9E%40acesinc.net%3E -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFI-3469) Add multipart request support to HandleHttpRequest Processor
[ https://issues.apache.org/jira/browse/NIFI-3469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs reassigned NIFI-3469: -- Assignee: Endre Kovacs > Add multipart request support to HandleHttpRequest Processor > > > Key: NIFI-3469 > URL: https://issues.apache.org/jira/browse/NIFI-3469 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Koji Kawamura >Assignee: Endre Kovacs >Priority: Major > > Currently, HandleHttpRequest outputs a single FlowFile containing all > multipart values as following: > {code} > --ef07e8bf36c274d3 > Content-Disposition: form-data; name="p1" > v1 > --ef07e8bf36c274d3 > Content-Disposition: form-data; name="p2" > v2 > --ef07e8bf36c274d3-- > {code} > Many users requested adding upload files support to NiFi. > In order for HandleHttpRequest to support multipart data we need to add > followings (this is based on a brief researching and can be more complex or > simple): > We need to use HttpServletRequest#getParts() as written in this stackoverflow > thread: > http://stackoverflow.com/questions/3337056/convenient-way-to-parse-incoming-multipart-form-data-parameters-in-a-servlet > Also, we probably need a custom MultiPartInputStreamParser implementation. > Because Jetty's default implementation writes input data to temporary > directory on file system, instead, we'd like NiFi to write those into output > FlowFiles content in streaming fashion. > And we need request size validation checks, threshold for those validation > should be passed via javax.servlet.MultipartConfigElement. > Finally, we have to do something with HandleHttpResponse processor. > Once HandleHttpRequest processor start splitting incoming request into > multiple output FlowFiles, we need to wait for every fragment to be > processed, then execute HandleHttpRequest. > I think Wait/Notify processors (available from next version) will be helpful > here. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
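The multipart framing NIFI-3469 describes (parts separated by `--boundary` lines, each with headers, a blank line, then the body, closed by `--boundary--`) can be illustrated with a toy splitter. This is only a sketch of the framing: real parsers such as Jetty's `MultiPartInputStreamParser` work on streams and handle headers, CRLF edge cases, and size limits, which this deliberately does not:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class MultipartSplit {
    // Extracts part bodies from a multipart/form-data payload with the
    // given boundary. Preamble and the closing "--" delimiter are skipped.
    static List<String> partBodies(String payload, String boundary) {
        List<String> bodies = new ArrayList<>();
        String delim = "--" + boundary;
        for (String chunk : payload.split(Pattern.quote(delim))) {
            int sep = chunk.indexOf("\r\n\r\n");           // headers end at the blank line
            if (sep < 0 || chunk.startsWith("--")) continue; // preamble or final "--"
            bodies.add(chunk.substring(sep + 4).replaceAll("\r\n$", ""));
        }
        return bodies;
    }

    public static void main(String[] args) {
        String payload = "--b\r\nContent-Disposition: form-data; name=\"p1\"\r\n\r\n"
                + "v1\r\n--b\r\nContent-Disposition: form-data; name=\"p2\"\r\n\r\n"
                + "v2\r\n--b--";
        System.out.println(partBodies(payload, "b")); // [v1, v2]
    }
}
```

Splitting into one flowfile per part, as the issue proposes, would then mean emitting one output per entry of this list while streaming, rather than buffering to disk as Jetty's default parser does.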
[jira] [Assigned] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires
[ https://issues.apache.org/jira/browse/NIFI-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Kovacs reassigned NIFI-5557: -- Assignee: Endre Kovacs > PutHDFS "GSSException: No valid credentials provided" when krb ticket expires > - > > Key: NIFI-5557 > URL: https://issues.apache.org/jira/browse/NIFI-5557 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0 >Reporter: Endre Kovacs >Assignee: Endre Kovacs >Priority: Major > > when using *PutHDFS* processor in a kerberized environment, with a flow > "traffic" which approximately matches or less frequent then the lifetime of > the ticket of the principal, we see this in the log: > {code:java} > INFO [Timer-Driven Process Thread-4] o.a.h.io.retry.RetryInvocationHandler > Exception while invoking getFileInfo of class > ClientNamenodeProtocolTranslatorPB over host2/ip2:8020 after 13 fail over > attempts. Trying to fail over immediately. > java.io.IOException: Failed on local exception: java.io.IOException: Couldn't > setup connection for princi...@example.com to host2.example.com/ip2:8020; > Host Details : local host is: "host1.example.com/ip1"; destination host is: > "host2.example.com":8020; > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) > at org.apache.hadoop.ipc.Client.call(Client.java:1479) > at org.apache.hadoop.ipc.Client.call(Client.java:1412) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) > at com.sun.proxy.$Proxy134.getFileInfo(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771) > at sun.reflect.GeneratedMethodAccessor344.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > at com.sun.proxy.$Proxy135.getFileInfo(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317) > at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:254) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:360) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678) > at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:222) > {code} > and the flowfile is routed to failure relationship. > *To reproduce:* > Create a principal in your KDC with two minutes ticket lifetime, > and set up a similar flow: > {code:java} > GetFile => putHDFS - success- -> logAttributes > \ > fail >\ > -> logAttributes > {code} > copy a file to the input directory of the getFile processor. If the influx > of the flowfile is much more frequent, then the expiration time of the ticket: > {code:java} > watch -n 5 "cp book.txt /path/to/input" > {code} > then the flow will successfully run without issue. > If we adjust this, to: > {code:java} > watch -n 121 "cp book.txt /path/to/input" > {code} > then we will observe this issue. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires
[ https://issues.apache.org/jira/browse/NIFI-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594742#comment-16594742 ] Endre Kovacs commented on NIFI-5557: Please assign me this issue, as I already have a proposed fix. Thanks! Endre > PutHDFS "GSSException: No valid credentials provided" when krb ticket expires > - > > Key: NIFI-5557 > URL: https://issues.apache.org/jira/browse/NIFI-5557 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0 >Reporter: Endre Kovacs >Priority: Major > > when using *PutHDFS* processor in a kerberized environment, with a flow > "traffic" which approximately matches or less frequent then the lifetime of > the ticket of the principal, we see this in the log: > {code:java} > INFO [Timer-Driven Process Thread-4] o.a.h.io.retry.RetryInvocationHandler > Exception while invoking getFileInfo of class > ClientNamenodeProtocolTranslatorPB over host2/ip2:8020 after 13 fail over > attempts. Trying to fail over immediately. 
> java.io.IOException: Failed on local exception: java.io.IOException: Couldn't > setup connection for princi...@example.com to host2.example.com/ip2:8020; > Host Details : local host is: "host1.example.com/ip1"; destination host is: > "host2.example.com":8020; > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) > at org.apache.hadoop.ipc.Client.call(Client.java:1479) > at org.apache.hadoop.ipc.Client.call(Client.java:1412) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) > at com.sun.proxy.$Proxy134.getFileInfo(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771) > at sun.reflect.GeneratedMethodAccessor344.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > at com.sun.proxy.$Proxy135.getFileInfo(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317) > at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:254) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:360) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678) > at 
org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:222) > {code} > and the flowfile is routed to failure relationship. > *To reproduce:* > Create a principal in your KDC with two minutes ticket lifetime, > and set up a similar flow: > {code:java} > GetFile => putHDFS - success- -> logAttributes > \ > fail >\ > -> logAttributes > {code} > copy a file to the input directory of the getFile processor. If the influx > of the flowfile is much more frequent, then the expiration time of the ticket: > {code:java} > watch -n 5 "cp book.txt /path/to/input" > {code} > then the flow will successfully run without issue. > If we adjust this, to: > {code:java} > watch -n 121 "cp book.txt /path/to/input" > {code} > then we will observe this issue. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires
Endre Kovacs created NIFI-5557: -- Summary: PutHDFS "GSSException: No valid credentials provided" when krb ticket expires Key: NIFI-5557 URL: https://issues.apache.org/jira/browse/NIFI-5557 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.5.0 Reporter: Endre Kovacs When using the *PutHDFS* processor in a kerberized environment, with flow "traffic" that approximately matches or is less frequent than the ticket lifetime of the principal, we see this in the log: {code:java} INFO [Timer-Driven Process Thread-4] o.a.h.io.retry.RetryInvocationHandler Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over host2/ip2:8020 after 13 fail over attempts. Trying to fail over immediately. java.io.IOException: Failed on local exception: java.io.IOException: Couldn't setup connection for princi...@example.com to host2.example.com/ip2:8020; Host Details : local host is: "host1.example.com/ip1"; destination host is: "host2.example.com":8020; at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) at org.apache.hadoop.ipc.Client.call(Client.java:1479) at org.apache.hadoop.ipc.Client.call(Client.java:1412) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy134.getFileInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771) at sun.reflect.GeneratedMethodAccessor344.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy135.getFileInfo(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317) at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:254) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:360) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678) at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:222) {code} and the flowfile is routed to the failure relationship. *To reproduce:* Create a principal in your KDC with a two-minute ticket lifetime, and set up a similar flow: {code:java} GetFile => putHDFS - success- -> logAttributes \ fail \ -> logAttributes {code} Copy a file to the input directory of the GetFile processor. If the influx of flowfiles is much more frequent than the expiration time of the ticket: {code:java} watch -n 5 "cp book.txt /path/to/input" {code} then the flow runs without issue. If we adjust this to: {code:java} watch -n 121 "cp book.txt /path/to/input" {code} then we will observe this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
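The usual remedy for NIFI-5557 in Hadoop clients is to detect the expired ticket and re-login before the filesystem call, rather than letting the GSSException surface and route the flowfile to failure; Hadoop's `UserGroupInformation` exposes `checkTGTAndReloginFromKeytab()` for that check. A framework-free sketch of the relogin-then-run pattern; the `Credentials` interface and `withFreshTicket` name are illustrative stand-ins, not the Hadoop API:

```java
import java.util.function.Supplier;

// Illustrative stand-in for an expiring credential (not the Hadoop API).
interface Credentials {
    boolean isExpired();
    void relogin();
}

public class ReloginRetry {
    // Re-login when the ticket has expired, then run the operation.
    // A PutHDFS-style fix performs the Hadoop equivalent of this check
    // before each filesystem access, so an expired TGT is refreshed
    // instead of failing the flowfile.
    static <T> T withFreshTicket(Credentials creds, Supplier<T> op) {
        if (creds.isExpired()) {
            creds.relogin();
        }
        return op.get();
    }

    public static void main(String[] args) {
        // Simulated credential that starts expired and is renewed once.
        Credentials creds = new Credentials() {
            boolean expired = true;
            public boolean isExpired() { return expired; }
            public void relogin() { expired = false; }
        };
        String result = withFreshTicket(creds, () -> "file written");
        System.out.println(result + ", expired=" + creds.isExpired()); // file written, expired=false
    }
}
```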