Hello Mothi86,
I think you can achieve it by using HttpFS on the HDFS side. It is part of the
Hadoop project and acts as a proxy server for HDFS.
https://hadoop.apache.org/docs/stable/hadoop-hdfs-httpfs/index.html
In your case, running HttpFS server on the management nodes or the edge node
would be good.
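HttpFS exposes the same REST API as WebHDFS, typically on port 14000. A minimal sketch of building a request URL against it (the host, path, and user below are placeholders, not details from this thread):

```python
# Hedged sketch: addressing HDFS through an HttpFS proxy.
# HttpFS is WebHDFS-compatible, so the URL layout is /webhdfs/v1<path>?op=...
from urllib.parse import urlencode

def httpfs_url(host, path, op, user, port=14000):
    """Build an HttpFS (WebHDFS-compatible) REST URL."""
    query = urlencode({"op": op, "user.name": user})
    return f"http://{host}:{port}/webhdfs/v1{path}?{query}"

# Fetch this URL with any HTTP client (or NiFi's InvokeHTTP) to list a directory
url = httpfs_url("edge-node.example.com", "/data/incoming", "LISTSTATUS", "nifi")
```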
Hi Matt and Koji,
Thanks for the information. So if there is no flow.xml.gz in the conf
directory when a secured NiFi cluster starts, we need to add the "Node
Identity" (and "Initial Admin Identity") to the policies (for each component or
PG) explicitly, right? That's my case. After adding
For reference:
Netty 4.1 NOTICE
https://github.com/netty/netty/blob/4.1/NOTICE.txt
Netty 3.7 NOTICE
https://github.com/netty/netty/blob/3.7/NOTICE.txt
On Tue, Jun 27, 2017 at 10:21 PM, Tony Kurc wrote:
Background, I asked Mike how he put together the LICENSE, and why he added
a separate section in the LICENSE for Google Protocol Buffers 3.3.1, and
his answer, which made sense, was "well, what existed there already had a
version (2.5.0)".
Interesting note, the Google Protocol Buffers LICENSE
Hello all,
I'm attempting to merge a LICENSE and NOTICE I've created for a new grpc
processor bundle [1,2] into the NiFi assembly. I've run into a couple of
things I don't know how to resolve:
1. If I add a new (transitive) dependency with a newer version than exists
elsewhere in the code
The multipart/form-data body would have to be preemptively created and
stored in your flowfile payload. InvokeHTTP could then be used to POST the
message body to the remote server (after having set the appropriate
content-type).
i.e. you have to manually construct the multipart form in the
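A minimal sketch of constructing such a multipart body by hand, which could then be written into the flowfile content before InvokeHTTP POSTs it (the field names and boundary scheme here are illustrative assumptions, not from the thread):

```python
# Hedged sketch: manually build a multipart/form-data body for simple text
# fields. The returned content type would be set on InvokeHTTP's Content-Type
# property and the body stored as the flowfile payload.
import uuid

def build_multipart(fields):
    """Return (content_type, body_bytes) for a dict of text fields."""
    boundary = uuid.uuid4().hex
    lines = []
    for name, value in fields.items():
        lines.append(f"--{boundary}")
        lines.append(f'Content-Disposition: form-data; name="{name}"')
        lines.append("")  # blank line separates part headers from the value
        lines.append(value)
    lines.append(f"--{boundary}--")  # closing boundary
    body = "\r\n".join(lines).encode("utf-8")
    return f"multipart/form-data; boundary={boundary}", body

ctype, body = build_multipart({"comment": "hello"})
```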
This is a bit outside of the box, but I have actually implemented this
solution previously.
My scenario was very similar. NiFi was installed outside of the firewalled
HDFS cluster. The only external access to the HDFS cluster was through SSH.
Therefore, my solution was to use SSH to call a
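Presumably the SSH call wrapped the `hdfs` CLI on a node inside the firewall; a hedged sketch of composing such a command, with the flowfile content piped over stdin (host, user, and paths are assumptions, not details from the thread):

```python
# Hedged sketch: argv for pushing data into a firewalled HDFS cluster via SSH.
# The remote side runs `hdfs dfs -put -`, which reads the file from stdin,
# so this command would be fed the flowfile content (e.g. from
# ExecuteStreamCommand in NiFi).
def ssh_hdfs_put(remote_host, hdfs_path, user="nifi"):
    """Compose the argv list for `ssh user@host hdfs dfs -put - <path>`."""
    return [
        "ssh", f"{user}@{remote_host}",
        "hdfs", "dfs", "-put", "-", hdfs_path,
    ]

cmd = ssh_hdfs_put("edge1.example.com", "/landing/data.csv")
```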
Sometimes it’s just an historical oversight that EL support isn’t available for
certain properties, but in this case the connection is initialized only once,
so the connection specific properties like hostname, port, username and
password cannot be evaluated against FlowFile attributes in any
Hi,
For publishing/consuming messages on a RabbitMQ queue, NiFi provides the
PublishAMQP and ConsumeAMQP processors.
However, only a limited set of properties support EL while others don't.
Properties supporting EL:
- Exchange Name
- Routing Key
Thus we can dynamically control these.
However, the remaining
Thanks Matt for the clarification. My cluster had an existing flow.xml that I
happened to copy from another NiFi instance.
On Jun 27, 2017 9:14 PM, "Matt Gilman" wrote:
Takanobu,
The dataflow-specific policies (any policies on the root Process Group) are
only granted for new instances when there is an existing flow.xml.gz in
your /conf directory. When there is no flow and the NiFi
instance is joining a cluster, the policies cannot be granted at startup
because
Hi Koji,
Thank you very much for the confirmation. Hmm... I will continue to investigate
why my cluster does not work correctly.
Thanks again,
Takanobu
-----Original Message-----
From: Koji Kawamura [mailto:ijokaruma...@gmail.com]
Sent: Tuesday, June 27, 2017 5:59 PM
To: dev
I just created a brand-new secured cluster now. NiFi automatically
created a policy "view the data" (and others) with the user defined as
"Initial Admin Identity" and "Node Identity" in conf/authorizers.xml.
It seems to be working as expected.
Koji
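For reference, the identities mentioned above would be defined in the file-based authorizer section of conf/authorizers.xml roughly as follows (the DNs are placeholders, not values from this thread):

```xml
<!-- Hedged sketch of the relevant authorizers.xml properties; the certificate
     DNs below are illustrative only. -->
<property name="Initial Admin Identity">CN=admin, OU=NiFi</property>
<property name="Node Identity 1">CN=node1.example.com, OU=NiFi</property>
<property name="Node Identity 2">CN=node2.example.com, OU=NiFi</property>
```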
On Tue, Jun 27, 2017 at 5:26 PM, Koji Kawamura
Hi Takanobu,
Glad to hear that you have it fixed.
> Although I defined the Node Identity before starting the cluster for the first
> time, it seemed NiFi did not automatically create the policies and I needed
> to add the Node Identity to the policy explicitly.
Thanks for sharing, ideally NiFi
Hi Koji,
Thank you for your quick and valuable answer! That's exactly what I need. After
adding "Node Identity" of authorizers.xml to the "view the data" policy, the
authorized user can list the queue.
>> IIRC, if you define the Node Identity before starting the secured cluster at
>> the