Hi Josef,
I would suggest reaching out to the Apache ZooKeeper community who would be
able to give you the best guidance on your ZooKeeper deployments. In the
meantime, I would also suggest reviewing the official ZooKeeper docs (
https://zookeeper.apache.org/documentation.html) as well.
Edward
Hi Max,
Having a quick look through the PR, it looks like it was closed due to
inactivity.
All that probably needs to happen is for someone to check that the change
is still good and then re-open the PR.
If someone can pick this up, the contributor guidelines, (
>> >
> >> > Found Banned Dependency:
> >> > org.apache.logging.log4j:log4j-core:jar:2.17.1
> >> >
> >> > while this version is not impacted by log4j security vulnerability:
> >> >
> >> > https://logging.apache.org
Hi Mike,
Can you give the full output you get from mvn dependency:tree? Work was
done in the 1.15.2 and 1.15.3 releases to remove log4j dependencies from
NiFi as a safety measure due to the recent Log4Shell issues.
Thanks
Edward
On Wed, Jan 19, 2022 at 10:24 AM Michal Tomaszewski <
michal.tomaszew...@cca.pl> wrote:
>
I've just had a quick look at the ZooKeeper JIRA, and it looks like there is
a bug around ZooKeeper session closing that was introduced as part of a
change in the 3.6 release. It looks like it might have been fixed in the 3.7
release:
https://issues.apache.org/jira/browse/ZOOKEEPER-3822
rong and will never work. We
> > could introduce a new RetryableLookupFailureException and have both
> > services catch IOException and throw the retryable exception, or both
> > services can let IOException be thrown and let the callers decide what
> > to do.
> >
> > On Wed, Oct 27, 2021 at 5:
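The retryable-exception idea discussed above could be sketched roughly as follows. This is an illustrative stand-in, not NiFi's actual service code: the exception name comes from the thread, while the lookup body and its parameters are hypothetical.

```java
public class RetryableLookupDemo {
    // Hypothetical exception name taken from the discussion above.
    static class RetryableLookupFailureException extends RuntimeException {
        RetryableLookupFailureException(String msg, Throwable cause) {
            super(msg, cause);
        }
    }

    // Stand-in for a lookup service call that may fail transiently.
    static String lookup(String key, boolean simulateNetworkError) {
        try {
            if (simulateNetworkError) {
                throw new java.io.IOException("connection reset");
            }
            return "value-for-" + key;
        } catch (java.io.IOException e) {
            // Wrap the transient failure so callers can decide to retry
            // instead of treating it as a hard failure.
            throw new RetryableLookupFailureException("lookup failed for " + key, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(lookup("id-1", false));
        try {
            lookup("id-2", true);
        } catch (RetryableLookupFailureException e) {
            System.out.println("retryable: " + e.getMessage());
        }
    }
}
```

The point of the wrapper is that callers can distinguish a transient lookup failure (worth retrying) from a hard failure, rather than every IOException looking terminal.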
le)
> processor. The same situation happened; flow files were penalized and the
> queue on upstream connection of LookupAttribute (Oracle) increased.
>
>
>
> Thank you in advance,
>
>
>
> --Bilal
>
>
>
>
>
> *From:* Edward Armes
> *Sent:* 26 October 20
, would you consider adding a new feature (Rollback On Failure) to the
> LookupAttribute processor, like the PutHiveQL processor has?
>
>
>
> Thank you for helping,
>
>
>
> --Bilal
>
>
>
>
>
> *From:* Edward Armes
> *Sent:* Tuesday, 26 October 2021 01:32
> *To:
ecide if they want to connect Retry back to
> > self to get the current behavior, auto-terminate it, or connect it to
> > the next processor like this case wants to do.
> >
> >
> > On Mon, Oct 25, 2021 at 4:01 PM Edward Armes
> wrote:
> > >
> > >
Hmm, it sounds to me like there might be 2 bugs here.
One is in the LookupAttribute processor, which does not isolate the loading
of attributes from a FlowFile; that may legitimately cause an IOException
that would result in the FlowFile needing to be retried. The other is in the
TeradataDB lookup service not
Hi Thắng,
It feels to me like you may need more NiFi nodes in your cluster, as it
sounds like the current load is not distributed evenly enough across the
cluster. Would you be able to share a few additional pieces of information
to help the community help you? Specifically, what version of NiFi are you
running,
sting things out on a sandbox or dev env we already have.
>
> Thank you,
>
> On Fri, Sep 10, 2021 at 12:04 PM Edward Armes
> wrote:
>
>> Hi Dima,
>>
>> Ideally you should not run anything as the root user as it tends to cause
>> more issues than it sol
Hi Dima,
Ideally you should not run anything as the root user, as it tends to cause
more issues than it solves in the long term.
Secondly, I would recommend against running more than one NiFi instance
concurrently on a host without some sort of isolation, like a container or a
jail.
If you're
Hi,
Looking at the error I would guess that for some reason the PutHDFS
processor isn't able to resolve the data node in HDFS.
Do you have any additional information around HDFS in your NiFi app log, or
any information in the HDFS logs?
Otherwise I would suggest lowering the log level
Hi Sharma,
An alternative could be to duplicate the FlowFile from the HandleHTTPRequest
processor: have one copy wait while the other goes through an
AttributesToJSON processor and stores the JSON in a lookup cache. The other
FlowFile then grabs that JSON from the lookup cache and then uses a
ReplaceText processor
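The cache hand-off described above can be illustrated with a plain in-memory map standing in for the distributed lookup cache. Everything here is a hypothetical sketch of the idea, not the actual NiFi processors or DistributedMapCache API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LookupCacheDemo {
    // Stand-in for a shared lookup cache: one flow branch stores the JSON
    // built from attributes, the other branch fetches it by correlation key.
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    static void store(String correlationKey, String json) {
        cache.put(correlationKey, json);
    }

    static String fetch(String correlationKey) {
        return cache.get(correlationKey); // null if the other branch hasn't stored yet
    }

    public static void main(String[] args) {
        // Branch 1: attributes serialized to JSON and stored under the request id.
        store("request-42", "{\"user\":\"sharma\",\"path\":\"/data\"}");
        // Branch 2: the waiting FlowFile retrieves the JSON by the same key.
        System.out.println(fetch("request-42"));
    }
}
```

In a real flow the correlation key would be a FlowFile attribute shared by both branches, and the second branch would need to wait (or retry) until the first has written its entry.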
Hi Eric,
I looked into this in September last year. I found that a variant of
master-driven development was the best course of action for collaboration on
a single NiFi flow.
As far as I can tell, the git integration is deliberately designed to be as
simple as possible; the downside of this is
Hi Ami,
Based on the error you've got in the user log, it looks like you've got
a local trust issue. If you could tell us what you've already tried,
someone might be able to help you a bit more.
Edward
On 27/04/2020 05:36, Ami Goldenberg wrote:
Hi,
We are trying to deploy NiFi on
Hi Jouvin,
I believe you are correct that the InferAvroSchema and ConvertRecord
processors do work differently. I believe this is because InferAvroSchema
uses Apache Kite, while ConvertRecord derives the schema from the record
reader itself.
As an aside I have also noticed that when
Hi Jeremy,
In this case I don't think there is an easy answer here.
You may have some luck with adjusting the max runtime of the processor, but
without checking the processor's implementation I couldn't know for certain
whether that would have any effect.
Edward
On Mon, 9 Mar 2020, 06:34 Jeremy
Hi Lei,
As far as I'm aware, there currently isn't an out-of-the-box method to do
this.
However, I would look at using a combination of the NiFi Registry and the
tools in the NiFi Toolkit to roll your own.
Also, there have been multiple questions to both this and the dev mailing
list, asking what could be
Hmm, I wonder if there's a change that could be made to expose this error
so it's a bit more obvious; maybe one for the dev mailing list?
Edward
On Wed, Aug 14, 2019 at 3:12 PM Pierre Villard
wrote:
> Glad you sorted it out and thanks for letting us know!
> In case you missed it, you might be
Hi Nicolas,
This is another dumb question, as I've only ever seen this before when I've
accidentally connected to a secured NiFi cluster over HTTP and not HTTPS.
From what I've seen, NiFi won't ask your browser to do a connection upgrade
(HTTP -> HTTPS).
When you type in the address, are you sure your
Hi Jason,
This isn't explicitly documented anywhere; however, the locations of all the
key paths for NiFi can be found throughout the documentation. Hopefully a
combination of this email thread and the official documentation should be
enough for your client to grant the AV exemptions you need.
Hi Peter,
I think this depends on where this data is stored. If this data is available
as metrics recorded by NiFi, then a reporting task would be the best way
forward. However, if this is data that is recorded in your FlowFiles as part
of your flow, then I think you're looking at either collecting in
spawn 4,000 threads behind the scenes? The tcp connections will
> be idle most of the time.
> >
> > thanks
> > Clay
> >
> >
> > On Fri, Aug 2, 2019 at 6:10 AM Edward Armes
> wrote:
> >>
> >> Hi Clay,
> >>
> >> Because
Hi Clay,
Because NiFi uses a thread pool underneath for its own threading, and each
processor instance runs in its own thread, I don't see any reason why not.
One thing to note is that the ListenTCP processor appears to have been
written such that it gets all the
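As a generic illustration of why a shared pool means thousands of mostly-idle connections need not each pin a thread of their own (this is plain Java, not NiFi's actual ListenTCP implementation):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPoolDemo {
    // Run many short jobs on a small fixed pool and count completions:
    // the pool's few threads are multiplexed across all the jobs.
    static int runOnPool(int threads, int tasks) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger completed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                completed.incrementAndGet();
                done.countDown();
            });
        }
        try {
            done.await(); // wait until every job has run
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdown();
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println("completed " + runOnPool(4, 100) + " tasks on 4 threads");
    }
}
```

The same principle is why a processor serving many connections does not translate into one OS thread per connection.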
yet loaded)
>
> .
>
> So there must be some weird back-and-forth dance between the ldap user
> group provider and the file policy provider ... But I don't understand the
> dance in question
> On 19/07/2019 at 15:38, Edward Armes wrote:
>
> Hi Nicolas,
>
> In your act
r
> file-access-policy-provider property>
>
>
>
>
> On 19/07/2019 at 12:03, Pierre Villard wrote:
>
> Hi Nicolas,
>
> Could you share the full content of your authorizers.xml file? Sometimes
> it's just a matter of references not being in the right "order"
7/2019 at 11:36, Edward Armes wrote:
>
> Hi Nicolas,
>
> This one is a bit of a Spring special. The actual cause here is that the
> Spring Bean that is being created from this file has silently failed, and
> thus the auto-wiring has failed as well. The result is you get this l
Hi Nicolas,
This one is a bit of a Spring special. The actual cause here is that the
Spring Bean that is being created from this file has silently failed, and
thus the auto-wiring has failed as well. The result is you get this lovely
misleading error. The normal reason for the bean not being
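The "silently failed bean, misleading error later" pattern described above can be illustrated without Spring at all. This is a hypothetical stand-in, not NiFi's or Spring's actual wiring code:

```java
public class SilentFailureDemo {
    // Illustration only: a component whose setup swallows its real error,
    // so the failure only surfaces much later, far from its cause.
    static class Authorizer {
        private String policy; // stays null if configure() fails silently

        void configure(String config) {
            try {
                if (config == null) {
                    throw new IllegalArgumentException("bad authorizers config");
                }
                policy = config;
            } catch (IllegalArgumentException e) {
                // Swallowed: the caller never learns configuration failed.
            }
        }

        boolean isConfigured() {
            return policy != null;
        }
    }

    public static void main(String[] args) {
        Authorizer a = new Authorizer();
        a.configure(null); // the real error is hidden here
        // Much later, the only visible symptom is a confusing failure state,
        // analogous to the misleading auto-wiring error in the thread above.
        System.out.println("configured? " + a.isConfigured());
    }
}
```

This is why the useful debugging step is usually to look at the earliest point where the configuration file is parsed, not at the auto-wiring error itself.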
Hi Andrew,
Do you know if this functionality is documented anywhere? I've had a quick
look through the documentation and I haven't seen it.
Edward
On Tue, Jul 2, 2019 at 5:33 PM James McMahon wrote:
> Excellent - thanks very much Andrew. This is my first crack at working
> with a clustered