Hi
We have unfortunately had an incident where NiFi, over a weekend, filled
up the disk with logs because a process was failing and producing hundreds of
error messages per second.
We have changed the rollingPolicy to use daily rollover with a maxHistory
of 30 and maxFileSize at 100MB.
When the d
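For reference, a combined daily-plus-size rollover in NiFi's conf/logback.xml looks roughly like the sketch below (the appender name and file paths are illustrative, not taken from the original message; SizeAndTimeBasedRollingPolicy requires logback 1.1.7 or later):

```xml
<appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/nifi-app.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <!-- %d gives the daily rollover, %i the within-day size-based index -->
        <fileNamePattern>logs/nifi-app_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
        <maxFileSize>100MB</maxFileSize>
        <maxHistory>30</maxHistory>
        <!-- optional hard cap across all archived files, useful against runaway errors -->
        <totalSizeCap>5GB</totalSizeCap>
    </rollingPolicy>
    <encoder>
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>
```

Note that maxHistory and maxFileSize alone do not bound total disk usage within a day of heavy erroring; totalSizeCap is the setting that does.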
Hi all
I have an issue using QueryRecord to query CSV files. Currently I'm
running NiFi version 1.12.1, but I have also tested this in version 1.13.0.
If my incoming CSV file has only a header line and no data, it fails.
My query statement looks like this: SELECT colA FROM FLOWFILE WHERE colC
= '
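As a minimal plain-Python sketch (with hypothetical column names, not the poster's actual schema) of what one would expect for a header-only CSV: zero records selected, rather than an error:

```python
import csv
import io

# A CSV with a header line but no data rows, mirroring the failing input.
header_only = "colA,colB,colC\n"

# DictReader consumes the header and yields one dict per data row.
rows = list(csv.DictReader(io.StringIO(header_only)))

# With no data rows, the expected result is simply an empty record set.
print(rows)  # []
```

The point of the sketch is that an empty record set is a valid query result, which is what QueryRecord would ideally emit for such a file.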
Geoffrey,
There's a really good blog post by the man himself [1] :) I highly recommend
the official blog in general; lots of great posts, and many are
record-oriented [2].
Regards,
Matt
[1] https://blogs.apache.org/nifi/entry/record-oriented-data-with-nifi
[2] https://blogs.apache.org/nifi/
On Wed, Fe
Thank you for the fast response Mark.
Hrm, record processing does sound useful.
Are there any good blogs / documentation on this? I’d really like to learn
more. I’ve been doing mostly text processing, as you’ve observed.
My use case is something like this
1) Use server API to get list o
Geoffrey,
At a high level, if you’re splitting multiple times and then trying to
re-assemble everything, then yes I think your thought process is correct. But
you’ve no doubt seen how complex and cumbersome this approach can be. It can
also result in extremely poor performance. So much so that
I'm having some trouble with multiple splits/merges. Here's the idea:
Big data
|
Split 1
|
Save fragment.* attributes into split1.fragment.*
|
Split 2
|
Save fragment.* attributes
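The flow above can be sketched in plain Python (not NiFi itself) to show why the fragment.* attributes from the first split must be renamed before the second split: each split processor stamps its own fragment.* values, overwriting the previous ones.

```python
def split(attributes, index, count):
    """Simulate a NiFi split processor stamping fragment.* onto a child flowfile."""
    child = dict(attributes)
    child["fragment.index"] = index
    child["fragment.count"] = count
    return child

original = {"filename": "big-data.csv"}

# Split 1 stamps fragment.* on each child flowfile.
child1 = split(original, 0, 10)

# Preserve the outer split's values under a new prefix (in NiFi this would be
# an UpdateAttribute step) before splitting again, or they will be lost.
child1["split1.fragment.index"] = child1["fragment.index"]
child1["split1.fragment.count"] = child1["fragment.count"]

# Split 2 overwrites fragment.*, but the renamed copies survive.
child2 = split(child1, 3, 7)

print(child2["fragment.index"])         # 3  (inner split's value)
print(child2["split1.fragment.index"])  # 0  (outer value, kept under the new prefix)
```

This is exactly the bookkeeping that makes nested split/merge pipelines cumbersome, and part of why record-oriented processors are usually recommended instead.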
Khaja,
There are two options in NiFi for incremental database fetch:
QueryDatabaseTable and GenerateTableFetch. The former is more often
used on a standalone NiFi instance for single tables (as it does not
accept an incoming connection). It generates the SQL needed to do
incremental fetching, then
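The incremental-fetch idea behind QueryDatabaseTable can be sketched in plain Python (table and column names here are assumptions for illustration, not from the original message): remember the maximum value of a tracked column seen so far, and on the next run only ask for rows beyond it.

```python
def build_incremental_query(table, max_value_column, state):
    """Return SQL that fetches only rows newer than the stored maximum."""
    last_max = state.get(max_value_column)
    query = f"SELECT * FROM {table}"
    if last_max is not None:
        query += f" WHERE {max_value_column} > {last_max}"
    return query

def update_state(state, max_value_column, fetched_values):
    """After a fetch, record the new maximum for the next run."""
    if fetched_values:
        state[max_value_column] = max(fetched_values)

state = {}  # NiFi keeps this in processor state (locally or in ZooKeeper)

q1 = build_incremental_query("orders", "order_id", state)
print(q1)  # SELECT * FROM orders  (first run fetches everything)

update_state(state, "order_id", [1, 2, 3])

q2 = build_incremental_query("orders", "order_id", state)
print(q2)  # SELECT * FROM orders WHERE order_id > 3  (next run is incremental)
```

QueryDatabaseTable both generates and executes such queries; GenerateTableFetch only generates them, so the SQL can be distributed across a cluster and executed downstream.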
Hi,
I have a use case where I need to do incremental fetches on Oracle
tables. Is there an easy way to do this? I saw some posts about
QueryDatabaseTable; I want to check if there is a more efficient way to do this.
Thanks,
Khaja
Hello.
I'm not sure, but I have already done something like this.
Unlike in Java, I defined the variable with the keyword def:
> flowfile = session.get()
> if(!flowfile) return
> def filePath = flowfile.getAttribute('file_path')
> def file = new File(filePath)
> if(file.exists()){
> session.transfer(flowfile, R
Hi Mike,
the attribute 'file_path' is not pointing to a folder only; it holds the full
/path/to/filename, so it is like /opt/data/folder/filename.txt. The attribute
value is OK, I double-checked.
Tom
-Original Message-
From: Mike Thomsen
Sent: 24 February 2021 18:00
To: users@nifi.apache.org
Sub
If file_path is pointing to a folder as you said, it's going to check
for the folder's existence. The fact that it's failing to return true
there suggests that something is wrong with the path in the file_path
attribute.
On Wed, Feb 24, 2021 at 11:47 AM Tomislav Novosel
wrote:
>
> Hi guys,
Thanks Mark for the detailed explanation.
Along with NiFi I am also upgrading ZooKeeper from version 3.4 to version 3.5.
So my NiFi 1.8 runs on ZooKeeper 3.4 and the new NiFi 1.12.1 runs on
ZooKeeper 3.5, and both ZooKeepers are running on different Linux servers.
That is why I need to tr
Hi guys,
I want to check if file exists with this groovy script:
flowfile = session.get()
if(!flowfile) return
file_path = flowfile.getAttribute('file_path')
File file = new File(file_path)
if(file.exists()){
    session.transfer(flowfile, REL_FAILURE)
}
else{
    session.transfer(flowfile, REL_SUCCESS
Sanjeet,
For this use case, you should not be using the zk-migrator.sh tool. That tool
is not intended to be used when upgrading NiFi. Rather, it is to be used
if you're migrating NiFi away from one ZooKeeper and onto another. For example,
if you have a ZooKeeper instance that is shared b
Hi,
My use case is to upgrade the NiFi cluster from 1.8 to 1.12.1 with state (we
are using an external ZooKeeper, a 3-node cluster).
So the approach I followed:
-> created a 3-node Linux setup and installed NiFi 1.12.1 & ZooKeeper 3.5.8
-> brought over the flow.xml.gz, users.xml and authorizations.xml