Sorry, I wasn't saying that
'nifi.content.repository.archive.max.usage.percentage' was new; I just hadn't
managed to get a NiFi instance stuck this way, and even the documentation says
that if the archive is empty and the content repo needs more room, it will
disable the archive. I'm having trouble
Shawn,
There are a couple of properties at play. The
"nifi.content.repository.archive.max.usage.percentage" property behaves as you
have described. But there's also a second property:
nifi.content.repository.archive.backpressure.percentage
This controls at what point the Content Repository
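For reference, both properties live in nifi.properties. The values below are what I recall as the documented defaults in recent NiFi versions, not recommendations; check your own install's nifi.properties:

```properties
# Content repository archiving (nifi.properties)
# Archived content is purged once content-repo disk usage exceeds this value
nifi.content.repository.archive.max.usage.percentage=50%
# Writes to the content repo are blocked (backpressure) once usage reaches this value
nifi.content.repository.archive.backpressure.percentage=52%
```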
Apologies for the very late reply on this thread, but I finally had time to see
if I was able to reproduce the problem in our LAB environment.
The LAB environment is a 3-node cluster running NiFi v1.13.2 on Ubuntu 18.04
with Java version:
ii openjdk-11-jre-headless:amd64
Thank you, Carlos.
I used Pentaho PDI (Pentaho Data Integration - Spoon).
---
In Pentaho there is a step: Input / CSV file input.
- this step reads a CSV file, specifies the column names and the data type of
each column; that is, it obtains the fields from the CSV file.
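Roughly what that step does, sketched in standalone Python; the column names, delimiter, and type mapping here are made-up examples, not anything from Pentaho itself:

```python
import csv
import io

# Hypothetical CSV content standing in for the input file
data = "id;name;price\n1;widget;9.99\n2;gadget;12.50\n"

# Explicit field-to-type mapping, like the types you declare in the CSV file input step
field_types = {"id": int, "name": str, "price": float}

rows = []
for record in csv.DictReader(io.StringIO(data), delimiter=";"):
    # Apply the declared type to each field value
    rows.append({col: field_types[col](val) for col, val in record.items()})

print(rows[0])  # {'id': 1, 'name': 'widget', 'price': 9.99}
```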
---
Then
Hi Cristiano,
Take a look at the NiFi processor PutDatabaseRecord, which can read from a CSV
source (and others: JSON, Avro, etc.) and put the records into an existing
table in the database.
If you need to create a table based on the schema of the CSV, you probably need
a custom script; take a look at this post:
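As a rough illustration of such a custom script (not the one from the post, which isn't shown here), something like this could derive a CREATE TABLE statement from a CSV sample. The type-inference rules and SQL type names are simplistic assumptions:

```python
import csv
import io

def infer_sql_type(values):
    """Very naive type inference over sample values (an assumption, not a standard)."""
    try:
        for v in values:
            int(v)
        return "INTEGER"
    except ValueError:
        pass
    try:
        for v in values:
            float(v)
        return "DOUBLE PRECISION"
    except ValueError:
        return "VARCHAR(255)"

def create_table_ddl(table_name, csv_text, delimiter=","):
    # First row is the header; remaining rows are the sample used for inference
    rows = list(csv.reader(io.StringIO(csv_text), delimiter=delimiter))
    header, sample = rows[0], rows[1:]
    cols = []
    for i, name in enumerate(header):
        sql_type = infer_sql_type([r[i] for r in sample])
        cols.append(f"{name} {sql_type}")
    return f"CREATE TABLE {table_name} ({', '.join(cols)});"

print(create_table_ddl("products", "id,name,price\n1,widget,9.99\n2,gadget,12.50\n"))
```

A real script would also need to quote identifiers and handle NULLs, dates, and dialect differences, but the shape is the same: read the header, sample the data, emit DDL.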