Fantastic, thanks Mark
On Wed, 12 Sep 2018 at 06:15, Mark Payne wrote:
Phil,
For the content repository, you can configure the directory by changing the value of the "nifi.content.repository.directory.default" property in nifi.properties. The suffix here, "default", is the name of this "container". You can have multiple containers by adding extra properties. So, fo
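As a sketch of the multi-container approach described above (the mount points and the container names "disk2" and "disk3" are illustrative; the suffix after "directory." is an arbitrary label you choose):

```properties
# Default container on the first disk
nifi.content.repository.directory.default=/mnt/disk1/content_repository
# Additional containers, each on its own physical disk
nifi.content.repository.directory.disk2=/mnt/disk2/content_repository
nifi.content.repository.directory.disk3=/mnt/disk3/content_repository
```

NiFi spreads content claims across all configured containers, so putting each one on a separate physical disk spreads the I/O load.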
Thanks Mark, this is great advice.
Disk access is certainly an issue with the current setup. I will shoot for NVMe disks in the build. How does NiFi get configured to span its repositories across multiple physical disks?
Thanks,
Phil
On Wed, 12 Sep 2018 at 01:32, Mark Payne wrote:
Phil,
As Sivaprasanna mentioned, your bottleneck will certainly depend on your flow.
There's nothing inherent about NiFi or the JVM, AFAIK, that would limit you. I've seen NiFi run on VMs containing 4-8 cores, and I've seen it run on bare metal on servers containing 96+ cores. Most often, I see pe
We need your help to make the Apache Washington DC Roadshow on Dec 4th a
success.
What do we need most? Speakers!
We're bringing a unique DC flavor to this event by mixing Open Source
Software with talks about Apache projects as well as OSS CyberSecurity,
OSS in Government and OSS Career
I think the existing processors such as HandleHttpRequest can be used.
The body of the POST will become the flow file content, and the
headers will become flow file attributes.
After HandleHttpRequest you can use RouteOnAttribute to make a
decision based on one of the headers (flow file attributes
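As a sketch of that routing step, assuming HandleHttpRequest's convention of copying request headers into attributes named "http.headers.&lt;header name&gt;" (the routing property name "is-image" is illustrative), a RouteOnAttribute property could use the Expression Language like this:

```properties
# RouteOnAttribute dynamic property: route flow files whose Content-Type header is an image
is-image=${http.headers.Content-Type:startsWith('image/')}
```

Flow files matching the expression are routed to the "is-image" relationship; the rest go to "unmatched".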
Dear Experts,
I need your suggestions on a design for a requirement in my current project. I would like to listen for an HTTP POST request through NiFi which will have an image file attached.
I need to first extract the HTTP header information (to understand the type of request), then extract the image file and stor
Hi,
Logging level is set in logback.xml. To change the logging level for a given
logger/package/class, you must modify that file. See [1] for more details.
In your code example, you are creating the logger correctly using the
LoggerFactory. All NiFi Frameworks and extensions use the slf4j API,
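For example, a logger level can be raised for a specific package by adding an entry to conf/logback.xml (the package name "org.apache.nifi.amqp" here is illustrative; use the package of your own class):

```xml
<!-- Set DEBUG level for one package; the name attribute matches the logger/package name -->
<logger name="org.apache.nifi.amqp" level="DEBUG"/>
```

Since the logger is created via LoggerFactory.getLogger(SomeClass.class), the logger name is the fully qualified class name, so package-level entries like this apply to it.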
Good morning. I am confused about logging levels from within Java code that defines a NAR.
I have a java class file AMQPWorker.java, within which I establish a logger
used throughout my nar. This is what I inherited in the class:
abstract class AMQPWorker implements AutoCloseable {
private f
Hello there,
I want to load many huge CSV files into my Oracle database via NiFi.
I tested the PutDatabaseRecord processor, but it creates an insert statement for
every record,
and it needs too much RAM and too much time to process because the
number of records is very large.
But if I can execute sqlldr co
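For reference, the sqlldr (SQL*Loader) path the poster mentions is driven by a control file; a minimal sketch (the table name, column names, and file name are hypothetical):

```
LOAD DATA
INFILE 'big_file.csv'
APPEND INTO TABLE my_table
FIELDS TERMINATED BY ','
(col1, col2, col3)
```

Invoked as, e.g., `sqlldr userid=user/password@db control=load.ctl`, SQL*Loader bulk-loads the file without generating a per-row INSERT statement.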