Re: Nifi registry Kerberos Auth with Docker

2019-02-13 Thread Tomislav Novosel
Hi Kevin,

Thank you for your suggestions. I succeeded in getting everything working now.
As you described, everything is now exactly as it should be in the files you
mentioned.

One strange thing: on the first startup of the container, I can log into the UI
without problems, but I cannot add new users and policies. After I
refreshed the UI in the browser, I was able to do that. So it works only after refreshing. ??

And also, I'm not able to modify my initial admin's user privileges, I
mean for myself, but for a newly added user I can.

I read on some forums that it can be a slow sync between NiFi and AD. I'm on
my company's domain and there are a couple of hundred users.

BR,
Tom

On Wed, 13 Feb 2019, 15:29 Kevin Doran wrote:

> Hi Tom,
>
> How are you configuring the various config files? Through the docker
> container's environment variables, or through modifying those files
> directly? If modifying those files, are you injecting them through a volume
> or something like that? Trying to determine if there is something else at
> play here overwriting your settings on startup...
>
> It sounds like you are able to configure authentication/login
> successfully, and are just running into a snag on the authorization /
> initial admin side of things.
>
> Try this:
>
> 1. In authorizers.xml, set the "Initial User Identity 1" and "Initial
> Admin Identity" properties to exactly match the user identity recognized by
> NiFi (the one you see in the upper-right corner of the UI after logging
> in). Make sure whitespace and capitalization all agree.
>
> 2. Delete the users.xml and authorizations.xml files and restart NiFi Registry.
>
> If all goes successfully, your users.xml file should be regenerated to
> hold a user with an identity matching "Initial User Identity 1", and
> authorizations.xml should be regenerated to hold the policies for the
> "Initial Admin Identity".
>
> If you get that working, you can improve things a bit by configuring the
> LdapUserGroupProvider to sync users and groups from LDAP, letting you set
> policies in the UI without having to manually create users that match the
> LDAP directory users.
>
> Hope this helps,
> Kevin
>
>
> On February 13, 2019 at 03:56:52, Tomislav Novosel (to.novo...@gmail.com)
> wrote:
> > Also, FYI.
> >
> > If I set for INITIAL_ADMIN_IDENTITY my user's full DN,
> cn=...,ou=...,dc=...
> > I can also log into the UI, but there is no properties button in the
> upper right of
> > the UI.
> >
> > [image: 1.PNG]
> >
> > If I set only USERNAME to be u21g46, I can see the properties button, but I
> > can't add new users.
> >
> > BR,
> > Tom
> >
> > On Fri, 8 Feb 2019 at 16:03, Bryan Bende wrote:
> >
> > > Thinking about it more, I guess if you are not trying to do spnego
> > > then that message from the logs is not really an error. The registry
> > > UI always tries the spnego end-point first and if it returns the
> > > conflict response (as the log says) then you get sent to the login
> > > page.
> > >
> > > Maybe try turning on debug logging by editing logback.xml: find <logger
> name="org.apache.nifi.registry" level="INFO"/> and change the level to DEBUG.
> > >
> > > On Fri, Feb 8, 2019 at 9:51 AM Tomislav Novosel
> > > wrote:
> > > >
> > > > Hi Bryan,
> > > >
> > > > I don't have these properties populated in the Nifi registry instance
> > > > outside Docker (as a service on a linux server), and everything works.
> > > >
> > > > What are these properties for?
> > > >
> > > > Regards,
> > > > Tom
> > > >
> > > >
> > > >
> > > > On Fri, 8 Feb 2019 at 15:25, Bryan Bende wrote:
> > > >>
> > > >> The message about "Kerberos service ticket login not supported by
> this
> > > >> NiFi Registry" means that one of the following properties is not
> > > >> populated:
> > > >>
> > > >> nifi.registry.kerberos.spnego.principal=
> > > >> nifi.registry.kerberos.spnego.keytab.location=
> > > >>
> > > >> On Fri, Feb 8, 2019 at 8:20 AM Tomislav Novosel
> > > wrote:
> > > >> >
> > > >> > Hi Daniel,
> > > >> >
> > > >> > Ok, I see. Thanks for the answer.
> > > >> >
> > > >> > I switched to the official Nifi registry image. I succeeded to spin up
> > > a registry in a docker container and to
> > > >> > set up the Kerberos provider in identity-providers.xml. Also I
> configured
> > > authorizers.xml as per the official Nifi documentation.
> > > >> >
> > > >> > I already have the same setup with Kerberos, but not in a Docker
> > > container. And everything works like a charm.
> > > >> >
> > > >> > When I enter credentials, login does not pass. This is app log:
> > > >> >
> > > >> > 2019-02-08 12:52:30,568 INFO [NiFi Registry Web Server-14]
> > > o.a.n.r.w.m.IllegalStateExceptionMapper
> java.lang.IllegalStateException:
> > > Kerberos service ticket login not supported by this NiFi Registry.
> > > Returning Conflict response.
> > > >> > 2019-02-08 12:52:30,644 INFO [NiFi Registry Web Server-13]
> > > o.a.n.r.w.s.NiFiRegistrySecurityConfig Client could not be
> authenticated
> > > due to:
> > >
> org.springframework.security.authentication.AuthenticationCredentialsNotFoundException:
>
> > > An 

Re: running multiple commands from a single ExecuteStreamCommand processor

2019-02-13 Thread Mark Payne
Vijay,

No worries, this thread is fine. The processor will stream the contents of the 
FlowFIle to the Standard Input (StdIn) of the process
that is generated. So it will go to the bash script. The bash script can do 
whatever it needs to do, pipe to another command, etc.
Whatever is written to StdOut becomes the content of the FlowFile. So it would 
be up to you to pipe the output of the first command
to the input of the second. Does that make sense?
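
For illustration, a minimal wrapper script might look like this (the commands
themselves are placeholders; the only contract is content in on StdIn, content
out on StdOut):

#!/usr/bin/env bash
# The FlowFile content arrives on standard input; whatever the last
# command in the pipeline writes to standard output becomes the new
# FlowFile content.
grep -v '^#' | sort | uniq -c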

Thanks
-Mark



> On Feb 13, 2019, at 3:26 PM, Vijay Chhipa  wrote:
> 
> Mark,
> 
> Thanks for your quick response, 
> When calling bash script that has multiple commands, is there a single flow 
> file generated after all commands are executed (accumulating output from each 
> command) or multiple flow files generated per command line in the bash 
> script. 
> 
> Sorry for tagging along another question on top of this, I can ask it as a 
> separate thread if it makes more sense. 
> 
> Thanks
> 
> 
>> On Feb 13, 2019, at 12:50 PM, Mark Payne  wrote:
>> 
>> Vijay,
>> 
>> This would be treated as arguments to a single command.
>> 
>> One option would be to create a simple bash script that executes the desired 
>> commands and invoke
>> that from the processor. Or, of course, you can chain together multiple 
>> processors.
>> 
>> Thanks
>> -Mark
>> 
>> 
>>> On Feb 13, 2019, at 1:48 PM, Vijay Chhipa  wrote:
>>> 
>>> Hi, 
>>> 
>>> I have a ExecuteStreamCommand  processor running a single command, 
>>> (executing a  -jar  ) and it runs fine, 
>>> 
>>> I need to run the same command but with different arguments. 
>>> 
>>> My question is: Can I put multiple lines as command arguments and still 
>>> have a single instance of the ExecuteStreamCommand?
>>> 
>>> Would those be treated as arguments to a single command, or each line of 
>>> arguments would be treated as separate command?
>>> 
>>> 
>>> Thanks 
>>> 
>>> Vijay
>>> 
>>> 
>>> 
>> 
> 



Re: Failed to read TOC File

2019-02-13 Thread Chad Woodhead
Thanks Mark. I really appreciate the help. I'll take a look at these in my
different clusters.

-Chad

On Wed, Feb 13, 2019 at 3:09 PM Mark Payne  wrote:

> Depending on the size of the nodes, you may want to also increase the
> number of indexing threads
> ("nifi.provenance.repository.index.threads"). You may want to also
> increase the amount of time to store provenance
> from 24 hours to something like 36 months... for most people just limiting
> based on size is enough. The time is really
> only useful if you are worried about it from a compliance standpoint, such
> that you are only allowed to store that sort
> of data for a limited amount of time. (Maybe you can only store PII for 24
> hours, for instance, and PII is included in your
> attributes).
>
> The value for "nifi.bored.yield.duration" can probably be scaled back from
> "10 millis" to something like "1 millis". This comes
> into play when a processor's incoming queue is empty or outgoing queue is
> full, for instance. It will wait approximately 10 milliseconds
> before checking if the issue has resolved. This results in lower CPU
> usage, but it also leads to "artificial latency." For a production
> server, "1 millis" is probably just fine.
>
> You may also want to consider changing the values of
> "nifi.content.repository.archive.max.retention.period" and
> "nifi.content.repository.archive.max.usage.percentage"
> When you store the provenance data, it's often helpful to be able to view
> the data as it was at that point in the flow. These properties control how
> long you keep around this data after you've finished processing it, so
> that you can go back and look at it for debugging purposes, etc.
>
> These are probably the most critical things, in terms of performance and
> utilization.
>
> Thanks
> -Mark
>
>
> On Feb 13, 2019, at 2:31 PM, Chad Woodhead  wrote:
>
> Mark,
>
> That must be it! I have "nifi.provenance.repository.max.storage.size" = 1
> GB. Just bumped that to 200 GB like you suggested and I can see provenance
> again. I've always wondered why my provenance partitions never got very
> large!
>
> While we're on the subject, are there other settings like this that based
> on their default values I could be under-utilizing the resources (storage,
> mem, CPU, etc.) I have on my servers dedicated to NiFi?
>
> -Chad
>
> On Wed, Feb 13, 2019 at 1:48 PM Mark Payne  wrote:
>
>> Hey Chad,
>>
>> What do you have for the value of the
>> "nifi.provenance.repository.max.storage.size" property?
>> We will often see this if the value is very small (the default is 1 GB,
>> which is very small) and the volume
>> of data is reasonably high.
>>
>> The way that the repo works, it writes to one file for a while, until
>> that file reaches 100 MB or up to 30 seconds,
>> by default (configured via "nifi.provenance.repository.rollover.size" and
>> "nifi.provenance.repository.rollover.time").
>> At that point, it rolls over to writing to a new file and adds the
>> now-completed file to a queue. A background thread
>> is then responsible for compressing that completed file.
>>
>> What can happen, though, if the max storage space is small is that the
>> data can actually be aged off from the repository
>> before that background task attempts to compress it. That can result in
>> either a FileNotFoundException or an EOFException
>> when trying to read the TOC file (depending on the timing of when the
>> age-off happens). It could potentially occur on the
>> .prov file, in addition to, or instead of the .toc file.
>>
>> So generally, the solution is to increase the max storage size. It looks
>> like you have 130 GB on each partition and 2 partitions per
>> node, so 260 GB total per node that you can use for provenance. So I
>> would set the max storage size to something like "200 GB".
>> Since it is a soft limit and it may use more disk space than that
>> temporarily before shrinking back down, you'll want to give it a little
>> bit of wiggle room.
>>
>> Thanks
>> -Mark
>>
>>
>> On Feb 13, 2019, at 1:31 PM, Chad Woodhead 
>> wrote:
>>
>> Hey Joe,
>>
>> Yes nifi.provenance.repository.implementation=
>> org.apache.nifi.provenance.WriteAheadProvenanceRepository
>>
>> Disk space is fine as well. I have dedicated mounts for provenance (as
>> well all the repos have their own dedicated mounts):
>>
>> nifi.provenance.repository.dir.default=
>> /data/disk5/nifi/provenance_repository
>> nifi.provenance.repository.directory.provenance2=
>> /data/disk6/nifi/provenance_repository
>>
>> Both of these mounts have plenty of space and are only 1% full and have
>> never become close to being filled up.
>>
>> 
>>
>> -Chad
>>
>> On Wed, Feb 13, 2019 at 1:06 PM Joe Witt  wrote:
>>
>>> Chad,
>>>
>>> In your conf/nifi.properties please see what the implementation is for
>>> your provenance repository. This is specified as:
>>>
>>>
>>> nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
>>>
>>> Is that what you have?
>>>
>>> The above 

Re: running multiple commands from a single ExecuteStreamCommand processor

2019-02-13 Thread Vijay Chhipa
Mark,

Thanks for your quick response, 
When calling bash script that has multiple commands, is there a single flow 
file generated after all commands are executed (accumulating output from each 
command) or multiple flow files generated per command line in the bash script. 

Sorry for tagging along another question on top of this, I can ask it as a 
separate thread if it makes more sense. 

Thanks


> On Feb 13, 2019, at 12:50 PM, Mark Payne  wrote:
> 
> Vijay,
> 
> This would be treated as arguments to a single command.
> 
> One option would be to create a simple bash script that executes the desired 
> commands and invoke
> that from the processor. Or, of course, you can chain together multiple 
> processors.
> 
> Thanks
> -Mark
> 
> 
>> On Feb 13, 2019, at 1:48 PM, Vijay Chhipa  wrote:
>> 
>> Hi, 
>> 
>> I have a ExecuteStreamCommand  processor running a single command, 
>> (executing a  -jar  ) and it runs fine, 
>> 
>> I need to run the same command but with different arguments. 
>> 
>> My question is: Can I put multiple lines as command arguments and still have 
>> a single instance of the ExecuteStreamCommand?
>> 
>> Would those be treated as arguments to a single command, or each line of 
>> arguments would be treated as separate command?
>> 
>> 
>> Thanks 
>> 
>> Vijay
>> 
>> 
>> 
> 





Re: Failed to read TOC File

2019-02-13 Thread Mark Payne
Depending on the size of the nodes, you may want to also increase the number of 
indexing threads
("nifi.provenance.repository.index.threads"). You may want to also increase the 
amount of time to store provenance
from 24 hours to something like 36 months... for most people just limiting 
based on size is enough. The time is really
only useful if you are worried about it from a compliance standpoint, such that 
you are only allowed to store that sort
of data for a limited amount of time. (Maybe you can only store PII for 24 
hours, for instance, and PII is included in your
attributes).

The value for "nifi.bored.yield.duration" can probably be scaled back from "10 
millis" to something like "1 millis". This comes
into play when a processor's incoming queue is empty or outgoing queue is full, 
for instance. It will wait approximately 10 milliseconds
before checking if the issue has resolved. This results in lower CPU usage, but 
it also leads to "artificial latency." For a production
server, "1 millis" is probably just fine.

You may also want to consider changing the values of 
"nifi.content.repository.archive.max.retention.period" and 
"nifi.content.repository.archive.max.usage.percentage"
When you store the provenance data, it's often helpful to be able to view the 
data as it was at that point in the flow. These properties control how
long you keep around this data after you've finished processing it, so that you 
can go back and look at it for debugging purposes, etc.

These are probably the most critical things, in terms of performance and 
utilization.
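
As a rough sketch, the relevant entries in conf/nifi.properties might look
like this (the values are illustrative assumptions, not recommendations for
every cluster):

nifi.provenance.repository.index.threads=4
nifi.bored.yield.duration=1 millis
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%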

Thanks
-Mark


On Feb 13, 2019, at 2:31 PM, Chad Woodhead <chadwoodh...@gmail.com> wrote:

Mark,

That must be it! I have "nifi.provenance.repository.max.storage.size" = 1 GB. 
Just bumped that to 200 GB like you suggested and I can see provenance again. 
I've always wondered why my provenance partitions never got very large!

While we're on the subject, are there other settings like this that based on 
their default values I could be under-utilizing the resources (storage, mem, 
CPU, etc.) I have on my servers dedicated to NiFi?

-Chad

On Wed, Feb 13, 2019 at 1:48 PM Mark Payne <marka...@hotmail.com> wrote:
Hey Chad,

What do you have for the value of the 
"nifi.provenance.repository.max.storage.size" property?
We will often see this if the value is very small (the default is 1 GB, which 
is very small) and the volume
of data is reasonably high.

The way that the repo works, it writes to one file for a while, until that file 
reaches 100 MB or up to 30 seconds,
by default (configured via "nifi.provenance.repository.rollover.size" and 
"nifi.provenance.repository.rollover.time").
At that point, it rolls over to writing to a new file and adds the 
now-completed file to a queue. A background thread
is then responsible for compressing that completed file.

What can happen, though, if the max storage space is small is that the data can 
actually be aged off from the repository
before that background task attempts to compress it. That can result in either 
a FileNotFoundException or an EOFException
when trying to read the TOC file (depending on the timing of when the age-off 
happens). It could potentially occur on the
.prov file, in addition to, or instead of the .toc file.

So generally, the solution is to increase the max storage size. It looks like 
you have 130 GB on each partition and 2 partitions per
node, so 260 GB total per node that you can use for provenance. So I would set 
the max storage size to something like "200 GB".
Since it is a soft limit and it may use more disk space than that temporarily 
before shrinking back down, you'll want to give it a little
bit of wiggle room.

Thanks
-Mark


On Feb 13, 2019, at 1:31 PM, Chad Woodhead <chadwoodh...@gmail.com> wrote:

Hey Joe,

Yes 
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository

Disk space is fine as well. I have dedicated mounts for provenance (as well all 
the repos have their own dedicated mounts):

nifi.provenance.repository.dir.default=/data/disk5/nifi/provenance_repository
nifi.provenance.repository.directory.provenance2=/data/disk6/nifi/provenance_repository

Both of these mounts have plenty of space and are only 1% full and have never 
become close to being filled up.



-Chad

On Wed, Feb 13, 2019 at 1:06 PM Joe Witt <joe.w...@gmail.com> wrote:
Chad,

In your conf/nifi.properties please see what the implementation is for your 
provenance repository. This is specified as:

nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository

Is that what you have?

The above error I believe could occur if the location where provenance is being 
written runs out of disk space.  It is important to ensure that prov is sized 
appropriately and on its own partition where this won't happen.  This is also true for the 
flow file repo.  The content repo is more resilient to this by design but still 

Re: Failed to read TOC File

2019-02-13 Thread Chad Woodhead
Mark,

That must be it! I have "nifi.provenance.repository.max.storage.size" = 1
GB. Just bumped that to 200 GB like you suggested and I can see provenance
again. I've always wondered why my provenance partitions never got very
large!

While we're on the subject, are there other settings like this that based
on their default values I could be under-utilizing the resources (storage,
mem, CPU, etc.) I have on my servers dedicated to NiFi?

-Chad

On Wed, Feb 13, 2019 at 1:48 PM Mark Payne  wrote:

> Hey Chad,
>
> What do you have for the value of the
> "nifi.provenance.repository.max.storage.size" property?
> We will often see this if the value is very small (the default is 1 GB,
> which is very small) and the volume
> of data is reasonably high.
>
> The way that the repo works, it writes to one file for a while, until that
> file reaches 100 MB or up to 30 seconds,
> by default (configured via "nifi.provenance.repository.rollover.size" and
> "nifi.provenance.repository.rollover.time").
> At that point, it rolls over to writing to a new file and adds the
> now-completed file to a queue. A background thread
> is then responsible for compressing that completed file.
>
> What can happen, though, if the max storage space is small is that the
> data can actually be aged off from the repository
> before that background task attempts to compress it. That can result in
> either a FileNotFoundException or an EOFException
> when trying to read the TOC file (depending on the timing of when the
> age-off happens). It could potentially occur on the
> .prov file, in addition to, or instead of the .toc file.
>
> So generally, the solution is to increase the max storage size. It looks
> like you have 130 GB on each partition and 2 partitions per
> node, so 260 GB total per node that you can use for provenance. So I would
> set the max storage size to something like "200 GB".
> Since it is a soft limit and it may use more disk space than that
> temporarily before shrinking back down, you'll want to give it a little
> bit of wiggle room.
>
> Thanks
> -Mark
>
>
> On Feb 13, 2019, at 1:31 PM, Chad Woodhead  wrote:
>
> Hey Joe,
>
> Yes nifi.provenance.repository.implementation=
> org.apache.nifi.provenance.WriteAheadProvenanceRepository
>
> Disk space is fine as well. I have dedicated mounts for provenance (as
> well all the repos have their own dedicated mounts):
>
> nifi.provenance.repository.dir.default=
> /data/disk5/nifi/provenance_repository
> nifi.provenance.repository.directory.provenance2=
> /data/disk6/nifi/provenance_repository
>
> Both of these mounts have plenty of space and are only 1% full and have
> never become close to being filled up.
>
> 
>
> -Chad
>
> On Wed, Feb 13, 2019 at 1:06 PM Joe Witt  wrote:
>
>> Chad,
>>
>> In your conf/nifi.properties please see what the implementation is for
>> your provenance repository. This is specified as:
>>
>>
>> nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
>>
>> Is that what you have?
>>
>> The above error I believe could occur if the location where provenance is
>> being written runs out of disk space.  It is important to ensure that prov
>> is sized appropriately and on its own partition where this won't happen.  This is also
>> true for the flow file repo.  The content repo is more resilient to this by
>> design but still you want all three repo areas on their own partitions as
>> per best practices.
>>
>> Thanks
>> Joe
>>
>> On Wed, Feb 13, 2019 at 1:03 PM Chad Woodhead 
>> wrote:
>>
>>> I use the org.apache.nifi.provenance.WriteAheadProvenanceRepository and
>>> I am seeing the following error in my logs a lot and I can't view any
>>> provenance data in the UI:
>>>
>>> 2019-02-13 12:57:44,637 ERROR [Compress Provenance Logs-1-thread-1]
>>> o.a.n.p.s.EventFileCompressor Failed to read TOC File
>>> /data/disk5/nifi/provenance_repository/toc/158994812.toc
>>> java.io.EOFException: null
>>> at
>>> org.apache.nifi.provenance.toc.StandardTocReader.<init>(StandardTocReader.java:48)
>>> at
>>> org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:93)
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>>
>>> Any ideas on what could be going on?
>>>
>>> -Chad
>>>
>>
>


Re: running multiple commands from a single ExecuteStreamCommand processor

2019-02-13 Thread Mark Payne
Vijay,

This would be treated as arguments to a single command.

One option would be to create a simple bash script that executes the desired 
commands and invoke
that from the processor. Or, of course, you can chain together multiple 
processors.
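
For illustration, such a wrapper script could be as simple as this (the jar
path and arguments are placeholders):

#!/usr/bin/env bash
# ExecuteStreamCommand invokes this one script as a single command;
# each line below runs the same jar with a different set of arguments.
java -jar /path/to/app.jar --first-set-of-args
java -jar /path/to/app.jar --second-set-of-args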

Thanks
-Mark


> On Feb 13, 2019, at 1:48 PM, Vijay Chhipa  wrote:
> 
> Hi, 
> 
> I have a ExecuteStreamCommand  processor running a single command, (executing 
> a  -jar  ) and it runs fine, 
> 
> I need to run the same command but with different arguments. 
> 
> My question is: Can I put multiple lines as command arguments and still have 
> a single instance of the ExecuteStreamCommand?
> 
> Would those be treated as arguments to a single command, or each line of 
> arguments would be treated as separate command?
> 
> 
> Thanks 
> 
> Vijay
> 
> 
> 



Re: Failed to read TOC File

2019-02-13 Thread Mark Payne
Hey Chad,

What do you have for the value of the 
"nifi.provenance.repository.max.storage.size" property?
We will often see this if the value is very small (the default is 1 GB, which 
is very small) and the volume
of data is reasonably high.

The way that the repo works, it writes to one file for a while, until that file 
reaches 100 MB or up to 30 seconds,
by default (configured via "nifi.provenance.repository.rollover.size" and 
"nifi.provenance.repository.rollover.time").
At that point, it rolls over to writing to a new file and adds the 
now-completed file to a queue. A background thread
is then responsible for compressing that completed file.

What can happen, though, if the max storage space is small is that the data can 
actually be aged off from the repository
before that background task attempts to compress it. That can result in either 
a FileNotFoundException or an EOFException
when trying to read the TOC file (depending on the timing of when the age-off 
happens). It could potentially occur on the
.prov file, in addition to, or instead of the .toc file.

So generally, the solution is to increase the max storage size. It looks like 
you have 130 GB on each partition and 2 partitions per
node, so 260 GB total per node that you can use for provenance. So I would set 
the max storage size to something like "200 GB".
Since it is a soft limit and it may use more disk space than that temporarily 
before shrinking back down, you'll want to give it a little
bit of wiggle room.
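
Putting that together, the relevant conf/nifi.properties entries might look
like this (the 200 GB value follows the sizing reasoning above; the rollover
values shown are the defaults):

nifi.provenance.repository.max.storage.size=200 GB
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.rollover.time=30 secs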

Thanks
-Mark


On Feb 13, 2019, at 1:31 PM, Chad Woodhead <chadwoodh...@gmail.com> wrote:

Hey Joe,

Yes 
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository

Disk space is fine as well. I have dedicated mounts for provenance (as well all 
the repos have their own dedicated mounts):

nifi.provenance.repository.dir.default=/data/disk5/nifi/provenance_repository
nifi.provenance.repository.directory.provenance2=/data/disk6/nifi/provenance_repository

Both of these mounts have plenty of space and are only 1% full and have never 
become close to being filled up.



-Chad

On Wed, Feb 13, 2019 at 1:06 PM Joe Witt <joe.w...@gmail.com> wrote:
Chad,

In your conf/nifi.properties please see what the implementation is for your 
provenance repository. This is specified as:

nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository

Is that what you have?

The above error I believe could occur if the location where provenance is being 
written runs out of disk space.  It is important to ensure that prov is sized 
appropriately and on its own partition where this won't happen.  This is also true for the 
flow file repo.  The content repo is more resilient to this by design but still 
you want all three repo areas on their own partitions as per best practices.

Thanks
Joe

On Wed, Feb 13, 2019 at 1:03 PM Chad Woodhead <chadwoodh...@gmail.com> wrote:
I use the org.apache.nifi.provenance.WriteAheadProvenanceRepository and I am 
seeing the following error in my logs a lot and I can't view any provenance 
data in the UI:

2019-02-13 12:57:44,637 ERROR [Compress Provenance Logs-1-thread-1] 
o.a.n.p.s.EventFileCompressor Failed to read TOC File 
/data/disk5/nifi/provenance_repository/toc/158994812.toc
java.io.EOFException: null
at 
org.apache.nifi.provenance.toc.StandardTocReader.<init>(StandardTocReader.java:48)
at 
org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:93)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Any ideas on what could be going on?

-Chad



running multiple commands from a single ExecuteStreamCommand processor

2019-02-13 Thread Vijay Chhipa
Hi, 

I have a ExecuteStreamCommand  processor running a single command, (executing a 
 -jar  ) and it runs fine, 

I need to run the same command but with different arguments. 

My question is: Can I put multiple lines as command arguments and still have a 
single instance of the ExecuteStreamCommand?
  
Would those be treated as arguments to a single command, or each line of 
arguments would be treated as separate command?


Thanks 

Vijay







Re: Failed to read TOC File

2019-02-13 Thread Joe Witt
Chad,

In your conf/nifi.properties please see what the implementation is for your
provenance repository. This is specified as:

nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository

Is that what you have?

The above error I believe could occur if the location where provenance is
being written runs out of disk space.  It is important to ensure that prov
is sized appropriately and on its own partition where this won't happen.  This is also
true for the flow file repo.  The content repo is more resilient to this by
design but still you want all three repo areas on their own partitions as
per best practices.
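
For reference, a layout along these lines (the mount paths are illustrative)
keeps the three repos on separate partitions:

nifi.flowfile.repository.directory=/data/disk1/nifi/flowfile_repository
nifi.content.repository.directory.default=/data/disk2/nifi/content_repository
nifi.provenance.repository.directory.default=/data/disk3/nifi/provenance_repository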

Thanks
Joe

On Wed, Feb 13, 2019 at 1:03 PM Chad Woodhead 
wrote:

> I use the org.apache.nifi.provenance.WriteAheadProvenanceRepository and I
> am seeing the following error in my logs a lot and I can't view any
> provenance data in the UI:
>
> 2019-02-13 12:57:44,637 ERROR [Compress Provenance Logs-1-thread-1]
> o.a.n.p.s.EventFileCompressor Failed to read TOC File
> /data/disk5/nifi/provenance_repository/toc/158994812.toc
> java.io.EOFException: null
> at
> org.apache.nifi.provenance.toc.StandardTocReader.<init>(StandardTocReader.java:48)
> at
> org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:93)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> Any ideas on what could be going on?
>
> -Chad
>


Failed to read TOC File

2019-02-13 Thread Chad Woodhead
I use the org.apache.nifi.provenance.WriteAheadProvenanceRepository and I
am seeing the following error in my logs a lot and I can't view any
provenance data in the UI:

2019-02-13 12:57:44,637 ERROR [Compress Provenance Logs-1-thread-1]
o.a.n.p.s.EventFileCompressor Failed to read TOC File
/data/disk5/nifi/provenance_repository/toc/158994812.toc
java.io.EOFException: null
at
org.apache.nifi.provenance.toc.StandardTocReader.<init>(StandardTocReader.java:48)
at
org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:93)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Any ideas on what could be going on?

-Chad


Re: QueryRecord fails on nullable records

2019-02-13 Thread Mark Payne
Thanks. Unfortunately, I am not able to replicate locally, running master. Any 
chance that you have a recent build of master
that you can use to try to replicate? There were quite a few changes this time 
around for the record API in order to support
schema inference, etc.

On Feb 13, 2019, at 10:46 AM, Mike Thomsen <mikerthom...@gmail.com> wrote:

Schema access strategy is inherit record schema and the version is 1.8.

Thanks,

Mike

On Wed, Feb 13, 2019 at 10:37 AM Mark Payne <marka...@hotmail.com> wrote:
Mike,

That should be fine. The NullPointerException seems to be coming from the Avro 
Writer. I presume that's what you are using as the Writer?
What do you have set as the Avro Writer's Schema Access Strategy? What version 
of NiFi are you running?

Thanks
-Mark


> On Feb 13, 2019, at 9:55 AM, Mike Thomsen <mikerthom...@gmail.com> wrote:
>
> I have a pretty simple statement like this:
>
> SELECT * FROM FLOWFILE WHERE action = 'X'
>
> We have a long field that is nullable ( ["null", "long"] so we're clear) and 
> QueryRecord throws an exception saying that it couldn't handle a null in that 
> field.
>
>
> NullPointerException: null of long in field [name] of 
> org.apache.nifi.nifiRecord
>
> Am I missing something here?
>
> Thanks,
>
> Mike




Re: AmbariReportingTask help

2019-02-13 Thread Mark Payne
Chad,

The custom Processor would just need to be emitting a Provenance RECEIVE event 
(if the processor receives data as a new FlowFile)
or a FETCH event (if the processor is changing the contents of an existing 
FlowFile by pulling data from an external source).

The framework will then handle the rest.
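
For example, inside the processor's onTrigger the event might be reported
like this (a minimal sketch; the transit URI is hypothetical):

// Emitting a RECEIVE event marks the FlowFile as having come from an
// external source, so the framework counts it toward the
// FlowFilesReceived/BytesReceived metrics.
session.getProvenanceReporter().receive(flowFile, "https://example.com/source/data");
// Or, for a FETCH, when overwriting the content of an existing FlowFile:
// session.getProvenanceReporter().fetch(flowFile, "https://example.com/source/data");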

Thanks
-Mark


> On Feb 13, 2019, at 10:53 AM, Chad Woodhead  wrote:
> 
> Hey Mark,
> 
> I see. That makes sense why I didn’t see metrics for flowfiles coming from 
> GenerateFlowFile processor.
> 
> When using custom NARs to consume from external sources, is there any code 
> that needs to be added to the NAR so the processor’s metrics are reported in 
> this category or does NiFi handle that automatically?
> 
> -Chad
> 
> 
>> On Feb 13, 2019, at 9:44 AM, Mark Payne  wrote:
>> 
>> Chad,
>> 
>> It represents any FlowFile that was received from an external source. This 
>> could be via
>> Site-to-Site or could be from something like GetHTTP, FetchSFTP, etc. It 
>> correlates
>> to any RECEIVE or FETCH provenance events.
>> 
>> If you go to the Summary table (menu in the top right / Summary) and then go 
>> to the
>> Process Groups tab, you can see the Sent/Received metrics over the past 5 
>> minutes there.
>> 
>> Thanks
>> -Mark
>> 
>> 
>>> On Feb 13, 2019, at 9:32 AM, Chad Woodhead  wrote:
>>> 
>>> I was looking for some clarification on the AmbariReportingTask, and 
>>> specifically the metrics FlowFilesReceivedLast5Minutes and 
>>> BytesReceivedLast5Minutes. 
>>> 
>>> Looking at the configuration for AmbariReportingTask, it has the property 
>>> ‘Process Group ID’ and explains that if left blank, the root process group 
>>> is used and global metrics are sent. (I have ‘Process Group ID’ set to 
>>> blank in my AmbariReportingTask)  So for global metrics on the root process 
>>> group, what does it use as a metric for FlowFilesReceived? All processors 
>>> or just the first processor in a flow considered for this metric? Is there 
>>> a way to see these metrics on the root process group in NiFi similar to 
>>> ‘Status History’ on a component?
>>> 
>>> Thanks,
>>> Chad
>> 
> 



Re: AmbariReportingTask help

2019-02-13 Thread Chad Woodhead
Hey Mark,

I see. That makes sense why I didn’t see metrics for flowfiles coming from 
GenerateFlowFile processor.

When using custom NARs to consume from external sources, is there any code that 
needs to be added to the NAR so the processor’s metrics are reported in this 
category or does NiFi handle that automatically?

-Chad


> On Feb 13, 2019, at 9:44 AM, Mark Payne  wrote:
> 
> Chad,
> 
> It represents any FlowFile that was received from an external source. This 
> could be via
> Site-to-Site or could be from something like GetHTTP, FetchSFTP, etc. It 
> correlates
> to any RECEIVE or FETCH provenance events.
> 
> If you go to the Summary table (menu in the top right / Summary) and then go 
> to the
> Process Groups tab, you can see the Sent/Received metrics over the past 5 
> minutes there.
> 
> Thanks
> -Mark
> 
> 
>> On Feb 13, 2019, at 9:32 AM, Chad Woodhead  wrote:
>> 
>> I was looking for some clarification on the AmbariReportingTask, and 
>> specifically the metrics FlowFilesReceivedLast5Minutes and 
>> BytesReceivedLast5Minutes. 
>> 
>> Looking at the configuration for AmbariReportingTask, it has the property 
>> ‘Process Group ID’ and explains that if left blank, the root process group 
>> is used and global metrics are sent. (I have ‘Process Group ID’ set to blank 
>> in my AmbariReportingTask)  So for global metrics on the root process group, 
>> what does it use as a metric for FlowFilesReceived? All processors or just 
>> the first processor in a flow considered for this metric? Is there a way to 
>> see these metrics on the root process group in NiFi similar to ‘Status 
>> History’ on a component?
>> 
>> Thanks,
>> Chad
> 



QueryRecord fails on nullable records

2019-02-13 Thread Mike Thomsen
I have a pretty simple statement like this:

SELECT * FROM FLOWFILE WHERE action = 'X'

We have a long field that is nullable ( ["null", "long"] so we're clear)
and QueryRecord throws an exception saying that it couldn't handle a null
in that field.


NullPointerException: null of long in field [name] of
org.apache.nifi.nifiRecord
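
For reference, a nullable long field in an Avro schema is declared along
these lines (the field name is hypothetical):

{"name": "duration", "type": ["null", "long"], "default": null}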

Am I missing something here?

Thanks,

Mike


Re: AmbariReportingTask help

2019-02-13 Thread Mark Payne
Chad,

It represents any FlowFile that was received from an external source. This 
could be via
Site-to-Site or could be from something like GetHTTP, FetchSFTP, etc. It 
correlates
to any RECEIVE or FETCH provenance events.

If you go to the Summary table (menu in the top right / Summary) and then go to 
the
Process Groups tab, you can see the Sent/Received metrics over the past 5 
minutes there.

Thanks
-Mark


> On Feb 13, 2019, at 9:32 AM, Chad Woodhead  wrote:
> 
> I was looking for some clarification on the AmbariReportingTask, and 
> specifically the metrics FlowFilesReceivedLast5Minutes and 
> BytesReceivedLast5Minutes. 
> 
> Looking at the configuration for AmbariReportingTask, it has the property 
> ‘Process Group ID’ and explains that if left blank, the root process group is 
> used and global metrics are sent. (I have ‘Process Group ID’ set to blank in 
> my AmbariReportingTask)  So for global metrics on the root process group, 
> what does it use as a metric for FlowFilesReceived? All processors or just 
> the first processor in a flow considered for this metric? Is there a way to 
> see these metrics on the root process group in NiFi similar to ‘Status 
> History’ on a component?
> 
> Thanks,
> Chad



AmbariReportingTask help

2019-02-13 Thread Chad Woodhead
I was looking for some clarification on the AmbariReportingTask, and 
specifically the metrics FlowFilesReceivedLast5Minutes and 
BytesReceivedLast5Minutes. 

Looking at the configuration for AmbariReportingTask, it has the property 
‘Process Group ID’ and explains that if left blank, the root process group is 
used and global metrics are sent. (I have ‘Process Group ID’ set to blank in my 
AmbariReportingTask)  So for global metrics on the root process group, what 
does it use as a metric for FlowFilesReceived? All processors or just the first 
processor in a flow considered for this metric? Is there a way to see these 
metrics on the root process group in NiFi similar to ‘Status History’ on a 
component?

Thanks,
Chad

Re: Custom Processor - TestRunner Out of Memory

2019-02-13 Thread Otto Fowler
Can you create a jira with your use case?


On February 13, 2019 at 04:58:46, Mike Thomsen (mikerthom...@gmail.com)
wrote:

Not at the moment, that could be a useful improvement.

On Tue, Feb 12, 2019 at 3:11 PM Shawn Weeks 
wrote:

> With the NiFi TestRunner class for a Processor, is there a way to have it
> write the output stream of the processor to disk so that it’s not trying to
> store the thing in a ByteArrayOutputStream? I’ve got a test case that uses
> a rather large test file to verify some edge cases and I can’t figure out
> how to test it. I’m working off of the NiFi 1.5 Processor Archetype.
>
>
>
> Thanks
>
> Shawn Weeks
>


Re: Custom Processor - TestRunner Out of Memory

2019-02-13 Thread Mike Thomsen
Not at the moment, that could be a useful improvement.

On Tue, Feb 12, 2019 at 3:11 PM Shawn Weeks 
wrote:

> With the NiFi TestRunner class for a Processor, is there a way to have it
> write the output stream of the processor to disk so that it’s not trying to
> store the thing in a ByteArrayOutputStream? I’ve got a test case that uses
> a rather large test file to verify some edge cases and I can’t figure out
> how to test it. I’m working off of the NiFi 1.5 Processor Archetype.
>
>
>
> Thanks
>
> Shawn Weeks
>


Re: Nifi registry Kerberos Auth with Docker

2019-02-13 Thread Tomislav Novosel
Also, FYI.

If I set for INITIAL_ADMIN_IDENTITY my user's full DN, cn=...,ou=...,dc=...
I can also log into the UI, but there is no properties button in the upper
right of the UI.

[image: 1.PNG]

If I set only USERNAME to be u21g46, I can see the properties button, but I
can't add new users.

BR,
Tom

On Fri, 8 Feb 2019 at 16:03, Bryan Bende  wrote:

> Thinking about it more, I guess if you are not trying to do spnego
> then that message from the logs is not really an error. The registry
> UI always tries the spnego end-point first and if it returns the
> conflict response (as the log says) then you get sent to the login
> page.
>
> Maybe try turning on debug logging by editing logback.xml: find <logger name="org.apache.nifi.registry" level="INFO"/> and change the level to DEBUG.
>
> On Fri, Feb 8, 2019 at 9:51 AM Tomislav Novosel 
> wrote:
> >
> > Hi Bryan,
> >
> > I don't have these properties populated in the Nifi registry instance
> > outside Docker (as a service on a linux server), and everything works.
> >
> > What are these properties for?
> >
> > Regards,
> > Tom
> >
> >
> >
> > On Fri, 8 Feb 2019 at 15:25, Bryan Bende  wrote:
> >>
> >> The message about "Kerberos service ticket login not supported by this
> >> NiFi Registry" means that one of the following properties is not
> >> populated:
> >>
> >> nifi.registry.kerberos.spnego.principal=
> >> nifi.registry.kerberos.spnego.keytab.location=
> >>
> >> On Fri, Feb 8, 2019 at 8:20 AM Tomislav Novosel 
> wrote:
> >> >
> >> > Hi Daniel,
> >> >
> >> > Ok, I see. Thanks for the answer.
> >> >
> >> > I switched to the official Nifi registry image. I succeeded to spin up
> a registry in a docker container and to
> >> > set up the Kerberos provider in identity-providers.xml. Also I configured
> authorizers.xml as per the official Nifi documentation.
> >> >
> >> > I already have the same setup with Kerberos, but not in a Docker
> container. And everything works like a charm.
> >> >
> >> > When I enter credentials, login does not pass. This is app log:
> >> >
> >> > 2019-02-08 12:52:30,568 INFO [NiFi Registry Web Server-14]
> o.a.n.r.w.m.IllegalStateExceptionMapper java.lang.IllegalStateException:
> Kerberos service ticket login not supported by this NiFi Registry.
> Returning Conflict response.
> >> > 2019-02-08 12:52:30,644 INFO [NiFi Registry Web Server-13]
> o.a.n.r.w.s.NiFiRegistrySecurityConfig Client could not be authenticated
> due to:
> org.springframework.security.authentication.AuthenticationCredentialsNotFoundException:
> An Authentication object was not found in the SecurityContext Returning 401
> response.
> >> > 2019-02-08 12:52:50,557 INFO [NiFi Registry Web Server-14]
> o.a.n.r.w.m.UnauthorizedExceptionMapper
> org.apache.nifi.registry.web.exception.UnauthorizedException: The supplied
> client credentials are not valid.. Returning Unauthorized response.
> >> >
> >> > Not sure what is going on here.
> >> >
> >> > Regards,
> >> > Tom
> >> >
> >> >
> >> > On Fri, 8 Feb 2019 at 11:36, Daniel Chaffelson 
> wrote:
> >> >>
> >> >> Hi Tomislav,
> >> >> I created that build a long time ago before the official apache one
> was up, and it is out of date sorry.
> >> >> Can I suggest you switch to the official apache image that Kevin
> mentioned and try again? It is an up to date version and recommended by the
> community.
> >> >>
> >> >> On Thu, Feb 7, 2019 at 5:54 PM Tomislav Novosel <
> to.novo...@gmail.com> wrote:
> >> >>>
> >> >>> Hi Kevin,
> >> >>>
> >> >>> I'm using image from Docker hub on this link:
> >> >>> https://hub.docker.com/r/chaffelson/nifi-registry
> >> >>>
> >> >>> I think I know where the problem is. The problem is in the config file,
> where
> >> >>> the http host and http port properties remain even if I manually set the
> https host and https port.
> >> >>> I set http host and http port to be empty, but when I started the
> container again, those values were there again.
> >> >>>
> >> >>> I don't know what the author of the image meant by this:
> >> >>>
> >> >>> The Docker image can be built using the following command:
> >> >>>
> >> >>> .
> ~/Projects/nifi-dev/nifi-registry/nifi-registry-docker/dockerhub/DockerBuild.sh
> >> >>>
> >> >>> What does this command mean?
> >> >>>
> >> >>> And this:
> >> >>>
> >> >>> Note: The default version of NiFi-Registry specified by the
> Dockerfile is typically that of one that is unreleased if working from
> source. To build an image for a prior released version, one can override
> the NIFI_REGISTRY_VERSION build-arg with the following command:
> >> >>>
> >> >>> docker build --build-arg=NIFI_REGISTRY_VERSION={Desired
> NiFi-Registry Version} -t apache/nifi-registry:latest .
> >> >>>
> >> >>> For this command above you need to have Dockerfile. I tried with
> Dockerfile from docker hub, but there are errors in execution on this line:
> >> >>>
> >> >>> ADD sh/ ${NIFI_REGISTRY_BASE_DIR}/scripts/
> >> >>>
> >> >>>  On the other hand, if I manage to get the image with the first
> command, I will get Nifi registry version 0.1.0, which I don't want.
> >> >>>
> >> >>> I'm 

Re: Nifi registry Kerberos Auth with Docker

2019-02-13 Thread Tomislav Novosel
Hi all,

I gave up on Kerberos auth from Docker; it is a strange issue.
I then switched to LDAP auth in the Docker container and it works.

I'm using the official nifi image and I used the 'docker run' command from the site:
https://hub.docker.com/r/apache/nifi

But still, the issue remains... after I log in, I can't add new users or modify
them.

In the conf folder I see in authorizations.xml that my initial admin identity
user has rights to do that.

My conf for authorizers.xml is this:



<authorizers>
    <userGroupProvider>
        <identifier>file-user-group-provider</identifier>
        <class>org.apache.nifi.registry.security.authorization.file.FileUserGroupProvider</class>
        <property name="Users File">./conf/users.xml</property>
        <property name="Initial User Identity 1">user1</property>
    </userGroupProvider>

    <accessPolicyProvider>
        <identifier>file-access-policy-provider</identifier>
        <class>org.apache.nifi.registry.security.authorization.file.FileAccessPolicyProvider</class>
        <property name="User Group Provider">file-user-group-provider</property>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <property name="Initial Admin Identity">user1</property>
    </accessPolicyProvider>

    <authorizer>
        <identifier>managed-authorizer</identifier>
        <class>org.apache.nifi.registry.security.authorization.StandardManagedAuthorizer</class>
        <property name="Access Policy Provider">file-access-policy-provider</property>
    </authorizer>
</authorizers>


In identity-providers.xml everything is good, I believe, as I can log into
the Nifi UI.

Also, when I open user1's properties in the Nifi UI I can see the privileges of that
initial user, and it has all the rights to create new users, policies, etc.

What am I missing?

Thanks,
Tom








On Fri, 8 Feb 2019 at 16:03, Bryan Bende  wrote:

> Thinking about it more, I guess if you are not trying to do spnego
> then that message from the logs is not really an error. The registry
> UI always tries the spnego end-point first and if it returns the
> conflict response (as the log says) then you get sent to the login
> page.
>
> Maybe try turning on debug logging by editing logback.xml: find <logger name="org.apache.nifi.registry" level="INFO"/> and change the level to DEBUG.
>
> On Fri, Feb 8, 2019 at 9:51 AM Tomislav Novosel 
> wrote:
> >
> > Hi Bryan,
> >
> > I don't have these properties populated in the Nifi registry instance
> > outside Docker (as a service on a linux server), and everything works.
> >
> > What are these properties for?
> >
> > Regards,
> > Tom
> >
> >
> >
> > On Fri, 8 Feb 2019 at 15:25, Bryan Bende  wrote:
> >>
> >> The message about "Kerberos service ticket login not supported by this
> >> NiFi Registry" means that one of the following properties is not
> >> populated:
> >>
> >> nifi.registry.kerberos.spnego.principal=
> >> nifi.registry.kerberos.spnego.keytab.location=
> >>
> >> On Fri, Feb 8, 2019 at 8:20 AM Tomislav Novosel 
> wrote:
> >> >
> >> > Hi Daniel,
> >> >
> >> > Ok, I see. Thanks for the answer.
> >> >
> >> > I switched to the official Nifi registry image. I succeeded to spin up
> a registry in a docker container and to
> >> > set up the Kerberos provider in identity-providers.xml. Also I configured
> authorizers.xml as per the official Nifi documentation.
> >> >
> >> > I already have the same setup with Kerberos, but not in a Docker
> container. And everything works like a charm.
> >> >
> >> > When I enter credentials, login does not pass. This is app log:
> >> >
> >> > 2019-02-08 12:52:30,568 INFO [NiFi Registry Web Server-14]
> o.a.n.r.w.m.IllegalStateExceptionMapper java.lang.IllegalStateException:
> Kerberos service ticket login not supported by this NiFi Registry.
> Returning Conflict response.
> >> > 2019-02-08 12:52:30,644 INFO [NiFi Registry Web Server-13]
> o.a.n.r.w.s.NiFiRegistrySecurityConfig Client could not be authenticated
> due to:
> org.springframework.security.authentication.AuthenticationCredentialsNotFoundException:
> An Authentication object was not found in the SecurityContext Returning 401
> response.
> >> > 2019-02-08 12:52:50,557 INFO [NiFi Registry Web Server-14]
> o.a.n.r.w.m.UnauthorizedExceptionMapper
> org.apache.nifi.registry.web.exception.UnauthorizedException: The supplied
> client credentials are not valid.. Returning Unauthorized response.
> >> >
> >> > Not sure what is going on here.
> >> >
> >> > Regards,
> >> > Tom
> >> >
> >> >
> >> > On Fri, 8 Feb 2019 at 11:36, Daniel Chaffelson 
> wrote:
> >> >>
> >> >> Hi Tomislav,
> >> >> I created that build a long time ago before the official apache one
> was up, and it is out of date sorry.
> >> >> Can I suggest you switch to the official apache image that Kevin
> mentioned and try again? It is an up to date version and recommended by the
> community.
> >> >>
> >> >> On Thu, Feb 7, 2019 at 5:54 PM Tomislav Novosel <
> to.novo...@gmail.com> wrote:
> >> >>>
> >> >>> Hi Kevin,
> >> >>>
> >> >>> I'm using image from Docker hub on this link:
> >> >>> https://hub.docker.com/r/chaffelson/nifi-registry
> >> >>>
> >> >>> I think I know where the problem is. The problem is in the config file,
> where
> >> >>> the http host and http port properties remain even if I manually set the
> https host and https port.
> >> >>> I set http host and http port to be empty, but when I started the
> container again, those values were there again.
> >> >>>
> >> >>> I don't know what the author of the image meant by this:
> >> >>>
> >> >>> The Docker image can be built using the following command:
> >>