Hi all,
I am trying to implement a use case where I need unconnected process
groups to communicate. For example, I want to run some pre/post flows
around a main flow, starting each flow only after the previous one has
completed. Is there a way to achieve this?
I read about the Wait/Notify processors,
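The Wait/Notify pair is the usual way to bridge unconnected process groups. A rough sketch of the pattern (the processor and property names are the standard NiFi ones; the signal identifier value is made up):

```
Pre-flow:   ...last processor... -> Notify
                Distributed Cache Service:  DistributedMapCacheClientService
                Release Signal Identifier:  pre-flow-done

Main flow:  ...trigger...        -> Wait -> ...rest of flow...
                Distributed Cache Service:  DistributedMapCacheClientService
                Release Signal Identifier:  pre-flow-done
```

FlowFiles queue at the Wait processor until Notify writes the matching signal into the cache, so the main flow only proceeds after the pre-flow has completed. A DistributedMapCacheServer controller service must be running for both to talk to.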
Thanks for all of the help with this. I have the cluster up and running. The
logs look great and everything seems to be working, but I cannot log in to the UI.
I am using a wildcard self-signed certificate. The /proxy is in
authorizations.xml with the correct users for the nodes.
The error I see w
All identity strings are case- and whitespace-sensitive.
The node identities in your authorizers.xml have no whitespace, and
the identity showing in the logs does.
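The comparison is an exact string match, so even a single space makes two identities different. A trivial illustration (the identities here are made up):

```shell
# Identity as configured in authorizers.xml (no space after the comma)
configured='CN=node1.example.com,OU=NIFI'
# Identity as it shows up in the log (space after the comma)
presented='CN=node1.example.com, OU=NIFI'

# The file authorizer matches exactly, so these do not line up
[ "$configured" = "$presented" ] && echo match || echo mismatch
# prints: mismatch
```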
On Wed, Mar 21, 2018 at 11:05 AM, Scott Howell wrote:
> Thanks for all of the help with this. I have the cluster up and running. The
>
Thanks for that. I am still getting this error in my nifi-user.log
o.a.n.w.s.NiFiAuthenticationFilter Rejecting access to web api: Untrusted proxy
CN=*.{redacted}.com, OU={redacted}, O={redacted}, L=Kansas City, ST=Missouri,
C=US
Is there an issue with using a wildcard cert?
> On Mar 21, 201
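One way to check what DN the certificate actually presents is to print its subject with openssl and compare it character-for-character against the identities in authorizers.xml. The sketch below generates a throwaway self-signed wildcard cert purely for illustration (paths and DN values are made up), then prints its subject:

```shell
# Generate a throwaway self-signed wildcard cert (illustration only)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/wild.key -out /tmp/wild.crt \
  -subj '/C=US/ST=Missouri/L=Kansas City/O=Example/CN=*.example.com'

# Print the subject DN -- this is the identity the server sees from the client
openssl x509 -in /tmp/wild.crt -noout -subject
```

For a real deployment you would run the second command against the node's actual certificate instead.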
One other thing I am seeing, and I don't know if this is an issue or not: in my
authorizations.xml I do not have a policy for /proxy with action="R", only
action="W".
> On Mar 21, 2018, at 11:03 AM, Scott Howell wrote:
>
> Thanks for that. I am still getting this error in my nifi-user.log
>
> o
There only needs to be W to /proxy so that part should be fine.
After you edited the Node Identities, did you delete users.xml and
authorizations.xml?
You would have to do that for those changes to take effect. You can
look in users.xml and see if you still have the user identities
without whites
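For reference, a /proxy write policy in authorizations.xml typically looks like the fragment below (the identifiers are made-up UUIDs; the user identifier must reference the node's entry in users.xml):

```xml
<policy identifier="11111111-2222-3333-4444-555555555555" resource="/proxy" action="W">
    <user identifier="66666666-7777-8888-9999-000000000000"/>
</policy>
```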
Thanks, I have checked that and the whitespace is correct in users.xml.
I did make a change to my authorizers.xml
identifier:               file-provider
class:                    org.apache.nifi.authorization.FileAuthorizer
Authorizations File:      /opt/config/authorizations.xml
Users File:               /opt/config/users.xml
Initial Admin Identity:   uid=scott,ou=users,dc=mobilgov,dc=com
CN
I've never used wildcard certs before so I'll have to defer to others
that might know if there is any issue with that.
Could you show the contents of these two files just so we can double check
the users/policies?
/opt/config/authorizations.xml
/opt/config/users.xml
On Wed, Mar 21, 2018 at 12:37 PM,
users.xml
authorizations.xml
Ok that looks correct for the 1-node case.
So just to clarify what is working and not working...
With the config in the last email, you have a 1 node cluster that is
working and you can get into the UI?
For the two node case you would need each node to have a users.xml
with users for the two nod
I do have a one node cluster working with the configuration below.
This is the users.xml for my 2 node cluster
authorizations.xml
Have you tried using the same 1 node config for the 2 node scenario?
I think since you have wildcard server certs, requests are going to
come from "CN=*.{redacted}.com, OU={redacted}, O={redacted}, L=Kansas
City, ST=Missouri, C=US" no matter which node the request comes from,
so there would be no
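If that is the case, a single Node Identity entry matching the wildcard DN exactly (including the spaces after each comma) should cover every node. An illustrative authorizers.xml fragment, with redacted/made-up values:

```xml
<property name="Node Identity 1">CN=*.example.com, OU=Example, O=Example, L=Kansas City, ST=Missouri, C=US</property>
```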
I have an Avro .avsc file for a table with a definition like:
{"type": "record", "name": "INV_ADJ", "namespace": "NSP_SCH", "fields": [
    {"name": "table", "type": "string"},
    {"name": "op_type", "type": "string"},
    {"name": "op_ts", "type": "string"},
    {"name": "current_ts", "type": "string"},
    {"name": "pos", "type": "string"},
    {
Can you share a template of your process group?
Do the messages in Kafka have the Avro schema included in them or are
they just the data portion of the record?
On Wed, Mar 21, 2018 at 9:16 PM, Colin Williams
wrote:
> I have an avro avsc file for a table with a definition like:
>
> {"type":"recor
Hi Joe,
I don't believe the Avro schema is included, and expect they are just the data
portion... I think that's why I need to use the .avsc file mentioned above...
On Wed, Mar 21, 2018 at 6:19 PM, Joe Witt wrote:
> Can you share a template of your process group?
>
> Do the messages in Kafka have the Av
Colin,
You're using the ConsumeKafka processors. Given that this is Avro
data for which you have a schema, I strongly recommend you use
ConsumeKafkaRecord0_10...
In that you get to specify the record reader/writer you'll need. You
will also see dramatically higher performance.
Lets get yo
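A rough sketch of the record-based setup, assuming the Kafka messages are schema-less Avro (the component and property names below are the standard NiFi ones; the values are illustrative):

```
ConsumeKafkaRecord_0_10
    Record Reader  -> AvroReader
        Schema Access Strategy:  Use 'Schema Text' Property
        Schema Text:             <contents of the .avsc file>
    Record Writer  -> AvroRecordSetWriter (or JsonRecordSetWriter, etc.)
```

Since the messages carry only the data portion, the reader needs the schema supplied explicitly; pasting the .avsc contents into the Schema Text property avoids any external registry.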
Hi Joe,
Thanks for the suggestion. I started with ConsumeKafkaRecord0_10,
but I had read that the only way to configure schema.name was via a
properties file, which I read also required a restart of NiFi.
http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-1-3-0-Problem-with-schema-nam