Hi Bryan:
Thanks for your response.
Using GetSFTP and PutHDFS is helpful.
Now I have run into another problem. Besides HDFS, the pictures from the remote server
also need to be put into HBase, with the filename as the rowkey and the file content
as a column value. This is why I store the pictures locally first and then
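One way to sketch the HBase side in NiFi is to feed the fetched flow files straight into a PutHBaseCell processor, which writes the flow file content as the cell value; the table, column family, and qualifier names below are made-up placeholders:

```
GetSFTP -> PutHBaseCell

PutHBaseCell (illustrative property values):
  Table Name        pictures        # hypothetical table
  Row Identifier    ${filename}     # filename attribute as the rowkey
  Column Family     cf              # hypothetical column family
  Column Qualifier  data            # hypothetical qualifier
  # cell value = flow file content, i.e. the picture bytes
```

With this kind of setup there is no need to stage the files on local disk at all.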
Hi Tim,
Sorry for the inconvenience but this was a bug that was introduced in
Apache NiFi 1.5.0. This was addressed in this JIRA [1] and will be included
in Apache NiFi 1.6.0 which is currently under vote.
Matt
[1] https://issues.apache.org/jira/browse/NIFI-4894
On Thu, Mar 22, 2018 at 3:13
Hello,
I am working with NiFi 1.5.0 and I have created a custom controller service
for my project. I am able to add my controller service to my NiFi flow
configuration, but I am unable to enable that controller service without first
encountering error messages that I need to dismiss. After
Hi,
I have re-compiled NiFi with the MapR dependencies per the instructions at
http://hariology.com/integrating-mapr-fs-and-apache-nifi/
and created a process flow with ListFile > FetchFile > PutHDFS. As soon as I start
this process group, nifi-bootstrap.log fills up with
2018-03-21 22:56:26,806
Hello,
It would probably be best to use GetSFTP -> PutHDFS.
No need to write the files out to local disk somewhere else with
PutFile; they can go straight to HDFS.
The filename in HDFS will be the "filename" attribute of the flow
file, which GetSFTP should be setting to the filename it picked
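As a rough sketch, the two-processor flow might look like the following (the directory values are placeholders, not something from the thread):

```
GetSFTP (Hostname, Remote Path, etc. pointing at the remote server)
  -> PutHDFS
       Hadoop Configuration Resources: /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
       Directory: /data/pictures      # placeholder HDFS target directory
```

Each file lands in HDFS under the name carried in the flow file's "filename" attribute.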
Colin,
You would not have seen this error before from the Kafka processor you
were using, as it would not be doing any deserialization. You would
have seen that error downstream in the LookupRecord processor you had.
Just wanted to clarify that point.
That you're seeing the error now with
Hi,
I configured ConsumeKafkaRecord and set schema.name to the value. I think
something was preventing me from editing its value previously.
Anyhow I have what looks like a similar error message:
2018-03-22 09:24:24,990 ERROR [Timer-Driven Process Thread-6]
Hi all,
My requirement is to put pictures from a remote server (not in the NiFi cluster)
into HDFS.
First I use GetSFTP and PutFile to get the pictures onto local disk, and then use
ExecuteFlumeSource and ExecuteFlumeSink to put the pictures into HDFS from there.
However, there is a problem that the name
It’s of course not a recommended best practice for real-world use, but for a
demo, yes, that should work. Keep in mind that if you add those files, you cannot
change the initial admin identity using the Docker environment variables (as
the authorizers.xml initial admin only gets applied when
Thanks. Now test can log in. One last question... if I copy users.xml and
authorizations.xml to the host file system and inject them into later runs
of the Docker image through volume references, would that work for making a
reproducible demo?
On Thu, Mar 22, 2018 at 10:08 AM, Kevin Doran
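For reference, injecting previously captured users.xml and authorizations.xml into a later run might look roughly like this in a compose file; the service name and in-container conf path are assumptions and can differ by image version:

```yaml
services:
  nifi:
    image: apache/nifi:1.5.0
    volumes:
      # files captured from a first run of the container
      - ./users.xml:/opt/nifi/nifi-current/conf/users.xml
      - ./authorizations.xml:/opt/nifi/nifi-current/conf/authorizations.xml
```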
Yeah, from looking at your Docker compose file, your LDAP search base/filter is
configured as:
LDAP_USER_SEARCH_BASE='ou=people,dc=nifi,dc=com'
LDAP_USER_SEARCH_FILTER='uid={0}'
This means that NiFi is going to search the directory for any nodes that are
children of
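With that search base and filter, a user entry would need to look something like the following LDIF for a login of "test" to match; the object classes and attributes here are illustrative:

```ldif
dn: uid=test,ou=people,dc=nifi,dc=com
objectClass: inetOrgPerson
uid: test
cn: test
sn: test
userPassword: password
```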
I added two entries:
uid=test
cn=test, ou=people, dc=nifi, dc=com
Tried logging in w/ test/password (what the LDIF uses)
Got: Unknown user with identity 'test'. Contact the system administrator.
Any ideas?
On Thu, Mar 22, 2018 at 9:34 AM, Kevin Doran wrote:
> Mike,
Mike,
To my knowledge, the Docker image does not yet have support for adding the
LdapUserGroupProvider to authorizers.xml. It only adds the LdapProvider to
login-identity-provider.xml. This means you should be able to
login/authenticate as an LDAP user, but users and groups will not sync in
Thanks. I fixed that, but it's still not returning any users from the LDAP.
It's weird because the LDAP docker image is set up using the same
configuration from Pierre's blog posts that I've gotten to work outside of
Docker. I'm also not seeing anything in the logs indicating that it's
trying the
Sorry, meant to include the link to start.sh, which is in our codebase [1].
I’m only pointing it out b/c it looked like in your Docker compose file that
you wanted this to be an LDAP demo.
[1]
https://github.com/apache/nifi/blob/master/nifi-docker/dockerhub/sh/start.sh#L30
From: Kevin
Good eye, Pierre.
Mike, unrelated to the initial admin question, but anticipating something you
might run into after you get that part working. Change the "AUTH=tls"
environment variable value to "AUTH=ldap". (I know the README file for the
docker image uses ‘AUTH=tls’ in the documentation
They were. I did a copy from the Docker Hub page and didn't think they'd
harm anything in the YAML. Removing them got initialAdmin to work.
On Thu, Mar 22, 2018 at 8:20 AM, Pierre Villard wrote:
> Hmmm no... the single quotes must be the issue here... I would
Hmmm no... the single quotes must be the issue here... I would expect
identity="CN=initialAdmin, OU=NIFI"
In your yaml file, I'd try to use double quotes around your property values.
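In other words, something along these lines, where the quotes wrap the whole entry so YAML consumes them instead of their becoming part of the value (the variable name follows the NiFi Docker image conventions seen in this thread):

```yaml
environment:
  # quotes inside the value, e.g. ='CN=initialAdmin, OU=NIFI', would end up
  # in the identity string itself and fail to match the certificate DN
  - "INITIAL_ADMIN_IDENTITY=CN=initialAdmin, OU=NIFI"
```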
2018-03-22 13:16 GMT+01:00 Mike Thomsen :
> Yeah, that's the weird part. It looks valid
Yeah, that's the weird part. It looks valid to me:
On Thu, Mar 22, 2018 at 8:07 AM, Pierre Villard wrote:
> Hey Mike,
>
> Can you check the users.xml file created by NiFi when it started for the
> first time?
>
> 2018-03-22 12:41
Hey Mike,
Can you check the users.xml file created by NiFi when it started for the
first time?
2018-03-22 12:41 GMT+01:00 Mike Thomsen :
> I'm trying to use the Docker image to set up a secure NiFi demo, and am
> running into this error:
>
> Unknown user with identity
I'm trying to use the Docker image to set up a secure NiFi demo, and am
running into this error:
Unknown user with identity 'CN=initialAdmin, OU=NIFI'. Contact the system
administrator.
SSL works, I verified that the owner in the cert is "CN=initialAdmin,
OU=NIFI"
I've attached the Docker
Hello,
You don’t need to use a properties file. In your AvroRecordReader, just
change the Schema Name property from ${schema.name} to the actual name of
the schema you want to use from the schema registry.
It just means that the record reader can only be used with one schema now,
rather than
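Concretely, that is just a change to one property on the reader; the registry service and schema name below are illustrative:

```
AvroReader controller service:
  Schema Access Strategy   Use 'Schema Name' Property
  Schema Registry          AvroSchemaRegistry   # whichever registry service is configured
  Schema Name              my-schema            # actual schema name instead of ${schema.name}
```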
Hi Joe,
Thanks for the suggestion. I started by using the ConsumeKafkaRecord0_10.
But I had read that the only way to configure the schema.name was via a
properties file, which I read also required a restart of NiFi.