I'm using InferAvroSchema to generate the schema, so that I don't
need to know the JSON structure in advance.
Another question: can the LookupRecord look up based on the Mongo _id field?
Using the GetMongo processor, I had the following as my query: { _id:
ObjectId("${uuid}") }
Is t
I see... did anything change with your configuration of HBase and
ZooKeeper on the MapR side of things?
The original stack trace you provided indicates the HBase client
is not authorized to access the /hbase znode in ZooKeeper.
If nothing changed on the MapR side of things, I have no idea why
Interestingly, I didn't have to change nifi-hbase_1_1_2-client-service/pom.xml
to include the MapR version of the library in the NiFi 1.6 code.
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.1.2</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Ravi,
The NullPointerException means that the MapR version of the HBase
client (1.1.8-mapr-1703) is returning null for either
getClusterStatus() or getMaster().
Whatever version you were using before was probably not returning
null. It would probably be a question for MapR as to why that ver
Hi Doug, thanks for sharing. I do believe this will help people looking to
perform similar actions. Is there a specific section of the Apache
documentation you found confusing? You mentioned that the documentation around
exporting the flow.xml.gz file is lacking; is there a specific place you
wou
I understand where the problem is. The reason we replaced
*this.client.setEndpoint(urlstr);* with *this.client.setEndpoint(urlstr,
this.client.getServiceName(), this.region.getName());* was that, when
working with VPC-enabled services, the endpoint URL wasn't properly parsed.
Having said that, *this.
My incoming FlowFile is a valid JSON object. The key names could be
anything; they're not defined.
I need to add a top level object to every JSON Object, based on the result
from MongoDB.
Is that possible with the JsonTreeReader/Schema, or do I need to know what
the fields are?
Input:
{ "key1"
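Outside of NiFi, the transformation being asked for can be sketched in a few lines: take an arbitrary JSON object and attach the lookup result under one new top-level key. This is only an illustration of the desired behavior (the key names and the `enrich` helper are hypothetical, not part of any NiFi API):

```python
import json

def enrich(flowfile_json: str, lookup_result: dict, target_key: str = "enrichment") -> str:
    """Attach a lookup result as a new top-level field on an arbitrary JSON object."""
    record = json.loads(flowfile_json)   # the existing keys need not be known in advance
    record[target_key] = lookup_result   # add the enriched branch
    return json.dumps(record)

# Example: any incoming object, enriched with a (hypothetical) Mongo lookup result
enriched = enrich('{"key1": "value1"}', {"city": "Springfield"})
```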
Setting line 289 in AbstractAWSProcessor.java to
this.client.setEndpoint(urlstr);
worked for me.
Regards,
Andrew
On 11/30/18 09:12, Andrew McDonald wrote:
This is on an air-gapped system so I'll type in the essential part
Failed to receive messages from Amazon SQS due to
com.amazonaws.servi
For the record side of things, you just need to create a schema that
includes your existing JSON fields and a new branch that will have the
enriched fields in it.
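As a minimal sketch of what such a schema could look like (the field names here are hypothetical, standing in for your existing JSON fields), an Avro schema for the record reader/writer would list the current fields plus one nested record for the enrichment branch:

```json
{
  "type": "record",
  "name": "EnrichedRecord",
  "fields": [
    { "name": "key1", "type": "string" },
    { "name": "enrichment", "type": {
        "type": "record",
        "name": "Enrichment",
        "fields": [
          { "name": "city", "type": ["null", "string"], "default": null }
        ]
      }
    }
  ]
}
```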
On Fri, Nov 30, 2018 at 10:39 AM Ryan Hendrickson <
ryan.andrew.hendrick...@gmail.com> wrote:
> Hi Otto and Mike,
> The LookupRecor
Hi Otto and Mike,
The LookupRecord does look fruitful, although I don't have a defined
schema for the JsonTreeReader. Is there a way to just keep it generic? I
know I already have valid JSON; I just want to add the result of the
Mongo query to a specific JSON path in the FlowFile.
The Look
LookupAttribute + the MongoDBLookupService should be able to do that.
On Thu, Nov 29, 2018 at 8:05 PM Otto Fowler wrote:
> Sounds like you want to look at enrichment with the LookupRecord
> processors and Mongo.
>
> https://community.hortonworks.com/articles/146198/data-flow-enrichment-with-nifi
This is on an air-gapped system so I'll type in the essential part
Failed to receive messages from Amazon SQS due to
com.amazonaws.services.sqs.model.AmazonSQSException: Credential should
be scoped to a valid region, not 'us-east-1'. (Service: AmazonSQS;
Status Code: 403; Error Code: Signature