Hi Faustina,
I'm not familiar with MongoDB query syntax, but based on the
Stack Overflow answer, to avoid the JsonParseException when querying a
field whose name contains a space, something like the following may
work, i.e. escaping the double quotes:
{"$where": "\"Incident Submitted Dt\" >= dd()"}
Thanks,
Koji
On Wed,
Hi,
I want my GetMongo processor to fetch only the data of the past 2 days,
so I referred to this link to create a function and call that function
from the query field in the GetMongo processor:
https://stackoverflow.com/questions/44573618/how-to-get-iso-string-in-nifi-getmongo-query
Hello All,
Logged https://issues.apache.org/jira/browse/NIFI-4352 for this issue.
Thanks!!
On Mon, Sep 4, 2017 at 10:24 AM, mayank rathi
wrote:
> Hello All,
>
> I am passing sql.args.1.type = 2005 and sql.args.1.value as a CLOB value
> to PutSQL and it is throwing
Hi Phil,
I have just uploaded a template to [1] that will illustrate integrating NiFi
with Kafka using the Confluent Schema Registry.
The template will be at the bottom of the linked page. The instructions
there should illustrate how to publish a compatible schema to the Confluent
Schema
Phil,
A couple of quick questions. With your Java application that is consuming
data have you configured the KafkaAvroDecoder for the Value? Have you also
tried consuming that data from the command line using
the kafka-avro-console-consumer (that ships with the Confluent Schema
Registry)?
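For reference, a typical invocation of that console consumer might look like the following (the broker address and registry URL here are assumed local defaults, and `topic7` is taken from the configuration posted later in the thread):

```shell
kafka-avro-console-consumer \
  --topic topic7 \
  --bootstrap-server localhost:9092 \
  --from-beginning \
  --property schema.registry.url=http://localhost:8081
```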
Thanks,
Any Help please
NiFi PutHiveStreaming processor with Hive: Failed connecting to EndPoint
https://stackoverflow.com/q/45983631/8543695?sem=2
Any thoughts on this error?
On Mon, Sep 4, 2017 at 10:24 AM, mayank rathi
wrote:
> Hello All,
>
> I am passing sql.args.1.type = 2005 and sql.args.1.value as a CLOB value
> to PutSQL and it is throwing below error. How do I resolve this error?
>
> 2017-09-04 10:17:24,924
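For reference, type 2005 is java.sql.Types.CLOB. As an illustration only (a hypothetical helper, not NiFi code), the sql.args.N.* attribute convention that PutSQL reads looks like this:

```python
# Illustration of PutSQL's sql.args.N.* attribute convention
# (JDBC type codes from java.sql.Types; 2005 is CLOB).
JDBC_TYPES = {"INTEGER": 4, "VARCHAR": 12, "BLOB": 2004, "CLOB": 2005}

def sql_args(values_with_types):
    """Hypothetical helper: build the flowfile attribute map PutSQL reads."""
    attrs = {}
    for i, (value, type_name) in enumerate(values_with_types, start=1):
        attrs[f"sql.args.{i}.type"] = str(JDBC_TYPES[type_name])
        attrs[f"sql.args.{i}.value"] = value
    return attrs

attrs = sql_args([("a very long text value", "CLOB")])
print(attrs["sql.args.1.type"])  # -> 2005
```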
Hello Bryan,
Schema Write Strategy : Confluent Schema Registry Reference
Schema Access strategy : Use 'Schema Name' Property
Schema registry : ConfluentSchemaRegistry
Schema Name : topic7
Schema Text : ${avro.schema}
Compression Format : NONE
Thanks
Phil
-----Original Message-----
From:
Phil,
What is the "Schema Write Strategy" set to on the AvroRecordSetWriter?
-Bryan
On Tue, Sep 5, 2017 at 10:23 AM, wrote:
> Hello Joe
>
> This json content
> {"type":"room","id":"room11","attributes":{"position":"47.100,3.246","surface":223,"norme":"NF"}}
>
>
Hello Joe
This json content
{"type":"room","id":"room11","attributes":{"position":"47.100,3.246","surface":223,"norme":"NF"}}
is the input of the PublishKafkaRecord_0_10 processor. This processor has,
among others, the following properties set:
Record Reader : JsonTreeReader --> pointing
Hello
Can you share the details of how you're serializing the data? We'll
need to understand the configuration of the record writer of the
publish kafka process to help discuss about reading it back (be that
in nifi or otherwise).
Since we support multiple modes of encoding the schema
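For context on one of those encoding modes (from the Confluent documentation rather than this thread): when the writer references the Confluent Schema Registry, each Kafka message is framed with a magic byte and a 4-byte schema id before the Avro payload. A minimal Python sketch of that framing:

```python
import struct

# Confluent wire format: magic byte 0, then a 4-byte big-endian schema id,
# then the Avro-encoded payload.
def unpack_confluent(message: bytes):
    magic, schema_id = struct.unpack(">bI", message[:5])
    if magic != 0:
        raise ValueError("not Confluent-framed Avro")
    return schema_id, message[5:]

framed = b"\x00" + struct.pack(">I", 42) + b"avro-bytes"
schema_id, payload = unpack_confluent(framed)
print(schema_id, payload)  # -> 42 b'avro-bytes'
```

A consumer that is not configured for this framing (e.g. a plain Avro decoder) will fail on those first five bytes, which is one common cause of read-back errors.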
Hello, my SW environment --> NiFi 1.4 compiled from source, on Ubuntu
14.04, and a running Confluent 3.3 Platform (running the Confluent
Registry and Kafka Streams).
I have a process group with a PublishKafkaRecord_0_10 that serializes a
record (with 3 fields)
And another process group with
I want to process shared-drive data in NiFi.
I have a shared file present on a connected network. It prompts for
credentials before accessing that file.
Now I have to fetch that file from the shared network (//hostname/shared/file)
using credentials (username/pwd) and then process those
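One common approach (an assumption on my part, not something stated in this thread) is to mount the share with CIFS credentials so that ListFile/GetFile can read it as a local path. For example, with placeholder hostname, paths, and credentials:

```shell
# Mount the Windows share with credentials (all names are placeholders)
sudo mount -t cifs //hostname/shared /mnt/shared \
  -o username=myuser,password=mypwd
# Then point ListFile / GetFile at /mnt/shared
```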