Ok, I've got "split" working, but that still leaves me with the dilemma: I
have to write multiple rows for "mylist" into the same table... Note that I
am working with a db (Phoenix Query Server) that doesn't support "batching"
of multiple records in one insert the way MySQL does. Any advice?
On Wed, Sep 5,
Hello,
Any user-defined properties in the processor should be passed along to
Kafka, so you should be able to add a new property with the name
sasl.mechanism and the value PLAIN; without it, the processors will assume
GSSAPI.
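For reference, here is a Groovy sketch of the client-side configuration this
amounts to; the broker address is a placeholder, and NiFi builds the
equivalent configuration internally from the processor properties:

def props = new Properties()
props.put('bootstrap.servers', 'broker:9092')   // placeholder address
props.put('security.protocol', 'SASL_SSL')
props.put('sasl.mechanism', 'PLAIN')            // the dynamic property to add
// without sasl.mechanism, the Kafka client defaults to GSSAPI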
On Wed, Sep 5, 2018 at 9:55 PM João Henrique Freitas wrote:
Hi James,
I followed the same sequence of steps that you described.
When the kafka processor starts I see this:
2018-09-05 22:44:02,999 INFO [Timer-Driven Process Thread-1]
o.a.k.clients.producer.ProducerConfig ProducerConfig values:
acks = 0
batch.size = 16384
bootstrap.servers =
Thanks Dan. The blogs encourage us to keep building.
Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4 BACE 3C6E F65B 2F7D EF69
Heya Andy,
yes, that seems legit...we'll make it work on our side...
Keep up the awesome work on NiFi, it powers all of our ETL here now :)
Dano
On Wed, Sep 5, 2018 at 5:14 PM Andy LoPresto wrote:
Dan,
Does the proposal I submitted meet your requirements?
Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4 BACE 3C6E F65B 2F7D EF69
We're using it as well, in the same/similar fashion as being discussed in
the thread...
Dano
On Wed, Sep 5, 2018, 10:07 AM Brandon DeVries wrote:
I've not tried this myself, but once you have a working JAAS config
(from
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs#send-and-receive-messages-with-kafka-in-event-hubs),
set the corresponding protocol and mechanism properties in the NiFi
processor.
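For what it's worth, the quickstart's client settings boil down to something
like this Groovy sketch (the namespace and connection string are
placeholders, and this is a plain Kafka-client view rather than anything
NiFi-specific):

def props = new Properties()
props.put('bootstrap.servers', 'mynamespace.servicebus.windows.net:9093')   // placeholder
props.put('security.protocol', 'SASL_SSL')
props.put('sasl.mechanism', 'PLAIN')
props.put('sasl.jaas.config',
        'org.apache.kafka.common.security.plain.PlainLoginModule required ' +
        'username="$ConnectionString" password="<your connection string>";')

These are the values to mirror in the processor's SASL/SSL properties.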
I would suggest using a more convenient way to achieve the same result:
split the array into separate flow files, then use "EvaluateJsonPath" to
extract all the values into attributes (in case you know the keys in
advance). If you don't, you can first use JoltTransformJSON to generate
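As an example of the EvaluateJsonPath step, with Destination set to
flowfile-attribute you would add one dynamic property per known key, along
these lines (the attribute names below are just examples):

mylist.id = $.id
mylist.info = $.info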
For PutSQL you have to generate a SQL statement.
For your previous question, I suggested how to make that fork to separate
the root attributes from the array, and how to break the array into separate
flow files (one flow file per array element).
Now, having single elements, you can generate INSERT/UPDATE SQL statements.
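As a hedged sketch of that last step (Phoenix uses UPSERT; the table and
column names come from the example JSON, everything else is an assumption),
an ExecuteScript (Groovy) body could set PutSQL's sql.args.* attributes per
element:

import groovy.json.JsonSlurper
import java.nio.charset.StandardCharsets
import org.apache.nifi.processor.io.InputStreamCallback
import org.apache.nifi.processor.io.OutputStreamCallback

def flowFile = session.get()
if (flowFile != null) {
    def json = null
    // parse the single-element JSON content, e.g. {"id":10,"info":"2am-3am"}
    session.read(flowFile, { inputStream ->
        json = new JsonSlurper().parse(inputStream)
    } as InputStreamCallback)

    // PutSQL takes the statement from the content and the bind values from
    // sql.args.N.* attributes; 4 = java.sql.Types.INTEGER, 12 = VARCHAR
    flowFile = session.putAttribute(flowFile, 'sql.args.1.type', '4')
    flowFile = session.putAttribute(flowFile, 'sql.args.1.value', json.id as String)
    flowFile = session.putAttribute(flowFile, 'sql.args.2.type', '12')
    flowFile = session.putAttribute(flowFile, 'sql.args.2.value', json.info as String)

    flowFile = session.write(flowFile, { outputStream ->
        outputStream.write('UPSERT INTO mylist (id, info) VALUES (?, ?)'.getBytes(StandardCharsets.UTF_8))
    } as OutputStreamCallback)
    session.transfer(flowFile, REL_SUCCESS)
}

Each flow file then carries a single-row statement, which sidesteps the
MySQL-style multi-record INSERT syntax that Phoenix lacks.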
A connection timeout maps to the 504 response code.
InvokeHTTP will send the flow file to the "Retry" relationship. You can use
RouteOnAttribute to check the invokehttp.status.code attribute for the
appropriate code and then do your failure handling. You might also set
"Always Output Response" to "true".
On Wed, Sep 5, 2018
I need the "root" object to be saved into the "root" table and the array
"mylist" of contained objects to be saved into the "mylist" table:
{
  "id": 3,
  "name": "ROOT",
  "mylist": [
    { "id": 10, "info": "2am-3am" },
    { "id": 11, "info": "3AM-4AM" },
    { "id": 12, "info": "4am-5am" }
  ]
}
Joe and Brandon,
Thanks for your input here. I agree that changing the behavior of an existing
processor (that is used in people’s flows) is a breaking change and probably
requires a major release, which is why I didn’t do that in the PRs. As written
today, they are fully backward-compatible.
I vote to keep it for backward compatibility.
On Wed, 5 Sep 2018 at 13:33 Brandon DeVries wrote:
Mike,
We don't use it with Elasticsearch.
Fundamentally, it feels like the problem is that this change would break
backwards compatibility, which would require a major version bump. So, in
lieu of that, the options are probably 1) use a different name or 2) put
the new functionality in
Hi
Is there a way to generate a failure flow file for connection issues, like a
connection timeout, for the InvokeHTTP processor?
Thanks
Saloni Udani
Brandon,
What processor do you use it for in that capacity? If it's an ElasticSearch
one, we can look into ways to bring this functionality into that bundle so
Andy can refactor.
Thanks,
Mike
On Wed, Sep 5, 2018 at 12:07 PM Brandon DeVries wrote:
Andy,
We use it pretty much the way Joe does... to create a unique composite key. It
seems as though that shouldn't be a difficult functionality to add.
Possibly, you could flip your current dynamic key/value properties. Make
the key the name of the attribute you want to create, and the value is the
Hello!
I'm exploring Azure Event Hubs with Kafka support. I know it's in preview,
but I would like to know how to use PublishKafka with this configuration:
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs
I don't know how to configure Kafka
Hey Andy,
We're currently using the HashAttribute processor. The use case is that we
have various events that come in, but sometimes those events are just
updates of previous ones. We store everything in ElasticSearch. So for
certain events, we'll calculate a hash based on a couple of attributes
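Conceptually it is something like the following Groovy sketch; the attribute
names are made up, and HashAttribute's actual configuration and algorithm
differ:

import java.security.MessageDigest

def attrs = [eventType: 'update', userId: '42']   // hypothetical attributes
def bytes = MessageDigest.getInstance('SHA-256')
        .digest(attrs.values().join('|').getBytes('UTF-8'))
println bytes.encodeHex().toString()   // stable id for the Elasticsearch document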
> How can I parse it to name/value pairs in groovy script?
I would recommend getting the Groovy binary distribution (we use 2.4.X) and
experimenting with that. Aside from us throwing in a few of the NiFi APIs,
it's a standard Groovy environment. You'll flatten the learning curve on
writing these scripts.
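As a starting point, a minimal Groovy sketch using the array value from the
original mail:

import groovy.json.JsonSlurper

def mylist = '[{"id":10,"info":"2am-3am"},{"id":11,"info":"3AM-4AM"},{"id":12,"info":"4am-5am"}]'
new JsonSlurper().parseText(mylist).each { element ->
    element.each { name, value -> println "$name = $value" }
}

In NiFi you would read the attribute via flowFile.getAttribute('mylist')
instead of hard-coding the string.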
I have a JSON array attribute:
mylist: [{"id":10,"info":"2am-3am"},{"id":11,"info":"3AM-4AM"},{"id":12,"info":"4am-5am"}]
How can I parse it into name/value pairs in a Groovy script?
Thanks,
Hi,
In our big data environment, one of the architectural principles is to
schedule jobs with Azure Automation (runbooks). A scheduling database is used
to decide when to start which jobs. NiFi flows, however, are currently being
scheduled in NiFi itself. We're looking for a good approach to move NiFi
flow scheduling there as well.
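One possible direction, sketched under the assumption that the runbook can
reach the NiFi REST API (the host, port, and process-group id below are
hypothetical, and a secured cluster would also need an Authorization header):

def pgId = 'a1b2c3d4-0123-1000-abcd-ef0123456789'   // hypothetical id
def conn = new URL("http://nifi-host:8080/nifi-api/flow/process-groups/${pgId}").openConnection()
conn.requestMethod = 'PUT'
conn.doOutput = true
conn.setRequestProperty('Content-Type', 'application/json')
conn.outputStream.withWriter { it << """{"id":"${pgId}","state":"RUNNING"}""" }
println "NiFi responded: ${conn.responseCode}"

The same call with "state":"STOPPED" stops the group, so an external
scheduler can start and stop flows on its own timetable.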