our own. POS should be good, and if op_ts does
>>> not work for you, why not generate your own timestamp using POS? (the now()
>>> expression). You could also add another token that identifies the transaction
>>> sequence number and order ops by POS and then by transaction sequence number.
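>>> A minimal sketch of that idea in NiFi Expression Language -- now() and
>>> toNumber() are standard EL functions; generating the timestamp yourself
>>> at processing time looks like:

```
${now():toNumber()}
```

>>> This yields epoch milliseconds at processing time, which could be stored
>>> alongside the POS and transaction sequence number attributes for ordering.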
> "AWDT"):toNumber()}
>
>
>
> Lastly, depending on how you are using the result of toNumber(), keep in
> mind that some systems expect *seconds since epoch* (not *milliseconds*,
> which toNumber() outputs) for a Unix timestamp.
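> If the downstream system wants seconds, one option (assuming NiFi's
> standard EL math functions are available) is to divide the millisecond
> value:

```
${now():toNumber():divide(1000)}
```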
>
>
>
> Cheers,
> Kev
Hi Guys,
I am having trouble converting an ISO 8601 timestamp to a Unix timestamp. Here
is what I have tried:
current_ts: 2018-11-11T00:17:27.937000
Using updateAttribute, I have configured the below property
${current_ts:toDate("-MM-dd'T'HH:mm:ss.SS"):toNumber()}
This gives the output value: 15
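For what it's worth -- this is an assumption on my part, not something confirmed in the thread -- the pattern above is missing the year field, and since toDate() delegates to Java's SimpleDateFormat, the fractional part is read as milliseconds, so six microsecond digits can over-parse. A sketch that trims the attribute to three fractional digits before parsing:

```
${current_ts:substring(0, 23):toDate("yyyy-MM-dd'T'HH:mm:ss.SSS"):toNumber()}
```

With the sample value above, substring(0, 23) keeps 2018-11-11T00:17:27.937 before handing it to toDate().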
n enable on your message.
>
>
> https://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_fileformats.htm#GWUAD735
>
> Logdump is an awesome tool to look into your trail files and see
> what's in there.
>
> Boris
>
>
>
> On Mon, Nov 5, 2018 at 3:07 AM
Hi Timothy ,
Hope you are doing well. We have been using your data flow (
https://community.hortonworks.com/content/kbentry/155527/ingesting-golden-gate-records-from-apache-kafka-an.html#
)
with slight modifications to store the data in HBase. To version the rows
we have been using the op_ts of Golden
Hi,
Please let me know if there are any plans to fix the issue (
https://issues.apache.org/jira/browse/NIFI-4071) with ConvertJSONToSQL so
that I can work with Hive?
Regards,
Faisal
; exceeds rate of delivery then delivery must be made faster or data must be
> expired at some threshold age.
>
> thanks
>
> On Thu, Jul 12, 2018, 9:34 PM Faisal Durrani wrote:
>
>> Hi Koji,
>>
>> I moved on to another cluster of NiFi nodes, did the same configuration
such as a freaky network, or
> manually stopping the port or RPG while some transaction is being
> processed. I don't think it is a configuration issue, because NiFi was
> able to initiate S2S communication.
>
> Thanks,
> Koji
>
> On Fri, Jul 6, 2018 at 4:16 PM, Faisal Durrani
>
nifi.remote.input.http.transaction.ttl=60 sec
nifi.remote.input.host=
Please let me know if there is any configuration changes that we need to make.
On Fri, Jul 6, 2018 at 9:48 AM Faisal Durrani wrote:
> Hi Koji ,
>
> Thank you for your reply. I updated the logback.xml and ran the test
distributed, I'd lower the Remote Port
> batch settings at the sending side.
> Then try to find a bottleneck in the downstream flow. Increasing
> concurrent tasks at such a bottleneck processor can help increase
> throughput in some cases. Adding more nodes will also help.
>
> Thanks,
>
Hi, I've got two questions.
1. We are using a Remote Process Group with the RAW transport protocol to
distribute the data across a four-node cluster. I see the nifi-app log has a
lot of instances of the below error:
1. o.a.nifi.remote.SocketRemoteSiteListener Unable to communicate
with remote instance Pe
>>
>> NiFi has processors that correspond to the Kafka version, and generally
>> it is best to use the client version that matches the broker version.
>>
>> So you should be using ConsumeKafka_1_0 which uses the Kafka 1.0.0 client
>> to go with the 1.0.0 broker.
>>
>> The ConsumeKafka processor uses Kafka client 0.9.0.
>>
>>
>> On Jun 20, 2018, at 9:53 PM, Faisal Durrani wrote:
>>
>> Kafka (1.0.0.3.1.0)
>>
>>
>>
very badly as soon
> as we moved to a multi-node setup.
> Can you describe both the Kafka setup and the NiFi setup?
>
> RE: In theory that would mean that there should be 200 tasks/threads
> running in parallel
> I'm not sure I follow this -- are you basing this on the number of topics?
urces (tables)? In theory that would mean
that there should be 200 tasks/threads running in parallel.
On Thu, Jun 14, 2018 at 11:24 PM Faisal Durrani wrote:
> hi Andrew,
> The Kafka broker is hosted on a single node and this particular topic has
> just 1 partition. The consume kafk
Let me
know if you have faced a similar situation.
On Thu, 14 Jun 2018, 11:10 p.m. Andrew Psaltis,
wrote:
> Hi Faisal,
> How many partitions are there for that TEST_KAFKA_TOPIC topic?
>
> On Thu, Jun 14, 2018 at 9:06 PM Faisal Durrani
> wrote:
>
>> Hi Mark, The heap siz
top-right menu you can go to Controller Settings. The default for
> "Maximum Timer Driven
> Thread Count" is 10, but you'll definitely want to increase that for your
> use case.
>
> Also, how many cores does the VM/node that NiFi is running on have?
>
> Thank
I'd increase the flow controller thread pool size by quite a bit as well.
>
> On Sun, Jun 10, 2018, 10:13 PM Faisal Durrani wrote:
>
>> Hi,
>>
>> Yes, the Kafka service is hosted on a single server while NiFi is on a
>> cluster of 4 servers. I'm not entirely sure w
> Boris
>
> On Mon, Jun 11, 2018 at 10:36 PM Faisal Durrani
> wrote:
>
>> Hi Andrew,
>>
>> Thank you for your suggestion. We are using the timestamp property of the
>> PutHbase processor to enforce the order. This timestamp is extracted from
>> the golde
g/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.6.0/org.apache.nifi.processors.standard.EnforceOrder/index.html
>
> Thanks,
> Andrew
>
>
>
> On Mon, Jun 11, 2018 at 11:11 AM Faisal Durrani
> wrote:
>
>> Hi Andrew,
>> We are receiving the golden gate transactions from Kafka wh
ans?
>
> Thanks,
> Andrew
>
> On Mon, Jun 11, 2018 at 9:43 AM Faisal Durrani
> wrote:
>
>> Is there a recommended way to ensure the row counts from tables in the
>> source (Oracle) are consistent with those of the target tables in HBase
>> (data lake)? We are using Ni
que instance of the ConsumeKafka proc for each topic
> then, right?
>
> I'd increase the flow controller thread pool size by quite a bit as well.
>
> On Sun, Jun 10, 2018, 10:13 PM Faisal Durrani wrote:
>
>> Hi,
>>
>> Yes, the Kafka service is hosted on a single server whi
er
> topic per partition. How many threads for that processor?
>
> Finally, consider using ConsumeKafkaRecord, and if you are using Kafka 1 or
> newer use the latest processor.
>
> thanks
>
> On Sun, Jun 10, 2018, 9:21 PM Faisal Durrani wrote:
>
>> Doe
Is there a recommended way to ensure the row counts from tables in the source
(Oracle) are consistent with those of the target tables in HBase (data lake)?
We are using NiFi which receives the Golden Gate messages and then, by
using different processors, we store the transactions in HBase, so
essentially th
Does anyone know about this error from Kafka? I am using NiFi 1.5.0 with
the ConsumeKafka processor.
ConsumeKafka[id=34753ed3-9dd6-15ed-9c91-147026236eee] Failed to retain
connection due to No current assignment for partition TEST_KAFKA_TOPIC:
This is the first time we are testing NiFi to consume f