https://cwiki.apache.org/confluence/display/FLINK/Powered+by+Flink
>
> 2016-10-05 14:31 GMT+02:00 Hironori Ogibayashi :
>>
>> Hi.
>>
>> Yes, I am really looking forward to the next major release.
>>
>> By the way, I got OK from our PR department about adding our company
>> name to the
ase, such as:
>
> - Change of parallelism via savepoints
> - Compatibility of savepoints across versions
>
> Greetings,
> Stephan
>
>
> On Tue, Oct 4, 2016 at 11:56 PM, Hironori Ogibayashi
> wrote:
>>
>> Thank you for the response.
>> Rega
-By
> wiki page [1] ?
>
> Thanks, Fabian
>
> [1] https://cwiki.apache.org/confluence/display/FLINK/Powered+by+Flink
>
> 2016-10-04 14:04 GMT+02:00 Hironori Ogibayashi :
>> Hello,
>>
>> Just for information.
>>
>> Last week, I have pre
title will also be published soon.
The use case itself might not be very interesting, but I think this is the
first Flink production use case in Japan made open to the public.
Thank you for the great software.
Regards,
Hironori Ogibayashi
[1] https://issues.apache.org/jira/browse/FLINK-4022
>
>
> On September 27, 2016 at 6:17:06 PM, Hironori Ogibayashi
> (ogibaya...@gmail.com) wrote:
>
> Hello,
>
> I want FlinkKafkaConsumer to follow Kafka topic/partition changes.
> This means:
> - When we add
FlinkKafkaConsumer follow topic/partition change?
Regards,
Hironori Ogibayashi
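(For reference: FLINK-4022 tracked exactly this request, and Flink releases after this thread let the Kafka consumer re-discover partitions when an opt-in interval is set on the consumer properties. A hedged sketch of those properties; the property name is per the post-FLINK-4022 connector and the broker address is a placeholder:)

```properties
# Standard Kafka client settings (placeholder addresses)
bootstrap.servers=kafka-1:9092
group.id=my-flink-job
# Opt-in: re-check topics/partitions every 30 seconds
# (consumer property introduced by FLINK-4022 in later Flink releases)
flink.partition-discovery.interval-millis=30000
```

With the interval unset, the consumer keeps the partition list it computed at job start, which matches the behavior asked about above.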
is not correct. The issue has just been fixed.
>
> You will have to wait for the next minor release 1.1.2 or build the
> 'release-1.1' Git branch.
>
> Best,
> Max
>
> On Wed, Aug 24, 2016 at 11:14 AM, Hironori Ogibayashi
> wrote:
>> Ufuk, Max,
>>
>>>> at java.net.InetSocketAddress.checkHost(InetSocketAddress.java:149)
>>>> at java.net.InetSocketAddress.<init>(InetSocketAddress.java:216)
>>>> at
>>>> org.apache.flink.client.program.ClusterClient.getJobManagerAddressFromConfig(ClusterClient.java:242)
>>>> ... 5 more
>>>> ---
>>>>
>>>> I am using JobManager HA and I set "recovery.mode: zookeeper",
>>>> recovery.zookeeper.quorum, and recovery.zookeeper.path.root in my
>>>> flink-conf.yaml.
>>>> So, the client should be able to get JobManager address from zookeeper.
>>>> If I explicitly specify JobManager address with -m option, it works.
>>>>
>>>> Am I missing something?
>>>>
>>>> Regards,
>>>> Hironori Ogibayashi
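(The three keys named above are all the client needs to locate the leading JobManager via ZooKeeper in this Flink version. A minimal flink-conf.yaml sketch, with placeholder hosts and paths:)

```yaml
# JobManager HA via ZooKeeper (Flink 1.0/1.1-era key names, as in this thread)
recovery.mode: zookeeper
# Placeholder quorum; use your own ZooKeeper hosts
recovery.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
# Root znode under which Flink stores leader information
recovery.zookeeper.path.root: /flink
```

If the client still cannot resolve the JobManager, `-m` overrides the lookup, as noted above; that it works with `-m` points at the client-side ZooKeeper configuration rather than the cluster.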
e to the to be released 1.1 (vote just passed, binaries
> are being uploaded) this will be set automatically for YARN. You can
> also specify it via the new CLI parameter -z (this sets
> recovery.zookeeper.path.root).
>
> Hope this helps.
>
> Ufuk
>
> On Thu, Aug 4, 2
multiple Flink cluster jobs on YARN, and want to use checkpointing or
JobManager HA, do I need to specify different paths for each cluster/job, or
does YARN handle this nicely?
Regards,
Hironori Ogibayashi
sion of the
> TumblingProcessingTimeWindows right now.
>
> I've opened a Jira issue for adding an offset setting to the built-in window
> assigners: https://issues.apache.org/jira/browse/FLINK-4282
>
> Cheers,
> Aljoscha
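(The offset setting proposed in FLINK-4282 amounts to shifting where windows are aligned on the time axis, e.g. to match a non-UTC timezone. A self-contained sketch of the alignment arithmetic, not the Flink window-assigner code itself; names are illustrative:)

```java
public class WindowOffset {
    // Start of the window containing `timestamp`, for tumbling windows of
    // length `windowSize` whose boundaries are shifted by `offset`
    // (all values in milliseconds, timestamp >= offset assumed).
    static long windowStart(long timestamp, long offset, long windowSize) {
        return timestamp - (timestamp - offset + windowSize) % windowSize;
    }

    public static void main(String[] args) {
        // 5-second windows, no offset: 12345 falls into [10000, 15000)
        System.out.println(windowStart(12_345L, 0L, 5_000L));
        // Same windows shifted by 1s: boundaries at 1000, 6000, 11000, ...
        // so 12345 falls into [11000, 16000)
        System.out.println(windowStart(12_345L, 1_000L, 5_000L));
    }
}
```

With `offset` fixed at 0 (the pre-FLINK-4282 behavior), boundaries are always aligned to epoch multiples of the window size, which is what motivated the issue.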
>
> On Tue, 26 Jul 2016 at 12:51 Hironori Ogibayashi
>
ound is as I've described, just restart jobmanager-5.
>
>
>
> On Wed, Jul 27, 2016 at 2:55 PM, Hironori Ogibayashi
> wrote:
>> Thank you so much for your quick response.
>> I am running Flink 1.0.3.
>>
>> I have attached jobmanager logs. The failover happened
how I can recover from
this situation? (restart JobManager?)
Regards,
Hironori Ogibayashi
my own WindowAssigner for this use case?
Thanks,
Hironori Ogibayashi
aConsumer.consumer) {", can you replace that by
> using the fair lock instead?
>
> If that solves it, we'll add that as a fix.
>
> Greetings,
> Stephan
>
>
> On Tue, Jul 5, 2016 at 9:24 AM, Hironori Ogibayashi
> wrote:
>>
>> Hi,
>>
>> S
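(Stephan's suggestion above is about lock fairness: a plain `synchronized` block is unfair, so a tight polling loop can re-acquire the monitor before a waiting thread, e.g. the checkpointing thread, ever gets in. A minimal generic-Java illustration of the `ReentrantLock` fairness flag, not the actual Flink consumer code:)

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    // A fair lock grants access to the longest-waiting thread, so a hot
    // loop cannot starve other acquirers the way a synchronized block can.
    static final ReentrantLock fairLock = new ReentrantLock(true);

    public static void main(String[] args) {
        System.out.println(fairLock.isFair());
        fairLock.lock();
        try {
            // critical section, e.g. one poll of the Kafka consumer
        } finally {
            fairLock.unlock();
        }
    }
}
```

Fairness costs some throughput, which is why it is opt-in; here the goal is bounding how long the checkpoint thread waits.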
Thanks :)
>
> On Thu, Jun 16, 2016 at 3:21 PM, Hironori Ogibayashi
> wrote:
>> Ufuk,
>>
>> Yes, of course. I will be sure to update when I get some more information.
>>
>> Hironori
>>
>> 2016-06-16 1:56 GMT+09:00 Ufuk Celebi :
>>> Hey Hironori,
>
>
> On Wed, Jun 15, 2016 at 2:48 PM, Hironori Ogibayashi
> wrote:
>> Kostas,
>>
>> Thank you for your advice. I have posted my question to the Kafka mailing
>> list.
>> I think the Kafka brokers are fine because there are no errors on the producer side with
>>
> Have you tried posting the problem also to the Kafka mailing list?
> Can it be that the kafka broker fails and tries to reconnect but does not
> make it?
>
> Kostas
>
> On Jun 14, 2016, at 2:59 PM, Hironori Ogibayashi
> wrote:
>
> Kostas,
>
> I have attached
stuck in the polling loop,
> although Flink polls with
> a timeout. This would normally mean that periodically it should release the
> lock for the checkpoints to go through.
>
> The logs of the task manager can help clarify why this does not happen.
>
> Thanks,
> Kostas
>
.
>
> Thanks,
> Kostas
>
>> On Jun 14, 2016, at 11:52 AM, Hironori Ogibayashi
>> wrote:
>>
>> Hello,
>>
>> I am running a Flink job which reads topics from Kafka and writes results
>> to Redis. I use FsStateBackend with HDFS.
>>
>> I n
Hello,
I am running a Flink job which reads topics from Kafka and writes results
to Redis. I use FsStateBackend with HDFS.
I noticed that taking a checkpoint takes several minutes and sometimes expires.
---
2016-06-14 17:25:40,734 INFO
org.apache.flink.runtime.checkpoint.CheckpointCoordinator -
C
Thank you for your response.
flink-spector looks really nice. I tried it but got some errors regarding
types, maybe because of the issue Alex mentioned.
I am looking forward to the new version.
Thanks,
Hironori.
2016-05-30 16:45 GMT+09:00 lofifnc :
> Hi,
>
> Flinkspector is indeed a good choice to
)")
}
}
---
But when I ran the test, I got this error:
java.lang.AssertionError: Wrong number of elements result expected:<2>
but was:<0>
It looks like the test finishes before the end of the timeWindow, but I do
not know how to fix it.
Any advice would be appreciated.
Thanks,
Hironori Ogibayashi
> Cheers,
> Till
>
> On Tue, Apr 26, 2016 at 10:16 AM, Hironori Ogibayashi
> wrote:
>>
>> Hello,
>>
>> I am using GlobalWindow and my custom trigger (similar to
>> ContinuousProcessingTimeTrigger).
>> In my trigger I want to control the TriggerResult
able
to handle the event in onElement(). I need to filter that event
afterward so that it does not affect the computation result.
Thanks,
Hironori Ogibayashi
'll get it in the 1.0.2 release that we are just about to release.
>
> Cheers,
> Aljoscha
>
> On Wed, 13 Apr 2016 at 07:25 Hironori Ogibayashi
> wrote:
>>
>> Hello,
>>
>> I am trying to use HyperLogLog in
>> stream-lib (https://github.com/addthis/stream-lib)
Hello,
I am trying to use HyperLogLog in
stream-lib(https://github.com/addthis/stream-lib)
in my Flink streaming job, but when I submit the job, I got the
following error. My Flink version is 1.0.1.
---
org.apache.flink.client.program.ProgramInvocationException: The
program execution failed: Job
k dashboard. For this I would suggest to
> disable chaining, so that every operator is run in an isolated task:
>
> env.disableOperatorChaining();
>
> On Thu, 7 Apr 2016 at 05:11 Hironori Ogibayashi
> wrote:
>>
>> I tried RocksDB, but the result was almost the same.
distinct value).
I think copying the whole 250MB (or more) file to HDFS in every checkpoint
will be heavy, so I will try storing the distinct values
in an external datastore (e.g. Redis).
Also, when incremental snapshots get implemented, I want to try them.
Regards,
Hironori
2016-04-05 21:40 GMT+09:00 Hironori Ogibayashi:
will be done while data processing keeps running (asynchronous snapshot).
>
> As to incremental snapshots. I'm afraid this feature is not yet implemented
> but we're working on it.
>
> Cheers,
> Aljoscha
>
> On Tue, 5 Apr 2016 at 14:06 Hironori Ogibayashi
> wrote:
Hello,
I am trying to implement windowed distinct count on a stream. In this
case, the state
has to hold all distinct values in the window, so it can be large.
In my test, if the state size becomes about 400MB, checkpointing takes
40sec and spends most of the TaskManager's CPU.
Are there any good ways to
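(The state for an exact windowed distinct count is essentially a per-window set, so it grows linearly with cardinality; that is why a cardinality sketch such as the stream-lib HyperLogLog mentioned elsewhere in this thread, or an external store, helps. A toy plain-Java illustration of the exact approach, not Flink state handling; names are illustrative:)

```java
import java.util.HashSet;
import java.util.Set;

public class DistinctCountState {
    // Exact windowed distinct count: the "state" must remember every
    // value seen in the window, so its size tracks the cardinality,
    // not the event count.
    static int distinctCount(String[] events) {
        Set<String> windowState = new HashSet<>();
        for (String e : events) {
            windowState.add(e);
        }
        return windowState.size();
    }

    public static void main(String[] args) {
        String[] events = {"a", "b", "a", "c", "b", "a"};
        // 6 events but only 3 distinct values held in state
        System.out.println(distinctCount(events));
    }
}
```

A HyperLogLog replaces this set with a few KB of registers at the cost of a small, bounded counting error, which is the trade-off being weighed in this thread.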
so the trigger did not fire.
Thanks a lot for your help!
Regards,
Hironori
2016-04-01 0:15 GMT+09:00 Hironori Ogibayashi :
> Aljoscha,
>
> Thank you. That change looks good. I will try.
>
> Regards,
> Hironori
>
> 2016-03-31 22:20 GMT+09:00 Aljoscha Krettek :
>> Oh
imer
> return TriggerResult.FIRE;
> }
> return TriggerResult.CONTINUE;
> }
>
> What do you think? This should have the behavior that it continuously fires,
> but only if new elements arrive.
>
> Cheers,
> Aljoscha
>
> On Thu, 31 Mar 2016 at 14:46 Hironori Ogibayashi
I'm currently thinking
> about how to make the triggers more intuitive since right now they are not
> very easy to comprehend because the names can also be misleading.
>
> Cheers,
> Aljoscha
>
> On Wed, 30 Mar 2016 at 14:33 Hironori Ogibayashi
> wrote:
>>
>> Hi
Hi
I noticed that ContinuousProcessingTimeTrigger sometimes does not fire.
I asked a similar question before and applied this patch:
https://github.com/apache/flink/commit/607892314edee95da56f4997d85610f17a0dd470#diff-19bbcb3ea1403e483327408badfcd3f8
It seemed to work, but I still have strange behavior
mail-archives.apache.org/mod_mbox/flink-dev/201603.mbox/%3c16991435-118a-403b-b766-634908325...@apache.org%3e
>
> I created an associated doc to keep track of my proposed changes:
> https://docs.google.com/document/d/1Xp-YBf87vLTduYSivgqWVEMjYUmkA-hyb4muX3KRl08/edit?usp=sharing
>
>
firing and purging on time and also has
> the continuous triggering at earlier times.
>
> Let us know if you need more information about this. Kostas Kloudas also
> recently looked into writing custom Triggers, so maybe he has some material
> he could give to you.
>
> Cheers
Hello,
I have a question about TumblingProcessingTimeWindow and
ContinuousProcessingTimeTrigger.
The code I tried is below. It outputs the distinct count of the words;
counts are printed every 5 seconds and the window is reset every 1 minute.
---
val input =
env.readFileStream(fileName,100,FileMonit