Hi Andrey,
I agree with Elias. This would be the most natural behavior. I wouldn't add
additional slightly different notions of time to Flink.
As I can also see a use case for the combination
* Timestamp stored: Event timestamp
* Timestamp to check expiration: Processing Time
we could (maybe
For the HA implementation, is ZooKeeper used only for leader election, or does it
also store data relevant for switching to a backup server?
Boris Lublinsky
FDP Architect
boris.lublin...@lightbend.com
https://www.lightbend.com/
Hi,
Your POJO should implement the Serializable interface.
Otherwise it is not considered serializable.
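A minimal sketch of Fabian's point (the class and field names are hypothetical): a plain POJO that implements Serializable, verified with a Java serialization round trip.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical POJO: public no-arg constructor, public fields, and it
// implements Serializable so Java serialization (and serializers that
// fall back to it) can handle instances of it.
public class SensorReading implements Serializable {
    private static final long serialVersionUID = 1L;

    public String sensorId;
    public double value;

    public SensorReading() {}  // no-arg constructor, required for POJO types

    public SensorReading(String sensorId, double value) {
        this.sensorId = sensorId;
        this.value = value;
    }

    public static void main(String[] args) throws Exception {
        // Round-trip through Java serialization to confirm it works.
        SensorReading in = new SensorReading("s1", 42.0);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(in);
        SensorReading out = (SensorReading) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(out.sensorId + "=" + out.value);  // s1=42.0
    }
}
```

Without `implements Serializable`, the `writeObject` call above fails with a NotSerializableException.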
Best, Fabian
Papadopoulos, Konstantinos
schrieb am Mi., 3. Apr. 2019, 07:22:
> Hi Chesnay,
>
>
>
> Thanks for your support. ThresholdAcvFact class is a simple POJO with the
> following
Hi,
Konstantin is right.
reinterpretAsKeyedStream only works if you call it on a DataStream that
was keyBy'ed before (with the same parallelism).
Flink cannot reuse the partitioning of another system like Kafka.
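A sketch of the supported pattern (stream names and key selector are illustrative; `reinterpretAsKeyedStream` lives in `DataStreamUtils` and is marked experimental, so verify against your Flink version):

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamUtils;
import org.apache.flink.streaming.api.datastream.KeyedStream;

// 'events' is an illustrative stream; the keyBy + map keeps Flink's own
// key-group partitioning intact with unchanged parallelism.
DataStream<Tuple2<String, Integer>> partitioned =
        events.keyBy(t -> t.f0)
              .map(t -> t)
              .returns(Types.TUPLE(Types.STRING, Types.INT));

// Safe: reinterpret a stream that Flink itself partitioned by this key,
// avoiding a second shuffle.
KeyedStream<Tuple2<String, Integer>, String> keyed =
        DataStreamUtils.reinterpretAsKeyedStream(
                partitioned,
                (KeySelector<Tuple2<String, Integer>, String>) t -> t.f0);

// NOT safe: calling reinterpretAsKeyedStream directly on a Kafka source.
// Kafka's partitioning does not match Flink's key-group assignment.
```
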
Best, Fabian
Adrienne Kole schrieb am Do., 4. Apr. 2019, 14:33:
> Thanks a lot
+1 to drop it
Previously released versions are still available and compatible with newer
Flink versions anyway.
On Fri, Apr 5, 2019 at 2:12 PM Bowen Li wrote:
> +1 for dropping elasticsearch 1 connector.
>
> On Wed, Apr 3, 2019 at 5:10 AM Chesnay Schepler
> wrote:
>
>> Hello everyone,
>>
>>
Hi Yantao,
Thanks, I have also commented in the original JIRA.
https://issues.apache.org/jira/browse/FLINK-8801?focusedCommentId=16807691&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel
@Nico @Till Do you mind reviewing whether an alternative fix would be needed? If
so, I can create a
+1 for dropping elasticsearch 1 connector.
On Wed, Apr 3, 2019 at 5:10 AM Chesnay Schepler wrote:
> Hello everyone,
>
> I'm proposing to remove the connector for elasticsearch 1.
>
The connector is used significantly less than the more recent versions (versions 2
and 5 are downloaded 4-5x more often), and hasn't
Hi,
We are using Flink 1.7.1, running as a Docker container. The state backend is
Ceph. The problem is that the JobManager exits on startup with Docker exit code 0
(i.e. Completed). The only error/exception that I see is given below. Please share
your thoughts.
2019-04-05 12:14:04,314 INFO
I guess it has something to do with the parallelism of the cluster. When
I set "taskmanager.numberOfTaskSlots" to 1 and do not use
"setParallelism()", I can see the logs. And in Eclipse I can see the logs.
Does anybody have a clue?
Thanks
--
Felipe Gutierrez
skype:
What's the best way to enable the JMX reporter while I am developing an
application in an IDE? The reason is that I would like to experiment with adding
detailed metrics to my pipelines (and also see what the standard operators
provide) without having to deploy to a regular cluster.
Thanks,
Frank
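One configuration sketch for Frank's question: a local environment started from the IDE can be given a Configuration carrying the metrics-reporter settings (key names and the reporter class are taken from the Flink metrics documentation; verify them against your Flink version):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Configuration sketch: enable the JMX reporter for a local (IDE) run.
Configuration conf = new Configuration();
conf.setString("metrics.reporter.jmx.class",
               "org.apache.flink.metrics.jmx.JMXReporter");
// Optionally pin the JMX port range so JConsole/VisualVM can find it:
conf.setString("metrics.reporter.jmx.port", "8789");

// A local environment created with this Configuration uses the reporter;
// parallelism 1 is illustrative.
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.createLocalEnvironment(1, conf);
```

This requires the flink-metrics-jmx dependency on the classpath for versions where the JMX reporter is not bundled.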
no. It did not work.
I also created a Sink that is an MQTT publisher (
https://github.com/felipegutierrez/explore-flink/blob/master/src/main/java/org/sense/flink/mqtt/MqttSensorPublisher.java)
and in Eclipse it works. When I deploy my job on my Flink cluster it
does not work. It might be
Hi Juan,
thanks for reporting this issue. If you could open an issue and also
provide a fix for it, then this would be awesome. Please let me know the
ticket number so that I can monitor it and give your PR a review.
Cheers,
Till
On Fri, Apr 5, 2019 at 5:34 AM Juan Gentile wrote:
> Hello!
>
>
Which Flink version are you using? The DISABLED value has not been
working since 1.5, so you may be stuck with uploading the app jar every
time.
On 04/04/2019 11:35, 徐涛 wrote:
Hi Experts,
When submitting a Flink program to YARN, the app jar (a fat jar, about 200 MB,
with Flink
> I tried using [ keyBy(KeySelector, TypeInformation) ]
What was the result of this approach?
On 03/04/2019 17:36, Vijay Balakrishnan wrote:
Hi Tim,
Thanks for your reply. I am not seeing an option to specify a
.returns(new TypeHint<Tuple5<String, String, String, String, String>>(){}) with KeyedStream
This sounds like an OutputStream flushing issue. Try calling
"System.out.flush()" now and then in your sink and report back.
On 04/04/2019 18:04, Felipe Gutierrez wrote:
Hello,
I am studying the parallelism of tasks on DataStream. So, I have
configured Flink to execute on my machine
Hello!
We are having a small problem while trying to deploy Flink on Mesos using
Marathon. In our setup of Mesos we are required to specify the amount of disk
space we want to have for the applications we deploy there.
The current default value in Flink is 0, and it's currently not
Hi,
I keep getting the exception
"org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: Too many
ongoing snapshots. Increase kafka producers pool size or decrease number of
concurrent checkpoints."
I think that DEFAULT_KAFKA_PRODUCERS_POOL_SIZE is 5 and I need to increase
this size.
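Two configuration sketches matching the two remedies the exception message suggests (topic name, properties, and the environment variable are illustrative; the constructor overload taking a pool size exists on the 0.11/universal producer, but verify against your connector version):

```java
// Option 1: cap concurrent checkpoints so fewer snapshots are ongoing
// at the same time ('env' is an existing StreamExecutionEnvironment).
env.enableCheckpointing(60_000);
env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);

// Option 2: give the exactly-once producer a larger transaction pool
// than the default of 5 (DEFAULT_KAFKA_PRODUCERS_POOL_SIZE).
FlinkKafkaProducer011<String> producer = new FlinkKafkaProducer011<>(
        "my-topic",                                          // illustrative topic
        new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
        producerProps,                                       // Kafka Properties
        FlinkKafkaProducer011.Semantic.EXACTLY_ONCE,
        10);                                                 // pool size > 5
```
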