Hi All,
If I add a new slave (start a TaskManager on a new host) that is not
included in conf/slaves, I see the log below printed continuously:
...Trying to register at JobManager...(attempt 147,
timeout: 3 milliseconds)
Is this normal? And was the new slave successfully added to the cluster?
Ufuk, Thanks for replying !
Aljoscha, can you please assist with the questions below?
Thanks,
Liron
-Original Message-
From: Ufuk Celebi [mailto:u...@apache.org]
Sent: Friday, December 15, 2017 3:06 PM
To: Netzer, Liron [ICG-IT]
Cc: user@flink.apache.org
Subject: Re: Flink State
Hi,
After upgrading to Flink 1.4, we encountered this exception:
Caused by: java.lang.NullPointerException: in com.viettel.big4g.avro.LteSession
in long null of long in field tmsi of com.viettel.big4g.avro.LteSession
at org.apache.avro.reflect.ReflectDatumWriter.write(ReflectDatumWriter.java:161)
On Tue, Dec 19, 2017 at 7:29 PM, Colin Williams <
colin.williams.seat...@gmail.com> wrote:
> Hi,
>
> I've been trying to update my flink-docker jobmanager configuration for
> flink 1.4. I think the system is shutting down after a leadership election,
> but I'm not sure what the issue is. My
Hi,
I've been trying to update my flink-docker jobmanager configuration for
flink 1.4. I think the system is shutting down after a leadership election,
but I'm not sure what the issue is. My configuration of the jobmanager
follows
jobmanager.rpc.address: 10.16.228.150
jobmanager.rpc.port: 6123
Hi,
I have a requirement to initialize a few Guava caches per JVM, along with
some static helper classes. I tried a few options but nothing worked, so I
need some help.
Thanks a lot.
1. Operator level static variables:
public static Cache loadingCache;
public void open(Configuration parameters)
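A minimal, self-contained sketch of the per-JVM initialization pattern behind option 1. A plain `ConcurrentHashMap` stands in for the Guava cache so the example runs without extra dependencies, and the class and method names are made up for illustration; in a real job you would build the Guava cache the same way inside the static holder and touch it from `RichFunction#open()`:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheHolder {
    // Initialization-on-demand holder: the JVM guarantees Holder is
    // initialized exactly once, on first access, per JVM/classloader,
    // so all operator instances in one TaskManager share the cache.
    private static class Holder {
        static final Map<String, String> CACHE = new ConcurrentHashMap<>();
    }

    public static Map<String, String> cache() {
        return Holder.CACHE;
    }

    public static void main(String[] args) {
        // Two "operator instances" in the same JVM see the same cache.
        Map<String, String> a = CacheHolder.cache();
        Map<String, String> b = CacheHolder.cache();
        a.put("k", "v");
        System.out.println(a == b);      // true
        System.out.println(b.get("k"));  // v
    }
}
```

Note this only gives one instance per JVM/classloader; with multiple TaskManagers (or user-code classloaders) you get one cache per classloader, not one per cluster.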
Hi Timo,
I am using the RocksDB state backend with an HDFS path. I have the
following Flink dependencies in my sbt build:
"org.slf4j" % "slf4j-log4j12" % "1.7.21",
"org.apache.flink" %% "flink-scala" % flinkVersion % "provided",
"org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided",
I have tried adding this jar both to the lib folder of Flink and to the
assembly jar as a dependency, but I get the same error.
On Tue, Dec 19, 2017 at 11:28 PM, Jörn Franke wrote:
> You need to put flink-hadoop-compatibility*.jar in the lib folder of your
> flink distribution or
Sorry about the incorrect link.
Here it is:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/linking.html
Best,
Gordon
On 19 December 2017 at 8:23:05 AM, Soheil Pourbafrani (soheil.i...@gmail.com)
wrote:
Hi, thanks for replying!
I can't see any link; the link is not recognized.
You need to put flink-hadoop-compatibility*.jar in the lib folder of your
flink distribution or in the classpath of your cluster nodes.
> On 19. Dec 2017, at 12:38, shashank agarwal wrote:
>
> Yes, it's working fine now; I'm no longer getting a compile-time error.
>
> But when I
Hi Shashank,
it seems that HDFS is still not in classpath. Could you quickly explain
how I can reproduce the error?
Regards,
Timo
Am 12/19/17 um 12:38 PM schrieb shashank agarwal:
Yes, it's working fine now; I'm no longer getting a compile-time error.
But when I try to run this on a cluster or
Thanks, Kostas.
On Tue, Dec 19, 2017 at 10:13 PM, Kostas Kloudas <
k.klou...@data-artisans.com> wrote:
> Hi Shailesh,
>
> The pattern operator does not use Flink’s windowing mechanism internally.
> Conceptually you may think that there are windows in both, and this is
> true, but there are
Hi Shailesh,
The pattern operator does not use Flink’s windowing mechanism internally.
Conceptually you may think that there are windows in both, and this is true,
but there are significant differences that prevent using Flink windowing for CEP.
This also implies that using triggers for
Hi,
Similar to the way they are exposed in the window operator, is it possible
to use triggers inside the pattern operator to fire partially matched
patterns (when certain events are very late and we want some level of
controlled early evaluation)?
I assume that Windows are used internally to
When the JobManager/TaskManager start up, they log which configuration
they are loading. Look for lines like
"Loading configuration property: {}, {}"
Do you find the required configuration as part of these messages?
– Ufuk
On Tue, Dec 19, 2017 at 3:45 PM, Plamen Paskov
I opened an issue for it: https://issues.apache.org/jira/browse/FLINK-8295
Timo
Am 12/19/17 um 11:14 AM schrieb Nico Kruber:
Hi Dominik,
nice assessment of the issue: in the version of the cassandra-driver we
use there is even a comment about why:
try {
// prevent this string from
Hi Shashank,
the exception you get is a known issue [0] that will be fixed with Flink
1.4.1. We improved the dependency management but it seems this causes
some problems with the Cassandra connector right now.
So as a workaround you can add netty (version 4.0) to your dependencies.
This
Hi,
I'm trying to enable externalized checkpoints like this:
env.enableCheckpointing(1000);
CheckpointConfig checkpointConfig = env.getCheckpointConfig();
checkpointConfig.enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
I have tried that by creating a class with a companion object:
@SerialVersionUID(507L)
@Table(keyspace = "neofp", name = "order_detail")
class OrderFinal(
    @BeanProperty var order_name: String,
    @BeanProperty var user: String) extends Serializable {
Yes, it's working fine now; I'm no longer getting a compile-time error.
But when I try to run this on a cluster or YARN, I get the following
runtime error:
org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could
not find a file system implementation for scheme 'hdfs'. The scheme is
not directly
Hi Dominik,
nice assessment of the issue: in the version of the cassandra-driver we
use there is even a comment about why:
try {
// prevent this string from being shaded
Class.forName(String.format("%s.%s.channel.Channel", "io", "netty"));
shaded = false;
} catch (ClassNotFoundException
Hi Shashank,
Scala case classes are treated as a special tuple type in Flink. If you
want to make a POJO out of it, just remove the "case" keyword and make
sure that the class is static (in the companion object).
I hope that helps.
Timo
Am 12/19/17 um 11:04 AM schrieb shashank agarwal:
Hi,
I have upgraded Flink from 1.3.2 to 1.4.0. I am using the Cassandra sink in
my Scala application. Before sinking, I was converting my Scala DataStream
to a Java stream and sinking it into Cassandra. I created a POJO class in
Scala like this:
@SerialVersionUID(507L)
@Table(keyspace = "neofp", name =
Thanks Gyula,
It kind of helped. I removed some KryoSerializers here and there and it
started working, but I don't fully understand why. I will try to understand
and reproduce it as soon as I have some spare time.
> On 17 Dec 2017, at 17:52, Gyula Fóra wrote:
>
> Hi,
> I
Hi Vishal,
it is not guaranteed that add() and onElement() receive the same object,
and even if they do it is not guaranteed that a mutation of the object in
onElement() has an effect. The object might have been serialized and stored
in RocksDB.
Hence, elements should not be modified in
It is not possible at the moment. FlinkCEP can handle only one pattern,
applied statically. There is a JIRA ticket for that:
https://issues.apache.org/jira/browse/FLINK-7129 .
> On 19 Dec 2017, at 10:10, Jayant Ameta wrote:
>
> I've a datastream of events, and another
I have a DataStream of events and another DataStream of patterns. The
patterns are provided by users at runtime, and they need to come via a
Kafka topic. I need to apply each of the patterns to the event stream using
Flink CEP. Is there a way to get a PatternStream from the DataStream when I
don't
Hi All,
Does Flink fully support MS Windows?
Is anyone running Flink on MS Windows in a production environment?
Regards,
Aviv
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/