Hi,
you have to make sure that the Flink classes are contained in your class
path.
Either add the flink-dist jar from the binary distribution to your class
path, or use maven to build the backend.jar as a fat jar.
Why are you generating a java class from your dataflows?
Isn't it easier to just ca
connectors.kafka.FlinkKafkaConsumer.<init>(FlinkKafkaConsumer.java:280)
> at
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer081.<init>(FlinkKafkaConsumer081.java:55)
>
> On Fri, Sep 18, 2015 at 11:02 AM, Jakob Ericsson wrote:
>
>> That will work. We have some utility classes for exposing the ZK-info.
>>
Hi Jakob,
currently, it's not possible to subscribe to multiple topics with one
FlinkKafkaConsumer.
So for now, you have to create a FlinkKafkaConsumer for each topic, so you'll
end up with 50 sources.
As soon as Kafka releases the new consumer, it will support subscribing to
multiple topics (I think even wit
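For illustration, a minimal sketch of the one-consumer-per-topic workaround
(topic names, connection properties, and package paths are placeholders and
may differ between releases):

import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer081;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class MultiTopicJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("zookeeper.connect", "localhost:2181"); // placeholder
        props.setProperty("group.id", "my-group");                // placeholder

        List<String> topics = Arrays.asList("topic-1", "topic-2"); // ... up to 50

        // One FlinkKafkaConsumer081 per topic; union all resulting streams.
        DataStream<String> union = null;
        for (String topic : topics) {
            DataStream<String> stream = env.addSource(
                    new FlinkKafkaConsumer081<String>(topic,
                            new SimpleStringSchema(), props));
            union = (union == null) ? stream : union.union(stream);
        }

        union.print();
        env.execute("multi-topic job");
    }
}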
Hey Martin,
I don't think anybody used Google Cloud Pub/Sub with Flink yet.
There are no tutorials for implementing streaming sources and sinks, but
Flink has a few connectors that you can use as a reference.
For the sources, you basically have to extend RichSourceFunction (or
RichParallelSourceFu
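As a rough sketch (the Pub/Sub client call is a placeholder, and the exact
source interface changed between releases; this follows the SourceContext
style):

import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext;

public class PubSubSource extends RichSourceFunction<String> {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        // open() (from the Rich* hierarchy) would set up the Pub/Sub client.
        while (running) {
            String message = fetchNextMessage(); // placeholder for the client call
            ctx.collect(message);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    private String fetchNextMessage() {
        return "message"; // stand-in; a real source pulls from Pub/Sub here
    }
}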
I don't think it's working.
According to the Kafka documentation (
https://kafka.apache.org/documentation.html#upgrade):
0.8, the release in which we added replication, was our first
> backwards-incompatible release: major changes were made to the API,
> ZooKeeper data structures, and protocol, and co
> gwenhael.pasqui...@ericsson.com wrote:
> >
> > Thanks,
> >
> > In the meantime we'll go back to 0.9.0 :)
> >
> > From: Robert Metzger [mailto:rmetz...@apache.org]
> > Sent: Thursday, September 10, 2015 16:49
> > To: user@flink.apache.org
> > Su
Hi Gwen,
sorry that you ran into this issue. The implementation of the Kafka
Consumer has been changed completely in 0.9.1 because there were some
corner-case issues with the exactly-once guarantees in 0.9.0.
I'll look into the issue immediately.
On Thu, Sep 10, 2015 at 4:26 PM, Gwenhael Pasqui
upcoming milestone release...
>
> On Tue, Sep 8, 2015 at 8:33 PM, Robert Metzger
> wrote:
>
>> As I said, the workaround is using the "bin/flink" tool from the command
>> line.
>> I think it should be possible to add a "student" account on the cluste
Damn. I saw this discussion too late. I think the "fork = true" is
documented here:
https://ci.apache.org/projects/flink/flink-docs-release-0.9/quickstart/scala_api_quickstart.html#alternative-build-tools-sbt
On Wed, Sep 9, 2015 at 2:46 PM, Giancarlo Pagano
wrote:
> I’ve actually found the probl
Hi,
Currently, Flink does not support automatic scaling of the YARN containers.
There are certainly plans to add this feature:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/add-some-new-api-to-the-scheduler-in-the-job-manager-tp7153p7448.html
Adding an API for manually starting a
ve experience with that but does there
> exists a possible work around?
>
>
> Am 08.09.2015 um 13:13 schrieb Robert Metzger :
>
> That's the bug: https://issues.apache.org/jira/browse/FLINK-2632
>
> On Tue, Sep 8, 2015 at 1:11 PM, Robert Metzger
> wrote:
>
>
That's the bug: https://issues.apache.org/jira/browse/FLINK-2632
On Tue, Sep 8, 2015 at 1:11 PM, Robert Metzger wrote:
> There is a bug in the web client which sets the wrong class loader when
> running the user code.
>
> On Tue, Sep 8, 2015 at 12:05 PM, Florian Heyl wrote:
se our prof is
> responsible for that.
> The students are using the flink web submission client to upload their jar
> and run it on the cluster.
>
>
> Am 08.09.2015 um 12:48 schrieb Robert Metzger :
>
> Which version of Flink are you using?
>
> Have you tried submittin
Which version of Flink are you using?
Have you tried submitting the job using the "./bin/flink run" tool?
On Tue, Sep 8, 2015 at 11:44 AM, Florian Heyl wrote:
> Dear Sir or Madam,
> My colleague and I are developing a pipeline based on Scala and Java to
> classify cancer stages. This pipeline
formation on the GC?
>
>
>
> Am 08.09.2015 um 09:34 schrieb Robert Metzger :
>
> The web interface of Flink has a tab for the TaskManagers. There, you can
> also see how much time the JVM spent on garbage collection.
> Can you check whether the number of GC calls + the time
The web interface of Flink has a tab for the TaskManagers. There, you can
also see how much time the JVM spent on garbage collection.
Can you check whether the number of GC calls + the time spent goes up after
30 minutes?
On Tue, Sep 8, 2015 at 8:37 AM, Rico Bergmann wrote:
> Hi!
>
> I also thi
Hi Jerry,
the issue occurs because Flink's Storm compatibility layer does not currently
support custom configuration parameters.
There is this JIRA which aims to add the missing feature to Flink:
https://issues.apache.org/jira/browse/FLINK-2525
Maybe (but it's unlikely) passing an empty Map in the
I think most cloud providers moved beyond Hadoop 2.2.0.
Google's Click-To-Deploy is on 2.4.1
AWS EMR is on 2.6.0
The situation for the distributions seems to be the following:
MapR 4 uses Hadoop 2.4.0 (current is MapR 5)
CDH 5.0 uses 2.3.0 (the current CDH release is 5.4)
HDP 2.0 (October 2013)
Hi Arnaud,
I think that's a bug ;)
I'll file a JIRA to fix it for the next release.
On Thu, Sep 3, 2015 at 10:26 AM, LINZ, Arnaud
wrote:
> Hi,
>
>
>
> I am wondering why, despite the fact that my java main() method runs OK
> and exits with a 0 code value, the Yarn container status set by the engl
I've filed a JIRA at INFRA:
https://issues.apache.org/jira/browse/INFRA-10239
On Wed, Sep 2, 2015 at 11:18 AM, Robert Metzger wrote:
> Hi Sachin,
>
> I also noticed that the GitHub integration is not working properly. I'll
> ask the Apache Infra team.
>
> On Wed, Sep
I'm sorry that we changed the method name between minor versions.
We'll soon put some infrastructure in place to a) mark the audience of
classes and b) ensure that public APIs are stable.
On Wed, Sep 2, 2015 at 9:04 PM, Ferenc Turi wrote:
> Ok. As I see only the method name was changed. It was a
The scale out data is the transactions.
>
> The seed data needs to be the same, shipped to ALL nodes, and then
>
> the nodes generate transactions.
>
>
> On Wed, Sep 2, 2015 at 9:21 AM, Robert Metzger
> wrote:
>
>> I'm starting a new discussion thread for the big
which will help
onboard spark/mapreduce folks.
I have prototypical code here that runs a simple job in memory,
contributions welcome,
right now there is a serialization error
https://github.com/bigpetstore/bigpetstore-flink .
On Wed, Sep 2, 2015 at 8:50 AM, Robert Metzger wrote:
> Hi Juan,
PM, Jay Vyas
wrote:
> Just running the main class is sufficient
>
> On Sep 2, 2015, at 8:59 AM, Robert Metzger wrote:
>
> Hey jay,
>
> How can I reproduce the error?
>
> On Wed, Sep 2, 2015 at 2:56 PM, jay vyas
> wrote:
>
>> We're also working on a bigp
ontributions welcome,
>
> right now there is a serialization error
> https://github.com/bigpetstore/bigpetstore-flink .
>
> On Wed, Sep 2, 2015 at 8:50 AM, Robert Metzger
> wrote:
>
>> Hi Juan,
>>
>> I think the recommendations in the Spark guide are quite good
Hi Juan,
I think the recommendations in the Spark guide are quite good, and are
similar to what I would recommend for Flink as well.
Depending on the workloads you are interested in running, you can certainly use
Flink with less than 8 GB per machine. I think you can start Flink
TaskManagers with 500
Hi Sachin,
I also noticed that the GitHub integration is not working properly. I'll
ask the Apache Infra team.
On Wed, Sep 2, 2015 at 10:20 AM, Sachin Goel
wrote:
> Hi all
> Is there some issue with travis integration? The last three pull requests
> do not have their build status on Github page
"getRuntimeContext().getUserCodeClassLoader()".
>
> Let us know if that workaround works. We'll try to get a fix for that out
> very soon!
>
> Greetings,
> Stephan
>
>
>
> On Tue, Aug 18, 2015 at 12:23 PM, Robert Metzger
> wrote:
>
>> Java's Has
Hi,
yes, the Avro Schema is not serializable.
Can you make the "_schema" field "transient" and then lazily initialize the
field when serialize()/deserialize() is called?
That way, you initialize the schema on the cluster, so there is no need to
transfer it over the network.
I think Flink's own s
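The pattern would look roughly like this (class and field names are
illustrative):

import java.io.Serializable;
import org.apache.avro.Schema;

public class AvroSerDe implements Serializable {

    // The schema definition as a String constant (Strings serialize fine).
    private static final String SCHEMA_JSON = "...";

    // Avro's Schema is not Serializable, so keep it out of serialization.
    private transient Schema schema;

    private Schema getSchema() {
        if (schema == null) {
            // Lazily re-created on the cluster the first time it is needed.
            schema = new Schema.Parser().parse(SCHEMA_JSON);
        }
        return schema;
    }

    // serialize()/deserialize() would call getSchema() instead of
    // touching the field directly.
}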
ke was to launch the job in detached mode (-yd)
> when my main function was not waiting after execution and was immediately
> ending. Sorry for my misunderstanding of this option.
>
>
>
> Best regards,
>
> Arnaud
>
>
>
> *From:* Robert Metzger [mailto:rmetz...@a
Hi,
Creating a slf4j logger like this:
private static final Logger LOG =
LoggerFactory.getLogger(PimpedKafkaSink.class);
Works for me. The messages also end up in the regular YARN logs. Output to
System.out should actually end up in the YARN logs as well (when retrieving
the logs via log aggregation).
Regards,
Is the log from 0.9-SNAPSHOT or 0.10-SNAPSHOT?
Can you send me (if you want privately as well) the full log of the yarn
application:
yarn logs -applicationId <applicationId>.
We need to find out why the TaskManagers are shutting down. That is most
likely logged in the TaskManager logs.
On Fri, Aug 28, 2015 a
t 5:58 PM, Robert Metzger
> wrote:
> > Therefore, my change will include a configuration option to set a custom
> > location for the file.
> >
> > On Wed, Aug 26, 2015 at 5:55 PM, Maximilian Michels
> wrote:
> >>
> >> The only problem with writ
system has
> been restarted anyways, this can actually be a problem if you want to
> resume a YARN cluster after you have restarted your system.
>
> On Wed, Aug 26, 2015 at 3:34 PM, Robert Metzger
> wrote:
> > Yep. I think the start-*.sh scripts are also writing the PID to
rmation.
>
> On Wed, Aug 26, 2015 at 3:25 PM, Robert Metzger
> wrote:
> > Great ;)
> >
> > Not yet, but you are the second user to request this.
> > I think I'll put the file somewhere else now.
> >
> > On Wed, Aug 26, 2015 at 3:19 PM, LINZ, Arnaud
not really nice to have an application write to the configuration
> dir; it's often a root-protected directory in usr/lib/flink. Is there a
> parameter to put that file elsewhere?
>
>
>
>
>
> *From:* Robert Metzger [mailto:rmetz...@apache.org]
> *Sent:* Wednesday, August 26, 20
Hi Arnaud,
usually, you don't have to specify the JobManager address manually
with the -m argument, because it is read from the
conf/.yarn-session.properties file.
Give me a few minutes to reproduce the issue.
On Wed, Aug 26, 2015 at 2:39 PM, LINZ, Arnaud
wrote:
> Hi,
> Using la
Hi Flavio,
can you share a minimal version of your program to reproduce the issue?
On Wed, Aug 26, 2015 at 10:36 AM, Flavio Pompermaier
wrote:
> I'm running my job from my Eclipse and I don't register any Kryo class in
> the env.
>
> On Wed, Aug 26, 2015 at 10:34 AM, Stephan Ewen wrote:
>
>> H
);
>
> }
>
> }
>
> }
>
> *loggedIn* being static to the class, and *alinz* having all the proper
> rights.
>
>
>
> From what I’ve seen on google, spark and hive/oozie ran into the same
> error and somewhat corrected that, but I don’t
Hi Arnaud,
I suspect the "HdfsTools" are something internal to your company?
Are they doing any Kerberos-related operations?
Is the local cluster mode also reading files from the secured HDFS cluster?
Flink takes care of sending the authentication tokens from the client
to the JobManager a
I'm still working on writing a test case for reproducing the issue.
Which Flink version are you using?
If you are using 0.10-SNAPSHOT, which exact commit?
On Tue, Aug 18, 2015 at 2:09 PM, Robert Metzger wrote:
> I created a JIRA for the issue:
> https://issues.apache.org/jira/browse
Exactly, Timo opened the thread.
On Tue, Aug 18, 2015 at 2:04 PM, Kristoffer Sjögren
wrote:
> Yeah, I think I found the thread already... by Timo Walther?
>
> On Tue, Aug 18, 2015 at 2:01 PM, Stephan Ewen wrote:
> > Would have been great. I had high hopes when I saw the trick with the
> > "cons
r code (needed for deserialization)
> via "getRuntimeContext().getUserCodeClassLoader()".
>
> Let us know if that workaround works. We'll try to get a fix for that out
> very soon!
>
> Greetings,
> Stephan
>
>
>
> On Tue, Aug 18, 2015 at 12:23 PM, Robe
not possible since the state is very big (a Hashtable).
>
> How would I have to do serialization into a byte array?
>
> Greets. Rico.
>
>
>
> Am 18.08.2015 um 11:44 schrieb Robert Metzger :
>
> Hi Rico,
>
> I'm pretty sure that this is a valid bug you'v
Hi Rico,
I'm pretty sure that this is a valid bug you've found, since this case is
not yet tested (afaik).
We'll fix the issue ASAP. Until then, are you able to encapsulate your
state in something that is available in Flink, for example a TupleX, or just
serialize it yourself into a byte[]?
On Tu
Hi Jay,
this is how you can register a custom Kryo serializer, yes.
Flink has this project (https://github.com/magro/kryo-serializers) as a
dependency. It contains a lot of Kryo serializers for common types. They
also added support for Guava's ImmutableMap, but the version we are
using (0.27)
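Registration itself is a one-liner on the execution config. A sketch
(assuming a kryo-serializers version that already ships the Guava
ImmutableMap serializer):

import org.apache.flink.api.java.ExecutionEnvironment;
import com.google.common.collect.ImmutableMap;
import de.javakaffee.kryoserializers.guava.ImmutableMapSerializer;

public class KryoSetup {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // Use this serializer class for ImmutableMap (and its subclasses).
        env.getConfig().addDefaultKryoSerializer(
                ImmutableMap.class, ImmutableMapSerializer.class);
        // ... build and run the program as usual ...
    }
}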
Hi Stefanos,
you can also write yourself a little script/tool that periodically
requests the following JSON from the JobManager:
http://localhost:8081/setupInfo?get=taskmanagers&_=1438972693441
It returns a JSON string like this:
{"taskmanagers":[{"path":"akka:\/\/flink\/user\/taskmanager
Hi,
how did you build the jar file?
Have you checked whether your classes are in the jar file?
On Thu, Aug 6, 2015 at 11:08 AM, Michael Huelfenhaus <
m.huelfenh...@davengo.com> wrote:
> Hello everybody
>
> I am trying to build a very simple streaming application with the nightly
> build of flink
Hi Juan,
there is a configuration option which is not documented in the 0.9
documentation:
-
env.java.opts: Set custom JVM options. This value is respected by
Flink’s start scripts and Flink’s YARN client. This can be used to set
different garbage collectors or to include remote deb
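For example, in conf/flink-conf.yaml (the GC flag here is just an
illustration):

env.java.opts: -XX:+UseG1GC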
Yes, I was running exactly that code. This is a repository containing the
files: https://github.com/rmetzger/scratch/tree/flink-sbt-master
Here is the program:
https://github.com/rmetzger/scratch/blob/flink-sbt-master/src/main/scala/org/myorg/quickstart/Job.scala
On Tue, Jul 28, 2015 at 2:01 AM, W
ing on the EMR master, then is it useful to allocate a big machine (8
> core, 30GB) on it? I thought it was the jm but it is not
>
>
>
>
>
> Il giorno 27/lug/2015, alle ore 14:56, Robert Metzger <
> rmetz...@apache.org> ha scritto:
>
> Hi Michele,
>
>
ive node in the resource
>> manager…sounds strange to me: ganglia shows 6 nodes and 1 is always offload
>>
>> the total amount of memory is 112.5GB that is actually 22.5 for each of
>> the 5
>>
>> now i am a little lost because I thought I was running 5 node for
Thank you for posting the full SBT files.
I now understand why you exclude the Kafka dependency from Flink. SBT cannot
read Maven properties that are only defined in profiles.
I will fix the issue for Flink 0.10 (
https://issues.apache.org/jira/browse/FLINK-2408)
I was not able to reproduce
Hi Michele,
configuring a YARN cluster to allocate all available resources as well as
possible is sometimes tricky, that is true.
We are aware of these problems and there are actually the following two
JIRAs for this:
https://issues.apache.org/jira/browse/FLINK-937 (Change the YARN Client to
alloc
Can you share your full sbt build file with me?
I'm trying to reproduce the issue, but I have never used sbt before.
I was able to configure the assembly plugin, but the produced fat jar
didn't contain the zkclient. Maybe your full sbt build file would help me
to identify the issue faster.
Let me
Hi,
I don't know anybody who has reported something like this before on
our lists.
Since you don't know the types beforehand, the mapPartition approach sounds
good.
On Fri, Jul 10, 2015 at 5:02 PM, Flavio Pompermaier
wrote:
> Hi to all,
>
> I have a Flink job that produce json objects that I'd
Hey,
can you measure how fast jmeter is able to push data into Kafka? Maybe that
is already the bottleneck.
Flink should be able to read from Kafka with 100k+ elements/second on a
single node.
On Mon, Jun 29, 2015 at 11:10 AM, Stephan Ewen wrote:
> Hi Hawin!
>
> The performance tuning of Kafka
Hi Paul,
I don't think you need 10 GB of heap space for the JobManager. Usually 1 GB
are sufficient.
Since you have 3 nodes, I would start Flink with 3 task managers.
I think you can also launch such a cluster:
./flink-0.9.0/bin/yarn-session.sh -n 3 -jm 1024 -tm 13000
Regarding the memory you are
Hi Arnaud,
when using the PersistentKafkaSource, you can always cancel the job in the
web interface and start it again. We will continue reading from Kafka where
you left off.
You can probably also send the cancel request manually to the web
interface, to that URL:
http://localhost:8081/jobsInfo?g
> just ignore my previous question.
> My files started with underscore and I just found out that FileInputFormat
> does filter for underscores in acceptFile().
>
> Cheers,
> Ronny
>
> Am 01.07.2015 um 11:35 schrieb Robert Metzger :
>
> Hi Ronny,
>
> check out thi
Hi Ronny,
check out this answer on SO:
http://stackoverflow.com/questions/30599616/create-objects-from-input-files-in-apache-flink
It is a similar use case ... I guess you can get the metadata from the
input split as well.
On Wed, Jul 1, 2015 at 11:30 AM, Ronny Bräunlich
wrote:
> Hello,
>
> I w
>
> > On Jun 21, 2015, at 8:22 AM, Robert Metzger wrote:
> >
> > Okay, it seems like we have consensus on this. Who is interested in
> working on this? https://issues.apache.org/jira/browse/FLINK-2200
> >
> > On Mon, Jun 15, 2015 at 1:26 AM, Till Rohrmann
>
+1
Let's remove the FAQ from the source repo and put it on the website only.
On Thu, Jun 25, 2015 at 3:14 PM, Ufuk Celebi wrote:
>
> On 25 Jun 2015, at 14:31, Maximilian Michels wrote:
>
> > Thanks for noticing, Chiwan. I have the feeling this problem arose when
> the website was updated. The pr
rialization to happen (even if not
>> necessary), in order to catch this kind of bug before cluster deployment.
>> Is this simply not possible or is it a design choice we made for some
>> reason?
>>
>> -V.
>>
>> On 29 June 2015 at 09:53, Robert Metzger wro
>>> I will try to explain my code a bit. The *Integer[] *array is
>>> initialized in the *getVerticesDataSet()* method.
>>>
>>> *DataSet >> vertices =
>>> getVerticesDataSet(env);*
>>> *...*
Hi,
The TaskManager which is running the Sync task logs when it starts
the next iteration. I know it's not very convenient.
You can also log the time and iteration id (from the
IterationRuntimeContext) in the open() method.
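A small sketch of that (any Rich* function inside the iteration works the
same way; names are illustrative):

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class IterationLogger extends RichMapFunction<Long, Long> {

    private static final Logger LOG =
            LoggerFactory.getLogger(IterationLogger.class);

    @Override
    public void open(Configuration parameters) {
        // Inside an iteration, the runtime context is an IterationRuntimeContext.
        int superstep = getIterationRuntimeContext().getSuperstepNumber();
        LOG.info("Starting superstep {} at {}", superstep,
                System.currentTimeMillis());
    }

    @Override
    public Long map(Long value) {
        return value;
    }
}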
On Fri, Jun 26, 2015 at 9:57 AM, Pa Rö
wrote:
> hello flink com
Hi Mihail,
the NPE has been thrown from
*graphdistance.APSP$InitVerticesMapper.map(APSP.java:74)*. I guess that is
code written by you or a library you are using.
Maybe the data you are using on the cluster is different from your local
test data?
Best,
Robert
On Thu, Jun 25, 2015 at 7:41 PM, Mi
files (*-site.xml) or just one specific hadoop config file (e.g.
>> core-site.xml or the hdfs-site.xml)?
>> >
>> > On Thu, Jun 25, 2015 at 3:04 PM, Robert Metzger
>> wrote:
>> > Hi Flavio,
>> >
>> > there is a file called "conf/flink-conf
cribe it better with an example please? Why doesn't Flink
> automatically load the properties of the Hadoop conf files within the jar?
>
> On Thu, Jun 25, 2015 at 2:55 PM, Robert Metzger
> wrote:
>
>> Hi,
>>
>> Flink is not loading the Hadoop configuration from t
Hi,
Flink is not loading the Hadoop configuration from the classloader. You
have to specify the path to the Hadoop configuration in the flink
configuration "fs.hdfs.hadoopconf"
On Thu, Jun 25, 2015 at 2:50 PM, Flavio Pompermaier
wrote:
> Hi to all,
> I'm experiencing some problem in writing a f
Hey Maximilian Alber,
I don't know if you are interested in contributing to Flink, but if you
would like to, these small fixes to the documentation are really helpful
for us!
It's actually quite easy to work with the documentation locally. It is
located in the "docs/" directory of the Flink source.
Hey Hilmi,
here is a great example of how to use the Checkpointed interface:
https://github.com/StephanEwen/flink-demos/blob/master/streaming-state-machine/src/main/scala/com/dataartisans/flink/example/eventpattern/StreamingDemo.scala#L82
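The core of it, as a sketch (a counting function; the interface shape matches
the 0.9/0.10 line, names are illustrative):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.checkpoint.Checkpointed;

public class CountingMapper
        implements MapFunction<String, String>, Checkpointed<Long> {

    private long count = 0;

    @Override
    public String map(String value) {
        count++;
        return value;
    }

    @Override
    public Long snapshotState(long checkpointId, long checkpointTimestamp) {
        return count; // handed to Flink when a checkpoint is taken
    }

    @Override
    public void restoreState(Long state) {
        count = state; // handed back by Flink on recovery
    }
}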
On Wed, Jun 17, 2015 at 12:44 AM, Hilmi Yildirim wrote
You don't have to enable the logging thread.
You can also get the metrics of the JobManager via the JobManager web
frontend. There, they are also available in a JSON representation.
So if you want, you can periodically (say every 5 seconds) do an HTTP
request to get the metrics of all TMs.
On Mon, Ju
4, 2015 at 8:03 PM Robert Metzger
> wrote:
>
>> There was already a discussion regarding the two options here [1], back
>> then we had a majority for giving all modules a scala suffix.
>>
>> I'm against giving all modules a suffix because we force our users to
Hi,
the problem is that Flink is trying to parse the input data as CSV, but
there seem to be rows in the data which do not conform to the specified
schema.
On Thu, Jun 18, 2015 at 12:51 PM, hagersaleh
wrote:
> when I run the program on big data (customer 2.5GB, orders 5GB) it displays an error. Why?
>
>
> DataSou
Hi Daniel,
Are the files in HDFS?
what do you exactly mean by "`readTextFile` wants to read the file on the
JobManager" ?
The JobManager is not reading input files.
Also, Flink is assigning input splits locally (when reading from
distributed file systems). In the JobManager log you can see how man
s flink-ml,
>> flink-runtime, flink-scala, …, etc. with version variation.
>>
>> So we can reduce a number of deployed modules.
>>
>> Regards,
>> Chiwan Park
>>
>> > On Jun 13, 2015, at 9:17 AM, Robert Metzger
>> wrote:
>> >
>>
I agree that we should ship a 2.11 build of Flink if downstream projects
need that.
The only thing that we should keep in mind when doing this is that the
number of jars we're pushing to Maven will explode (but that is fine).
We currently have 46 Maven modules and we would create 4 versions of each
>> I was just wondering, is it possible to stream the talks or watch them
>> later on?
>>
>> On Mon, Jun 8, 2015 at 2:54 AM, Hawin Jiang
>> wrote:
>>
>>> Hi All
>>>
>>>
>>>
>>> As you know that Kostas Tzoumas and Robe
Great! I'm happy to hear that it worked.
On Tue, Jun 9, 2015 at 5:28 PM, hagersaleh wrote:
> I can solve problem when final Map map = new
> HashMap();
>
> very thanks
> code run in command line not any error
> public static void main(String[] args) throws Exception {
> final Map map
What exactly is the error you are getting when using the non-static field?
On Mon, Jun 8, 2015 at 2:41 PM, hagersaleh wrote:
> when I use a non-static field it displays an error,
> and the filter function does not see the map
>
>
>
>
> --
> View this message in context:
> http://apache-flink-user-mailing-list-archive.2
Hi Hilmi,
if you just want to count the number of elements, you can also use
accumulators, as described here [1].
They are much more lightweight.
So you need to make your flatMap function a RichFlatMapFunction, then call
getRuntimeContext().
Use a long accumulator to count the elements.
If the
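A minimal sketch of the accumulator approach (names are illustrative):

import org.apache.flink.api.common.accumulators.LongCounter;
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class CountingFlatMap extends RichFlatMapFunction<String, String> {

    private final LongCounter numElements = new LongCounter();

    @Override
    public void open(Configuration parameters) {
        // Register under a name; the result is reported with the job result.
        getRuntimeContext().addAccumulator("num-elements", numElements);
    }

    @Override
    public void flatMap(String value, Collector<String> out) {
        numElements.add(1L);
        out.collect(value);
    }
}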
Hi,
this guide in our documentation should get you started:
http://ci.apache.org/projects/flink/flink-docs-master/setup/cluster_setup.html
You basically have to copy Flink to all machines and put the hostnames into
the slaves file.
On Tue, Jun 2, 2015 at 4:00 PM, hagersaleh wrote:
> I run flin
Hi,
the problem is that "map" is a static field.
Can you make "map" a local (non-static) variable of the main() method? That
should resolve the issue.
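Roughly like this (types are illustrative):

import java.util.HashMap;
import java.util.Map;

public class Job {
    public static void main(String[] args) throws Exception {
        // A local variable is captured and serialized together with the
        // filter closure, instead of being looked up statically on the workers.
        final Map<String, String> map = new HashMap<String, String>();
        map.put("key", "value");
        // ... use 'map' inside the filter function as before ...
    }
}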
On Sun, Jun 7, 2015 at 2:57 PM, hagersaleh wrote:
> when I return a value from a LinkedList or Map and use it in a filter function, it
> displays an error when run
2015 at 4:59 PM, Robert Metzger wrote:
> Hi Bill,
>
> the Scala Shell is a very recent contribution to our project. I have to
> admit that I didn't test it yet.
> But I'm also unable to find the script in the "bin" directory. There seems
> to be something wrong
Hi Bill,
the Scala Shell is a very recent contribution to our project. I have to
admit that I didn't test it yet.
But I'm also unable to find the script in the "bin" directory. There seems
to be something wrong.
I'll investigate the issue...
On Sat, Jun 6, 2015 at 2:33 PM, Bill Sparks wrote:
>
sitory that I use to know which dataset I need to load. All mysql
> classes are present in the shaded jar.
> Could you explain a little bit more in detail the solution to fix this
> problem please? Sorry but I didn't understand it :(
>
> Thanks,
> Flavio
> On 5 Jun 2015 18
> I answer on behalf of Flavio. He told me the driver jar was included.
> Smells like a class-loading issue due to 'conflicting' dependencies. Is it
> possible?
>
> Saluti,
> Stefano
>
> 2015-06-05 16:24 GMT+02:00 Robert Metzger :
>
>> Hi,
>>
>> is t
Hi,
is the MySQL driver part of the jar file that you've built?
On Fri, Jun 5, 2015 at 4:11 PM, Flavio Pompermaier
wrote:
> Hi to all,
>
> I'm using a fresh build of flink-0.9-SNAPSHOT and in my flink job I set up
> a mysql connection.
> When I run the job from Eclipse everything is fine,
> whi
ion
> exception. But the open method is not getting called.
>
> On Fri, Jun 5, 2015 at 1:58 PM, Robert Metzger
> wrote:
>
>> Hi,
>>
>> I guess you have a user function with a field for the scripting engine.
>> Can you change your user function into a Rich* function,
Hi,
I guess you have a user function with a field for the scripting engine.
Can you change your user function into a Rich* function, initialize the
scripting engine in the open() method and make the field transient?
That should resolve it.
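Sketched out (engine name and function signature are illustrative):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class ScriptedMapper extends RichMapFunction<String, Object> {

    // transient: never serialized and shipped; re-created per worker in open().
    private transient ScriptEngine engine;

    @Override
    public void open(Configuration parameters) {
        engine = new ScriptEngineManager().getEngineByName("javascript");
    }

    @Override
    public Object map(String expression) throws Exception {
        return engine.eval(expression);
    }
}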
On Fri, Jun 5, 2015 at 10:25 AM, Ashutosh Kumar
wrote:
Yes, I've got this message.
On Thu, Jun 4, 2015 at 7:42 PM, Hawin Jiang wrote:
> Hi Admin
>
> Please let me know if you received my email or not.
> Thanks.
>
>
>
> Best regards
> Hawin Jiang
>
> On Thu, Jun 4, 2015 at 10:26 AM, wrote:
>
>> Hi! This is the ezmlm program. I'm managing the
>>
to submit job 2f46ef5dff4ecf5552b3477ed1c6f4b9 (KMeans Flink)
>>>>> at org.apache.flink.runtime.jobmanager.JobManager.org
>>>>> $apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:595)
>>>>> at
>>>>> org.apache.flink.
e.flink.core.fs.local.LocalFileSystem.getFileStatus(LocalFileSystem.java:106)
> at
> org.apache.flink.api.common.io.FileInputFormat.createInputSplits(FileInputFormat.java:390)
> at
> org.apache.flink.api.common.io.FileInputFormat.createInputSplits(FileInputFormat.java:51)
> at
> org.apa
rn
> bash-4.1$ hadoop fs -chmod 777
> -chmod: Not enough arguments: expected 2 but got 1
> Usage: hadoop fs [generic options] -chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...
> bash-4.1$
>
> you understand?
>
> 2015-06-04 17:04 GMT+02:00 Robert Metzger :
>
>> It looks like th
> /user/cloudera/outputs/seed-1 does not exist or the user running Flink
> ('yarn') has insufficient permissions to access it.
> at
> org.apache.flink.core.fs.local.LocalFileSystem.getFileStatus(LocalFileSystem.java:106)
> at
> org.apache.flink.api.common.io.FileIn
okay, now it runs on my Hadoop.
> how can I start my Flink job? and where must the jar file be saved, in HDFS
> or as a local file?
>
> 2015-06-04 16:31 GMT+02:00 Robert Metzger :
>
>> Yes, you have to run these commands in the command line of the Cloudera
>> VM.
>>
Yes, you have to run these commands in the command line of the Cloudera VM.
On Thu, Jun 4, 2015 at 4:28 PM, Pa Rö
wrote:
> you mean running this command in a terminal/shell and not defining a Hue job?
>
> 2015-06-04 16:25 GMT+02:00 Robert Metzger :
>
>> It should be certainly possible
> for that I use Cloudera Live. Maybe there is another way to test Flink on
> a local cluster VM?
>
> 2015-06-04 16:12 GMT+02:00 Robert Metzger :
>
>> Hi Paul,
>>
>> why did running Flink from the regular scripts not work for you?
>>
>> I'm not an ex
> now I want to run my app on the Cloudera Live VM (single node).
> how can I define my Flink job with Hue?
> I tried to run the Flink script from HDFS; it does not work.
>
> best regards,
> paul
>
> 2015-06-02 14:50 GMT+02:00 Robert Metzger :
>
>> I would recommend using HD